2026-03-25 01:41:30.945391 | Job console starting
2026-03-25 01:41:30.955167 | Updating git repos
2026-03-25 01:41:31.018294 | Cloning repos into workspace
2026-03-25 01:41:31.285903 | Restoring repo states
2026-03-25 01:41:31.303641 | Merging changes
2026-03-25 01:41:31.303669 | Checking out repos
2026-03-25 01:41:31.564157 | Preparing playbooks
2026-03-25 01:41:32.227010 | Running Ansible setup
2026-03-25 01:41:36.873762 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2026-03-25 01:41:37.630738 |
2026-03-25 01:41:37.630953 | PLAY [Base pre]
2026-03-25 01:41:37.648100 |
2026-03-25 01:41:37.648232 | TASK [Setup log path fact]
2026-03-25 01:41:37.668422 | orchestrator | ok
2026-03-25 01:41:37.685918 |
2026-03-25 01:41:37.686059 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-03-25 01:41:37.732892 | orchestrator | ok
2026-03-25 01:41:37.748252 |
2026-03-25 01:41:37.748377 | TASK [emit-job-header : Print job information]
2026-03-25 01:41:37.804014 | # Job Information
2026-03-25 01:41:37.804303 | Ansible Version: 2.16.14
2026-03-25 01:41:37.804361 | Job: testbed-upgrade-stable-rc-ubuntu-24.04
2026-03-25 01:41:37.804415 | Pipeline: periodic-midnight
2026-03-25 01:41:37.804454 | Executor: 521e9411259a
2026-03-25 01:41:37.804488 | Triggered by: https://github.com/osism/testbed
2026-03-25 01:41:37.804523 | Event ID: 5d54fc4521a34f22930df354862d1c17
2026-03-25 01:41:37.813581 |
2026-03-25 01:41:37.813761 | LOOP [emit-job-header : Print node information]
2026-03-25 01:41:37.953350 | orchestrator | ok:
2026-03-25 01:41:37.953642 | orchestrator | # Node Information
2026-03-25 01:41:37.953719 | orchestrator | Inventory Hostname: orchestrator
2026-03-25 01:41:37.953764 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2026-03-25 01:41:37.953803 | orchestrator | Username: zuul-testbed03
2026-03-25 01:41:37.953837 | orchestrator | Distro: Debian 12.13
2026-03-25 01:41:37.953877 | orchestrator | Provider: static-testbed
2026-03-25 01:41:37.953913 | orchestrator | Region:
2026-03-25 01:41:37.953950 | orchestrator | Label: testbed-orchestrator
2026-03-25 01:41:37.953983 | orchestrator | Product Name: OpenStack Nova
2026-03-25 01:41:37.954016 | orchestrator | Interface IP: 81.163.193.140
2026-03-25 01:41:37.982769 |
2026-03-25 01:41:37.982992 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2026-03-25 01:41:38.474372 | orchestrator -> localhost | changed
2026-03-25 01:41:38.489383 |
2026-03-25 01:41:38.489535 | TASK [log-inventory : Copy ansible inventory to logs dir]
2026-03-25 01:41:39.571977 | orchestrator -> localhost | changed
2026-03-25 01:41:39.595648 |
2026-03-25 01:41:39.595801 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2026-03-25 01:41:39.885521 | orchestrator -> localhost | ok
2026-03-25 01:41:39.900743 |
2026-03-25 01:41:39.900930 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2026-03-25 01:41:39.938327 | orchestrator | ok
2026-03-25 01:41:39.959013 | orchestrator | included: /var/lib/zuul/builds/ec9043456e244bf38728792be429bfda/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2026-03-25 01:41:39.967307 |
2026-03-25 01:41:39.967411 | TASK [add-build-sshkey : Create Temp SSH key]
2026-03-25 01:41:41.281098 | orchestrator -> localhost | Generating public/private rsa key pair.
2026-03-25 01:41:41.283216 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/ec9043456e244bf38728792be429bfda/work/ec9043456e244bf38728792be429bfda_id_rsa
2026-03-25 01:41:41.283350 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/ec9043456e244bf38728792be429bfda/work/ec9043456e244bf38728792be429bfda_id_rsa.pub
2026-03-25 01:41:41.283429 | orchestrator -> localhost | The key fingerprint is:
2026-03-25 01:41:41.283498 | orchestrator -> localhost | SHA256:9omDrS2WxPd/KfMPh8drpEwkxrZ5ZRMIrSzyC/eXPyM zuul-build-sshkey
2026-03-25 01:41:41.283563 | orchestrator -> localhost | The key's randomart image is:
2026-03-25 01:41:41.283656 | orchestrator -> localhost | +---[RSA 3072]----+
2026-03-25 01:41:41.283810 | orchestrator -> localhost | | .o . |
2026-03-25 01:41:41.283871 | orchestrator -> localhost | | o . |
2026-03-25 01:41:41.283925 | orchestrator -> localhost | | o . .|
2026-03-25 01:41:41.283976 | orchestrator -> localhost | | . . B . + |
2026-03-25 01:41:41.284030 | orchestrator -> localhost | | . S + = o .|
2026-03-25 01:41:41.284087 | orchestrator -> localhost | | o=.= + o + |
2026-03-25 01:41:41.284142 | orchestrator -> localhost | | ..o*.= + *.+|
2026-03-25 01:41:41.284193 | orchestrator -> localhost | | +o o..oEoB.|
2026-03-25 01:41:41.284248 | orchestrator -> localhost | | .... .o==++|
2026-03-25 01:41:41.284302 | orchestrator -> localhost | +----[SHA256]-----+
2026-03-25 01:41:41.284446 | orchestrator -> localhost | ok: Runtime: 0:00:00.790235
2026-03-25 01:41:41.299585 |
2026-03-25 01:41:41.299958 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2026-03-25 01:41:41.336522 | orchestrator | ok
2026-03-25 01:41:41.350183 | orchestrator | included: /var/lib/zuul/builds/ec9043456e244bf38728792be429bfda/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2026-03-25 01:41:41.359582 |
2026-03-25 01:41:41.359705 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2026-03-25 01:41:41.383962 | orchestrator | skipping: Conditional result was False
2026-03-25 01:41:41.394939 |
2026-03-25 01:41:41.395066 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2026-03-25 01:41:42.023464 | orchestrator | changed
2026-03-25 01:41:42.033131 |
2026-03-25 01:41:42.033266 | TASK [add-build-sshkey : Make sure user has a .ssh]
2026-03-25 01:41:42.367513 | orchestrator | ok
2026-03-25 01:41:42.376973 |
2026-03-25 01:41:42.377121 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2026-03-25 01:41:42.834474 | orchestrator | ok
2026-03-25 01:41:42.843705 |
2026-03-25 01:41:42.843853 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2026-03-25 01:41:43.299090 | orchestrator | ok
2026-03-25 01:41:43.308207 |
2026-03-25 01:41:43.308356 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2026-03-25 01:41:43.334626 | orchestrator | skipping: Conditional result was False
2026-03-25 01:41:43.344479 |
2026-03-25 01:41:43.344637 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2026-03-25 01:41:43.847071 | orchestrator -> localhost | changed
2026-03-25 01:41:43.873869 |
2026-03-25 01:41:43.874031 | TASK [add-build-sshkey : Add back temp key]
2026-03-25 01:41:44.242290 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/ec9043456e244bf38728792be429bfda/work/ec9043456e244bf38728792be429bfda_id_rsa (zuul-build-sshkey)
2026-03-25 01:41:44.242548 | orchestrator -> localhost | ok: Runtime: 0:00:00.018674
2026-03-25 01:41:44.250435 |
2026-03-25 01:41:44.250545 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2026-03-25 01:41:44.701111 | orchestrator | ok
2026-03-25 01:41:44.715068 |
2026-03-25 01:41:44.715276 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2026-03-25 01:41:44.741764 | orchestrator | skipping: Conditional result was False
2026-03-25 01:41:44.799319 |
2026-03-25 01:41:44.799448 | TASK [start-zuul-console : Start zuul_console daemon.]
2026-03-25 01:41:45.231832 | orchestrator | ok
2026-03-25 01:41:45.246290 |
2026-03-25 01:41:45.246432 | TASK [validate-host : Define zuul_info_dir fact]
2026-03-25 01:41:45.293510 | orchestrator | ok
2026-03-25 01:41:45.303851 |
2026-03-25 01:41:45.303980 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2026-03-25 01:41:45.622146 | orchestrator -> localhost | ok
2026-03-25 01:41:45.630101 |
2026-03-25 01:41:45.630214 | TASK [validate-host : Collect information about the host]
2026-03-25 01:41:46.907301 | orchestrator | ok
2026-03-25 01:41:46.921882 |
2026-03-25 01:41:46.922015 | TASK [validate-host : Sanitize hostname]
2026-03-25 01:41:46.986490 | orchestrator | ok
2026-03-25 01:41:46.995241 |
2026-03-25 01:41:46.995390 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2026-03-25 01:41:47.592659 | orchestrator -> localhost | changed
2026-03-25 01:41:47.605623 |
2026-03-25 01:41:47.605833 | TASK [validate-host : Collect information about zuul worker]
2026-03-25 01:41:48.067945 | orchestrator | ok
2026-03-25 01:41:48.076023 |
2026-03-25 01:41:48.076162 | TASK [validate-host : Write out all zuul information for each host]
2026-03-25 01:41:48.626518 | orchestrator -> localhost | changed
2026-03-25 01:41:48.638071 |
2026-03-25 01:41:48.638188 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2026-03-25 01:41:48.946506 | orchestrator | ok
2026-03-25 01:41:48.954969 |
2026-03-25 01:41:48.955104 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2026-03-25 01:42:15.422745 | orchestrator | changed:
2026-03-25 01:42:15.423065 | orchestrator | .d..t...... src/
2026-03-25 01:42:15.423119 | orchestrator | .d..t...... src/github.com/
2026-03-25 01:42:15.423156 | orchestrator | .d..t...... src/github.com/osism/
2026-03-25 01:42:15.423187 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2026-03-25 01:42:15.423217 | orchestrator | RedHat.yml
2026-03-25 01:42:15.440669 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2026-03-25 01:42:15.440699 | orchestrator | RedHat.yml
2026-03-25 01:42:15.440753 | orchestrator | = 2.2.0"...
2026-03-25 01:42:25.055473 | orchestrator | - Finding latest version of hashicorp/null...
2026-03-25 01:42:25.071693 | orchestrator | - Finding terraform-provider-openstack/openstack versions matching ">= 1.53.0"...
2026-03-25 01:42:25.572590 | orchestrator | - Installing hashicorp/null v3.2.4...
2026-03-25 01:42:26.223826 | orchestrator | - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2026-03-25 01:42:26.767087 | orchestrator | - Installing terraform-provider-openstack/openstack v3.4.0...
2026-03-25 01:42:27.728298 | orchestrator | - Installed terraform-provider-openstack/openstack v3.4.0 (signed, key ID 4F80527A391BEFD2)
2026-03-25 01:42:28.133464 | orchestrator | - Installing hashicorp/local v2.7.0...
2026-03-25 01:42:28.880928 | orchestrator | - Installed hashicorp/local v2.7.0 (signed, key ID 0C0AF313E5FD9F80)
2026-03-25 01:42:28.881017 | orchestrator |
2026-03-25 01:42:28.881024 | orchestrator | Providers are signed by their developers.
2026-03-25 01:42:28.881030 | orchestrator | If you'd like to know more about provider signing, you can read about it here:
2026-03-25 01:42:28.881042 | orchestrator | https://opentofu.org/docs/cli/plugins/signing/
2026-03-25 01:42:28.881079 | orchestrator |
2026-03-25 01:42:28.881085 | orchestrator | OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2026-03-25 01:42:28.881090 | orchestrator | selections it made above. Include this file in your version control repository
2026-03-25 01:42:28.881104 | orchestrator | so that OpenTofu can guarantee to make the same selections by default when
2026-03-25 01:42:28.881115 | orchestrator | you run "tofu init" in the future.
2026-03-25 01:42:28.881538 | orchestrator |
2026-03-25 01:42:28.881594 | orchestrator | OpenTofu has been successfully initialized!
2026-03-25 01:42:28.881626 | orchestrator |
2026-03-25 01:42:28.881634 | orchestrator | You may now begin working with OpenTofu. Try running "tofu plan" to see
2026-03-25 01:42:28.881641 | orchestrator | any changes that are required for your infrastructure. All OpenTofu commands
2026-03-25 01:42:28.881647 | orchestrator | should now work.
2026-03-25 01:42:28.881653 | orchestrator |
2026-03-25 01:42:28.881659 | orchestrator | If you ever set or change modules or backend configuration for OpenTofu,
2026-03-25 01:42:28.881665 | orchestrator | rerun this command to reinitialize your working directory. If you forget, other
2026-03-25 01:42:28.881681 | orchestrator | commands will detect it and remind you to do so if necessary.
2026-03-25 01:42:29.804236 | orchestrator | Created and switched to workspace "ci"!
2026-03-25 01:42:29.804416 | orchestrator |
2026-03-25 01:42:29.804446 | orchestrator | You're now on a new, empty workspace. Workspaces isolate their state,
2026-03-25 01:42:29.804461 | orchestrator | so if you run "tofu plan" OpenTofu will not see any existing state
2026-03-25 01:42:29.804473 | orchestrator | for this configuration.
2026-03-25 01:42:29.957221 | orchestrator | ci.auto.tfvars
2026-03-25 01:42:30.860652 | orchestrator | default_custom.tf
2026-03-25 01:42:33.013930 | orchestrator | data.openstack_networking_network_v2.public: Reading...
2026-03-25 01:42:33.565469 | orchestrator | data.openstack_networking_network_v2.public: Read complete after 1s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2026-03-25 01:42:33.807537 | orchestrator |
2026-03-25 01:42:33.807614 | orchestrator | OpenTofu used the selected providers to generate the following execution
2026-03-25 01:42:33.807627 | orchestrator | plan. Resource actions are indicated with the following symbols:
2026-03-25 01:42:33.807635 | orchestrator | + create
2026-03-25 01:42:33.807652 | orchestrator | <= read (data resources)
2026-03-25 01:42:33.807660 | orchestrator |
2026-03-25 01:42:33.807667 | orchestrator | OpenTofu will perform the following actions:
2026-03-25 01:42:33.807675 | orchestrator |
2026-03-25 01:42:33.807682 | orchestrator | # data.openstack_images_image_v2.image will be read during apply
2026-03-25 01:42:33.807689 | orchestrator | # (config refers to values not yet known)
2026-03-25 01:42:33.807696 | orchestrator | <= data "openstack_images_image_v2" "image" {
2026-03-25 01:42:33.807703 | orchestrator | + checksum = (known after apply)
2026-03-25 01:42:33.807710 | orchestrator | + created_at = (known after apply)
2026-03-25 01:42:33.807717 | orchestrator | + file = (known after apply)
2026-03-25 01:42:33.807724 | orchestrator | + id = (known after apply)
2026-03-25 01:42:33.807754 | orchestrator | + metadata = (known after apply)
2026-03-25 01:42:33.807761 | orchestrator | + min_disk_gb = (known after apply)
2026-03-25 01:42:33.807768 | orchestrator | + min_ram_mb = (known after apply)
2026-03-25 01:42:33.807775 | orchestrator | + most_recent = true
2026-03-25 01:42:33.807782 | orchestrator | + name = (known after apply)
2026-03-25 01:42:33.807789 | orchestrator | + protected = (known after apply)
2026-03-25 01:42:33.807796 | orchestrator | + region = (known after apply)
2026-03-25 01:42:33.807805 | orchestrator | + schema = (known after apply)
2026-03-25 01:42:33.807812 | orchestrator | + size_bytes = (known after apply)
2026-03-25 01:42:33.807819 | orchestrator | + tags = (known after apply)
2026-03-25 01:42:33.807825 | orchestrator | + updated_at = (known after apply)
2026-03-25 01:42:33.807832 | orchestrator | }
2026-03-25 01:42:33.807842 | orchestrator |
2026-03-25 01:42:33.807849 | orchestrator | # data.openstack_images_image_v2.image_node will be read during apply
2026-03-25 01:42:33.807857 | orchestrator | # (config refers to values not yet known)
2026-03-25 01:42:33.807864 | orchestrator | <= data "openstack_images_image_v2" "image_node" {
2026-03-25 01:42:33.807870 | orchestrator | + checksum = (known after apply)
2026-03-25 01:42:33.807896 | orchestrator | + created_at = (known after apply)
2026-03-25 01:42:33.807904 | orchestrator | + file = (known after apply)
2026-03-25 01:42:33.807910 | orchestrator | + id = (known after apply)
2026-03-25 01:42:33.807917 | orchestrator | + metadata = (known after apply)
2026-03-25 01:42:33.807924 | orchestrator | + min_disk_gb = (known after apply)
2026-03-25 01:42:33.807931 | orchestrator | + min_ram_mb = (known after apply)
2026-03-25 01:42:33.807937 | orchestrator | + most_recent = true
2026-03-25 01:42:33.807944 | orchestrator | + name = (known after apply)
2026-03-25 01:42:33.807951 | orchestrator | + protected = (known after apply)
2026-03-25 01:42:33.808049 | orchestrator | + region = (known after apply)
2026-03-25 01:42:33.808058 | orchestrator | + schema = (known after apply)
2026-03-25 01:42:33.808064 | orchestrator | + size_bytes = (known after apply)
2026-03-25 01:42:33.808071 | orchestrator | + tags = (known after apply)
2026-03-25 01:42:33.808078 | orchestrator | + updated_at = (known after apply)
2026-03-25 01:42:33.808085 | orchestrator | }
2026-03-25 01:42:33.808092 | orchestrator |
2026-03-25 01:42:33.808099 | orchestrator | # local_file.MANAGER_ADDRESS will be created
2026-03-25 01:42:33.808106 | orchestrator | + resource "local_file" "MANAGER_ADDRESS" {
2026-03-25 01:42:33.808113 | orchestrator | + content = (known after apply)
2026-03-25 01:42:33.808120 | orchestrator | + content_base64sha256 = (known after apply)
2026-03-25 01:42:33.808127 | orchestrator | + content_base64sha512 = (known after apply)
2026-03-25 01:42:33.808133 | orchestrator | + content_md5 = (known after apply)
2026-03-25 01:42:33.808140 | orchestrator | + content_sha1 = (known after apply)
2026-03-25 01:42:33.808147 | orchestrator | + content_sha256 = (known after apply)
2026-03-25 01:42:33.808154 | orchestrator | + content_sha512 = (known after apply)
2026-03-25 01:42:33.808160 | orchestrator | + directory_permission = "0777"
2026-03-25 01:42:33.808167 | orchestrator | + file_permission = "0644"
2026-03-25 01:42:33.808174 | orchestrator | + filename = ".MANAGER_ADDRESS.ci"
2026-03-25 01:42:33.808181 | orchestrator | + id = (known after apply)
2026-03-25 01:42:33.808187 | orchestrator | }
2026-03-25 01:42:33.808194 | orchestrator |
2026-03-25 01:42:33.808201 | orchestrator | # local_file.id_rsa_pub will be created
2026-03-25 01:42:33.808208 | orchestrator | + resource "local_file" "id_rsa_pub" {
2026-03-25 01:42:33.808215 | orchestrator | + content = (known after apply)
2026-03-25 01:42:33.808221 | orchestrator | + content_base64sha256 = (known after apply)
2026-03-25 01:42:33.808228 | orchestrator | + content_base64sha512 = (known after apply)
2026-03-25 01:42:33.808235 | orchestrator | + content_md5 = (known after apply)
2026-03-25 01:42:33.808242 | orchestrator | + content_sha1 = (known after apply)
2026-03-25 01:42:33.808248 | orchestrator | + content_sha256 = (known after apply)
2026-03-25 01:42:33.808255 | orchestrator | + content_sha512 = (known after apply)
2026-03-25 01:42:33.808262 | orchestrator | + directory_permission = "0777"
2026-03-25 01:42:33.808268 | orchestrator | + file_permission = "0644"
2026-03-25 01:42:33.808282 | orchestrator | + filename = ".id_rsa.ci.pub"
2026-03-25 01:42:33.808289 | orchestrator | + id = (known after apply)
2026-03-25 01:42:33.808296 | orchestrator | }
2026-03-25 01:42:33.808302 | orchestrator |
2026-03-25 01:42:33.808316 | orchestrator | # local_file.inventory will be created
2026-03-25 01:42:33.808323 | orchestrator | + resource "local_file" "inventory" {
2026-03-25 01:42:33.808330 | orchestrator | + content = (known after apply)
2026-03-25 01:42:33.808337 | orchestrator | + content_base64sha256 = (known after apply)
2026-03-25 01:42:33.808344 | orchestrator | + content_base64sha512 = (known after apply)
2026-03-25 01:42:33.808350 | orchestrator | + content_md5 = (known after apply)
2026-03-25 01:42:33.808357 | orchestrator | + content_sha1 = (known after apply)
2026-03-25 01:42:33.808364 | orchestrator | + content_sha256 = (known after apply)
2026-03-25 01:42:33.808371 | orchestrator | + content_sha512 = (known after apply)
2026-03-25 01:42:33.808378 | orchestrator | + directory_permission = "0777"
2026-03-25 01:42:33.808384 | orchestrator | + file_permission = "0644"
2026-03-25 01:42:33.808391 | orchestrator | + filename = "inventory.ci"
2026-03-25 01:42:33.808398 | orchestrator | + id = (known after apply)
2026-03-25 01:42:33.808404 | orchestrator | }
2026-03-25 01:42:33.808415 | orchestrator |
2026-03-25 01:42:33.808422 | orchestrator | # local_sensitive_file.id_rsa will be created
2026-03-25 01:42:33.808429 | orchestrator | + resource "local_sensitive_file" "id_rsa" {
2026-03-25 01:42:33.808436 | orchestrator | + content = (sensitive value)
2026-03-25 01:42:33.808443 | orchestrator | + content_base64sha256 = (known after apply)
2026-03-25 01:42:33.808449 | orchestrator | + content_base64sha512 = (known after apply)
2026-03-25 01:42:33.808456 | orchestrator | + content_md5 = (known after apply)
2026-03-25 01:42:33.808463 | orchestrator | + content_sha1 = (known after apply)
2026-03-25 01:42:33.808470 | orchestrator | + content_sha256 = (known after apply)
2026-03-25 01:42:33.808476 | orchestrator | + content_sha512 = (known after apply)
2026-03-25 01:42:33.808483 | orchestrator | + directory_permission = "0700"
2026-03-25 01:42:33.808490 | orchestrator | + file_permission = "0600"
2026-03-25 01:42:33.808496 | orchestrator | + filename = ".id_rsa.ci"
2026-03-25 01:42:33.808503 | orchestrator | + id = (known after apply)
2026-03-25 01:42:33.808510 | orchestrator | }
2026-03-25 01:42:33.808517 | orchestrator |
2026-03-25 01:42:33.808523 | orchestrator | # null_resource.node_semaphore will be created
2026-03-25 01:42:33.808530 | orchestrator | + resource "null_resource" "node_semaphore" {
2026-03-25 01:42:33.808537 | orchestrator | + id = (known after apply)
2026-03-25 01:42:33.808544 | orchestrator | }
2026-03-25 01:42:33.808550 | orchestrator |
2026-03-25 01:42:33.808557 | orchestrator | # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2026-03-25 01:42:33.808564 | orchestrator | + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2026-03-25 01:42:33.808571 | orchestrator | + attachment = (known after apply)
2026-03-25 01:42:33.808578 | orchestrator | + availability_zone = "nova"
2026-03-25 01:42:33.808585 | orchestrator | + id = (known after apply)
2026-03-25 01:42:33.808591 | orchestrator | + image_id = (known after apply)
2026-03-25 01:42:33.808598 | orchestrator | + metadata = (known after apply)
2026-03-25 01:42:33.808605 | orchestrator | + name = "testbed-volume-manager-base"
2026-03-25 01:42:33.808612 | orchestrator | + region = (known after apply)
2026-03-25 01:42:33.808618 | orchestrator | + size = 80
2026-03-25 01:42:33.808625 | orchestrator | + volume_retype_policy = "never"
2026-03-25 01:42:33.808632 | orchestrator | + volume_type = "ssd"
2026-03-25 01:42:33.808639 | orchestrator | }
2026-03-25 01:42:33.808645 | orchestrator |
2026-03-25 01:42:33.808652 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2026-03-25 01:42:33.808659 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-25 01:42:33.808666 | orchestrator | + attachment = (known after apply)
2026-03-25 01:42:33.808673 | orchestrator | + availability_zone = "nova"
2026-03-25 01:42:33.808679 | orchestrator | + id = (known after apply)
2026-03-25 01:42:33.808695 | orchestrator | + image_id = (known after apply)
2026-03-25 01:42:33.808702 | orchestrator | + metadata = (known after apply)
2026-03-25 01:42:33.808709 | orchestrator | + name = "testbed-volume-0-node-base"
2026-03-25 01:42:33.808715 | orchestrator | + region = (known after apply)
2026-03-25 01:42:33.808722 | orchestrator | + size = 80
2026-03-25 01:42:33.808729 | orchestrator | + volume_retype_policy = "never"
2026-03-25 01:42:33.808736 | orchestrator | + volume_type = "ssd"
2026-03-25 01:42:33.808742 | orchestrator | }
2026-03-25 01:42:33.808749 | orchestrator |
2026-03-25 01:42:33.808756 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2026-03-25 01:42:33.808762 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-25 01:42:33.808769 | orchestrator | + attachment = (known after apply)
2026-03-25 01:42:33.808776 | orchestrator | + availability_zone = "nova"
2026-03-25 01:42:33.808782 | orchestrator | + id = (known after apply)
2026-03-25 01:42:33.808789 | orchestrator | + image_id = (known after apply)
2026-03-25 01:42:33.808796 | orchestrator | + metadata = (known after apply)
2026-03-25 01:42:33.808802 | orchestrator | + name = "testbed-volume-1-node-base"
2026-03-25 01:42:33.808809 | orchestrator | + region = (known after apply)
2026-03-25 01:42:33.808816 | orchestrator | + size = 80
2026-03-25 01:42:33.808823 | orchestrator | + volume_retype_policy = "never"
2026-03-25 01:42:33.808829 | orchestrator | + volume_type = "ssd"
2026-03-25 01:42:33.808836 | orchestrator | }
2026-03-25 01:42:33.808843 | orchestrator |
2026-03-25 01:42:33.808850 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2026-03-25 01:42:33.808856 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-25 01:42:33.808863 | orchestrator | + attachment = (known after apply)
2026-03-25 01:42:33.808870 | orchestrator | + availability_zone = "nova"
2026-03-25 01:42:33.808900 | orchestrator | + id = (known after apply)
2026-03-25 01:42:33.808908 | orchestrator | + image_id = (known after apply)
2026-03-25 01:42:33.808915 | orchestrator | + metadata = (known after apply)
2026-03-25 01:42:33.808921 | orchestrator | + name = "testbed-volume-2-node-base"
2026-03-25 01:42:33.808928 | orchestrator | + region = (known after apply)
2026-03-25 01:42:33.808935 | orchestrator | + size = 80
2026-03-25 01:42:33.808941 | orchestrator | + volume_retype_policy = "never"
2026-03-25 01:42:33.808948 | orchestrator | + volume_type = "ssd"
2026-03-25 01:42:33.808954 | orchestrator | }
2026-03-25 01:42:33.808961 | orchestrator |
2026-03-25 01:42:33.808968 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2026-03-25 01:42:33.808974 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-25 01:42:33.808981 | orchestrator | + attachment = (known after apply)
2026-03-25 01:42:33.808988 | orchestrator | + availability_zone = "nova"
2026-03-25 01:42:33.808994 | orchestrator | + id = (known after apply)
2026-03-25 01:42:33.809001 | orchestrator | + image_id = (known after apply)
2026-03-25 01:42:33.809007 | orchestrator | + metadata = (known after apply)
2026-03-25 01:42:33.809018 | orchestrator | + name = "testbed-volume-3-node-base"
2026-03-25 01:42:33.809025 | orchestrator | + region = (known after apply)
2026-03-25 01:42:33.809031 | orchestrator | + size = 80
2026-03-25 01:42:33.809038 | orchestrator | + volume_retype_policy = "never"
2026-03-25 01:42:33.809045 | orchestrator | + volume_type = "ssd"
2026-03-25 01:42:33.809051 | orchestrator | }
2026-03-25 01:42:33.809058 | orchestrator |
2026-03-25 01:42:33.809065 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2026-03-25 01:42:33.809071 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-25 01:42:33.809078 | orchestrator | + attachment = (known after apply)
2026-03-25 01:42:33.809162 | orchestrator | + availability_zone = "nova"
2026-03-25 01:42:33.809175 | orchestrator | + id = (known after apply)
2026-03-25 01:42:33.809188 | orchestrator | + image_id = (known after apply)
2026-03-25 01:42:33.809195 | orchestrator | + metadata = (known after apply)
2026-03-25 01:42:33.809202 | orchestrator | + name = "testbed-volume-4-node-base"
2026-03-25 01:42:33.809209 | orchestrator | + region = (known after apply)
2026-03-25 01:42:33.809216 | orchestrator | + size = 80
2026-03-25 01:42:33.809222 | orchestrator | + volume_retype_policy = "never"
2026-03-25 01:42:33.809229 | orchestrator | + volume_type = "ssd"
2026-03-25 01:42:33.809236 | orchestrator | }
2026-03-25 01:42:33.809242 | orchestrator |
2026-03-25 01:42:33.809249 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2026-03-25 01:42:33.809256 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-25 01:42:33.809263 | orchestrator | + attachment = (known after apply)
2026-03-25 01:42:33.809270 | orchestrator | + availability_zone = "nova"
2026-03-25 01:42:33.809277 | orchestrator | + id = (known after apply)
2026-03-25 01:42:33.809283 | orchestrator | + image_id = (known after apply)
2026-03-25 01:42:33.809290 | orchestrator | + metadata = (known after apply)
2026-03-25 01:42:33.809297 | orchestrator | + name = "testbed-volume-5-node-base"
2026-03-25 01:42:33.809303 | orchestrator | + region = (known after apply)
2026-03-25 01:42:33.809310 | orchestrator | + size = 80
2026-03-25 01:42:33.809317 | orchestrator | + volume_retype_policy = "never"
2026-03-25 01:42:33.809324 | orchestrator | + volume_type = "ssd"
2026-03-25 01:42:33.809330 | orchestrator | }
2026-03-25 01:42:33.809337 | orchestrator |
2026-03-25 01:42:33.809344 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[0] will be created
2026-03-25 01:42:33.809351 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-25 01:42:33.809358 | orchestrator | + attachment = (known after apply)
2026-03-25 01:42:33.809365 | orchestrator | + availability_zone = "nova"
2026-03-25 01:42:33.809371 | orchestrator | + id = (known after apply)
2026-03-25 01:42:33.809378 | orchestrator | + metadata = (known after apply)
2026-03-25 01:42:33.809385 | orchestrator | + name = "testbed-volume-0-node-3"
2026-03-25 01:42:33.809392 | orchestrator | + region = (known after apply)
2026-03-25 01:42:33.809399 | orchestrator | + size = 20
2026-03-25 01:42:33.809405 | orchestrator | + volume_retype_policy = "never"
2026-03-25 01:42:33.809412 | orchestrator | + volume_type = "ssd"
2026-03-25 01:42:33.809419 | orchestrator | }
2026-03-25 01:42:33.809425 | orchestrator |
2026-03-25 01:42:33.809432 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[1] will be created
2026-03-25 01:42:33.809439 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-25 01:42:33.809446 | orchestrator | + attachment = (known after apply)
2026-03-25 01:42:33.809452 | orchestrator | + availability_zone = "nova"
2026-03-25 01:42:33.809459 | orchestrator | + id = (known after apply)
2026-03-25 01:42:33.809466 | orchestrator | + metadata = (known after apply)
2026-03-25 01:42:33.809473 | orchestrator | + name = "testbed-volume-1-node-4"
2026-03-25 01:42:33.809479 | orchestrator | + region = (known after apply)
2026-03-25 01:42:33.809486 | orchestrator | + size = 20
2026-03-25 01:42:33.809493 | orchestrator | + volume_retype_policy = "never"
2026-03-25 01:42:33.809500 | orchestrator | + volume_type = "ssd"
2026-03-25 01:42:33.809506 | orchestrator | }
2026-03-25 01:42:33.809513 | orchestrator |
2026-03-25 01:42:33.809520 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[2] will be created
2026-03-25 01:42:33.809527 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-25 01:42:33.809534 | orchestrator | + attachment = (known after apply)
2026-03-25 01:42:33.809540 | orchestrator | + availability_zone = "nova"
2026-03-25 01:42:33.809547 | orchestrator | + id = (known after apply)
2026-03-25 01:42:33.809554 | orchestrator | + metadata = (known after apply)
2026-03-25 01:42:33.809560 | orchestrator | + name = "testbed-volume-2-node-5"
2026-03-25 01:42:33.809567 | orchestrator | + region = (known after apply)
2026-03-25 01:42:33.809578 | orchestrator | + size = 20
2026-03-25 01:42:33.809585 | orchestrator | + volume_retype_policy = "never"
2026-03-25 01:42:33.809591 | orchestrator | + volume_type = "ssd"
2026-03-25 01:42:33.809598 | orchestrator | }
2026-03-25 01:42:33.809605 | orchestrator |
2026-03-25 01:42:33.809612 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[3] will be created
2026-03-25 01:42:33.809619 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-25 01:42:33.809625 | orchestrator | + attachment = (known after apply)
2026-03-25 01:42:33.809632 | orchestrator | + availability_zone = "nova"
2026-03-25 01:42:33.809639 | orchestrator | + id = (known after apply)
2026-03-25 01:42:33.809645 | orchestrator | + metadata = (known after apply)
2026-03-25 01:42:33.809652 | orchestrator | + name = "testbed-volume-3-node-3"
2026-03-25 01:42:33.809659 | orchestrator | + region = (known after apply)
2026-03-25 01:42:33.809666 | orchestrator | + size = 20
2026-03-25 01:42:33.809672 | orchestrator | + volume_retype_policy = "never"
2026-03-25 01:42:33.809679 | orchestrator | + volume_type = "ssd"
2026-03-25 01:42:33.809686 | orchestrator | }
2026-03-25 01:42:33.809693 | orchestrator |
2026-03-25 01:42:33.809699 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[4] will be created
2026-03-25 01:42:33.809706 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-25 01:42:33.809713 | orchestrator | + attachment = (known after apply)
2026-03-25 01:42:33.809720 | orchestrator | + availability_zone = "nova"
2026-03-25 01:42:33.809726 | orchestrator | + id = (known after apply)
2026-03-25 01:42:33.809733 | orchestrator | + metadata = (known after apply)
2026-03-25 01:42:33.809740 | orchestrator | + name = "testbed-volume-4-node-4"
2026-03-25 01:42:33.809747 | orchestrator | + region = (known after apply)
2026-03-25 01:42:33.809757 | orchestrator | + size = 20
2026-03-25 01:42:33.809764 | orchestrator | + volume_retype_policy = "never"
2026-03-25 01:42:33.809771 | orchestrator | + volume_type = "ssd"
2026-03-25 01:42:33.809778 | orchestrator | }
2026-03-25 01:42:33.809786 | orchestrator |
2026-03-25 01:42:33.809796 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[5] will be created
2026-03-25 01:42:33.809806 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-25 01:42:33.809816 | orchestrator | + attachment = (known after apply)
2026-03-25 01:42:33.809823 | orchestrator | + availability_zone = "nova"
2026-03-25 01:42:33.809829 | orchestrator | + id = (known after apply)
2026-03-25 01:42:33.809836 | orchestrator | + metadata = (known after apply)
2026-03-25 01:42:33.809843 | orchestrator | + name = "testbed-volume-5-node-5"
2026-03-25 01:42:33.809853 | orchestrator | + region = (known after apply)
2026-03-25 01:42:33.809860 | orchestrator | + size = 20
2026-03-25 01:42:33.809867 | orchestrator | + volume_retype_policy = "never"
2026-03-25 01:42:33.809874 | orchestrator | + volume_type = "ssd"
2026-03-25 01:42:33.809900 | orchestrator | }
2026-03-25 01:42:33.809907 | orchestrator |
2026-03-25 01:42:33.809914 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[6] will be created
2026-03-25 01:42:33.809921 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-25 01:42:33.809927 | orchestrator | + attachment = (known after apply)
2026-03-25 01:42:33.809934 | orchestrator | + availability_zone = "nova"
2026-03-25 01:42:33.809941 | orchestrator | + id = (known after apply)
2026-03-25 01:42:33.809947 | orchestrator | + metadata = (known after apply)
2026-03-25 01:42:33.809954 | orchestrator | + name = "testbed-volume-6-node-3"
2026-03-25 01:42:33.809961 | orchestrator | + region = (known after apply)
2026-03-25 01:42:33.809967 | orchestrator | + size = 20
2026-03-25 01:42:33.809974 | orchestrator | + volume_retype_policy = "never"
2026-03-25 01:42:33.809981 | orchestrator | + volume_type = "ssd"
2026-03-25 01:42:33.809987 | orchestrator | }
2026-03-25 01:42:33.809994 | orchestrator |
2026-03-25 01:42:33.810001 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[7] will be created
2026-03-25 01:42:33.810008 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-25 01:42:33.810042 | orchestrator | + attachment = (known after apply)
2026-03-25 01:42:33.810051 | orchestrator | + availability_zone = "nova"
2026-03-25 01:42:33.810057 | orchestrator | + id = (known after apply)
2026-03-25 01:42:33.810064 | orchestrator | + metadata = (known after apply)
2026-03-25 01:42:33.810070 | orchestrator | + name = "testbed-volume-7-node-4"
2026-03-25 01:42:33.810077 | orchestrator | + region = (known after apply)
2026-03-25 01:42:33.810084 | orchestrator | + size = 20
2026-03-25 01:42:33.810091 | orchestrator | + volume_retype_policy = "never"
2026-03-25 01:42:33.810098 | orchestrator | + volume_type = "ssd"
2026-03-25 01:42:33.810104 | orchestrator | }
2026-03-25 01:42:33.810111 | orchestrator |
2026-03-25 01:42:33.810118 | orchestrator | #
openstack_blockstorage_volume_v3.node_volume[8] will be created 2026-03-25 01:42:33.810125 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" { 2026-03-25 01:42:33.810131 | orchestrator | + attachment = (known after apply) 2026-03-25 01:42:33.810138 | orchestrator | + availability_zone = "nova" 2026-03-25 01:42:33.810145 | orchestrator | + id = (known after apply) 2026-03-25 01:42:33.810151 | orchestrator | + metadata = (known after apply) 2026-03-25 01:42:33.810158 | orchestrator | + name = "testbed-volume-8-node-5" 2026-03-25 01:42:33.810165 | orchestrator | + region = (known after apply) 2026-03-25 01:42:33.810172 | orchestrator | + size = 20 2026-03-25 01:42:33.810178 | orchestrator | + volume_retype_policy = "never" 2026-03-25 01:42:33.810185 | orchestrator | + volume_type = "ssd" 2026-03-25 01:42:33.810192 | orchestrator | } 2026-03-25 01:42:33.810270 | orchestrator | 2026-03-25 01:42:33.810278 | orchestrator | # openstack_compute_instance_v2.manager_server will be created 2026-03-25 01:42:33.810285 | orchestrator | + resource "openstack_compute_instance_v2" "manager_server" { 2026-03-25 01:42:33.810292 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-25 01:42:33.810299 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-25 01:42:33.810305 | orchestrator | + all_metadata = (known after apply) 2026-03-25 01:42:33.810312 | orchestrator | + all_tags = (known after apply) 2026-03-25 01:42:33.810319 | orchestrator | + availability_zone = "nova" 2026-03-25 01:42:33.810325 | orchestrator | + config_drive = true 2026-03-25 01:42:33.810332 | orchestrator | + created = (known after apply) 2026-03-25 01:42:33.810339 | orchestrator | + flavor_id = (known after apply) 2026-03-25 01:42:33.810346 | orchestrator | + flavor_name = "OSISM-4V-16" 2026-03-25 01:42:33.810352 | orchestrator | + force_delete = false 2026-03-25 01:42:33.810359 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-25 01:42:33.810365 | 
orchestrator | + id = (known after apply) 2026-03-25 01:42:33.810372 | orchestrator | + image_id = (known after apply) 2026-03-25 01:42:33.810379 | orchestrator | + image_name = (known after apply) 2026-03-25 01:42:33.810385 | orchestrator | + key_pair = "testbed" 2026-03-25 01:42:33.810392 | orchestrator | + name = "testbed-manager" 2026-03-25 01:42:33.810399 | orchestrator | + power_state = "active" 2026-03-25 01:42:33.810405 | orchestrator | + region = (known after apply) 2026-03-25 01:42:33.810412 | orchestrator | + security_groups = (known after apply) 2026-03-25 01:42:33.810418 | orchestrator | + stop_before_destroy = false 2026-03-25 01:42:33.810425 | orchestrator | + updated = (known after apply) 2026-03-25 01:42:33.810432 | orchestrator | + user_data = (sensitive value) 2026-03-25 01:42:33.810438 | orchestrator | 2026-03-25 01:42:33.810446 | orchestrator | + block_device { 2026-03-25 01:42:33.810452 | orchestrator | + boot_index = 0 2026-03-25 01:42:33.810459 | orchestrator | + delete_on_termination = false 2026-03-25 01:42:33.810470 | orchestrator | + destination_type = "volume" 2026-03-25 01:42:33.810477 | orchestrator | + multiattach = false 2026-03-25 01:42:33.810483 | orchestrator | + source_type = "volume" 2026-03-25 01:42:33.810490 | orchestrator | + uuid = (known after apply) 2026-03-25 01:42:33.810502 | orchestrator | } 2026-03-25 01:42:33.810509 | orchestrator | 2026-03-25 01:42:33.810516 | orchestrator | + network { 2026-03-25 01:42:33.810523 | orchestrator | + access_network = false 2026-03-25 01:42:33.810530 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-25 01:42:33.810537 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-25 01:42:33.810543 | orchestrator | + mac = (known after apply) 2026-03-25 01:42:33.810550 | orchestrator | + name = (known after apply) 2026-03-25 01:42:33.810557 | orchestrator | + port = (known after apply) 2026-03-25 01:42:33.810563 | orchestrator | + uuid = (known after apply) 2026-03-25 
01:42:33.810570 | orchestrator | } 2026-03-25 01:42:33.810577 | orchestrator | } 2026-03-25 01:42:33.810584 | orchestrator | 2026-03-25 01:42:33.810590 | orchestrator | # openstack_compute_instance_v2.node_server[0] will be created 2026-03-25 01:42:33.810597 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-25 01:42:33.810604 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-25 01:42:33.810610 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-25 01:42:33.810617 | orchestrator | + all_metadata = (known after apply) 2026-03-25 01:42:33.810624 | orchestrator | + all_tags = (known after apply) 2026-03-25 01:42:33.810630 | orchestrator | + availability_zone = "nova" 2026-03-25 01:42:33.810637 | orchestrator | + config_drive = true 2026-03-25 01:42:33.810644 | orchestrator | + created = (known after apply) 2026-03-25 01:42:33.810655 | orchestrator | + flavor_id = (known after apply) 2026-03-25 01:42:33.810662 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-25 01:42:33.810669 | orchestrator | + force_delete = false 2026-03-25 01:42:33.810676 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-25 01:42:33.810683 | orchestrator | + id = (known after apply) 2026-03-25 01:42:33.810689 | orchestrator | + image_id = (known after apply) 2026-03-25 01:42:33.810696 | orchestrator | + image_name = (known after apply) 2026-03-25 01:42:33.810703 | orchestrator | + key_pair = "testbed" 2026-03-25 01:42:33.810709 | orchestrator | + name = "testbed-node-0" 2026-03-25 01:42:33.810716 | orchestrator | + power_state = "active" 2026-03-25 01:42:33.810723 | orchestrator | + region = (known after apply) 2026-03-25 01:42:33.810730 | orchestrator | + security_groups = (known after apply) 2026-03-25 01:42:33.810736 | orchestrator | + stop_before_destroy = false 2026-03-25 01:42:33.810743 | orchestrator | + updated = (known after apply) 2026-03-25 01:42:33.810750 | orchestrator | + user_data = 
"ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-25 01:42:33.810757 | orchestrator | 2026-03-25 01:42:33.810763 | orchestrator | + block_device { 2026-03-25 01:42:33.810770 | orchestrator | + boot_index = 0 2026-03-25 01:42:33.810777 | orchestrator | + delete_on_termination = false 2026-03-25 01:42:33.810783 | orchestrator | + destination_type = "volume" 2026-03-25 01:42:33.810790 | orchestrator | + multiattach = false 2026-03-25 01:42:33.810797 | orchestrator | + source_type = "volume" 2026-03-25 01:42:33.810803 | orchestrator | + uuid = (known after apply) 2026-03-25 01:42:33.810810 | orchestrator | } 2026-03-25 01:42:33.810817 | orchestrator | 2026-03-25 01:42:33.810824 | orchestrator | + network { 2026-03-25 01:42:33.810830 | orchestrator | + access_network = false 2026-03-25 01:42:33.810837 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-25 01:42:33.810844 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-25 01:42:33.810851 | orchestrator | + mac = (known after apply) 2026-03-25 01:42:33.810857 | orchestrator | + name = (known after apply) 2026-03-25 01:42:33.810864 | orchestrator | + port = (known after apply) 2026-03-25 01:42:33.810871 | orchestrator | + uuid = (known after apply) 2026-03-25 01:42:33.810918 | orchestrator | } 2026-03-25 01:42:33.810926 | orchestrator | } 2026-03-25 01:42:33.810933 | orchestrator | 2026-03-25 01:42:33.810940 | orchestrator | # openstack_compute_instance_v2.node_server[1] will be created 2026-03-25 01:42:33.810947 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-25 01:42:33.810953 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-25 01:42:33.810968 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-25 01:42:33.810975 | orchestrator | + all_metadata = (known after apply) 2026-03-25 01:42:33.810981 | orchestrator | + all_tags = (known after apply) 2026-03-25 01:42:33.810988 | orchestrator | + availability_zone = "nova" 2026-03-25 01:42:33.810994 
| orchestrator | + config_drive = true 2026-03-25 01:42:33.811001 | orchestrator | + created = (known after apply) 2026-03-25 01:42:33.811008 | orchestrator | + flavor_id = (known after apply) 2026-03-25 01:42:33.811014 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-25 01:42:33.811021 | orchestrator | + force_delete = false 2026-03-25 01:42:33.811028 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-25 01:42:33.811034 | orchestrator | + id = (known after apply) 2026-03-25 01:42:33.811041 | orchestrator | + image_id = (known after apply) 2026-03-25 01:42:33.811048 | orchestrator | + image_name = (known after apply) 2026-03-25 01:42:33.811054 | orchestrator | + key_pair = "testbed" 2026-03-25 01:42:33.811061 | orchestrator | + name = "testbed-node-1" 2026-03-25 01:42:33.811068 | orchestrator | + power_state = "active" 2026-03-25 01:42:33.811074 | orchestrator | + region = (known after apply) 2026-03-25 01:42:33.811081 | orchestrator | + security_groups = (known after apply) 2026-03-25 01:42:33.811088 | orchestrator | + stop_before_destroy = false 2026-03-25 01:42:33.811094 | orchestrator | + updated = (known after apply) 2026-03-25 01:42:33.811101 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-25 01:42:33.811108 | orchestrator | 2026-03-25 01:42:33.811115 | orchestrator | + block_device { 2026-03-25 01:42:33.811121 | orchestrator | + boot_index = 0 2026-03-25 01:42:33.811128 | orchestrator | + delete_on_termination = false 2026-03-25 01:42:33.811134 | orchestrator | + destination_type = "volume" 2026-03-25 01:42:33.811141 | orchestrator | + multiattach = false 2026-03-25 01:42:33.811148 | orchestrator | + source_type = "volume" 2026-03-25 01:42:33.811154 | orchestrator | + uuid = (known after apply) 2026-03-25 01:42:33.811161 | orchestrator | } 2026-03-25 01:42:33.811168 | orchestrator | 2026-03-25 01:42:33.811175 | orchestrator | + network { 2026-03-25 01:42:33.811181 | orchestrator | + access_network = 
false 2026-03-25 01:42:33.811188 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-25 01:42:33.811195 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-25 01:42:33.811201 | orchestrator | + mac = (known after apply) 2026-03-25 01:42:33.811208 | orchestrator | + name = (known after apply) 2026-03-25 01:42:33.811214 | orchestrator | + port = (known after apply) 2026-03-25 01:42:33.811221 | orchestrator | + uuid = (known after apply) 2026-03-25 01:42:33.811227 | orchestrator | } 2026-03-25 01:42:33.811234 | orchestrator | } 2026-03-25 01:42:33.811241 | orchestrator | 2026-03-25 01:42:33.811247 | orchestrator | # openstack_compute_instance_v2.node_server[2] will be created 2026-03-25 01:42:33.811254 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-25 01:42:33.811261 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-25 01:42:33.811267 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-25 01:42:33.811274 | orchestrator | + all_metadata = (known after apply) 2026-03-25 01:42:33.811281 | orchestrator | + all_tags = (known after apply) 2026-03-25 01:42:33.811292 | orchestrator | + availability_zone = "nova" 2026-03-25 01:42:33.811299 | orchestrator | + config_drive = true 2026-03-25 01:42:33.811372 | orchestrator | + created = (known after apply) 2026-03-25 01:42:33.811381 | orchestrator | + flavor_id = (known after apply) 2026-03-25 01:42:33.811388 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-25 01:42:33.811395 | orchestrator | + force_delete = false 2026-03-25 01:42:33.811401 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-25 01:42:33.811408 | orchestrator | + id = (known after apply) 2026-03-25 01:42:33.811415 | orchestrator | + image_id = (known after apply) 2026-03-25 01:42:33.811427 | orchestrator | + image_name = (known after apply) 2026-03-25 01:42:33.811434 | orchestrator | + key_pair = "testbed" 2026-03-25 01:42:33.811441 | orchestrator | + name = 
"testbed-node-2" 2026-03-25 01:42:33.811447 | orchestrator | + power_state = "active" 2026-03-25 01:42:33.811457 | orchestrator | + region = (known after apply) 2026-03-25 01:42:33.811464 | orchestrator | + security_groups = (known after apply) 2026-03-25 01:42:33.811470 | orchestrator | + stop_before_destroy = false 2026-03-25 01:42:33.811476 | orchestrator | + updated = (known after apply) 2026-03-25 01:42:33.811483 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-25 01:42:33.811489 | orchestrator | 2026-03-25 01:42:33.811495 | orchestrator | + block_device { 2026-03-25 01:42:33.811501 | orchestrator | + boot_index = 0 2026-03-25 01:42:33.811507 | orchestrator | + delete_on_termination = false 2026-03-25 01:42:33.811514 | orchestrator | + destination_type = "volume" 2026-03-25 01:42:33.811520 | orchestrator | + multiattach = false 2026-03-25 01:42:33.811526 | orchestrator | + source_type = "volume" 2026-03-25 01:42:33.811532 | orchestrator | + uuid = (known after apply) 2026-03-25 01:42:33.811539 | orchestrator | } 2026-03-25 01:42:33.811545 | orchestrator | 2026-03-25 01:42:33.811551 | orchestrator | + network { 2026-03-25 01:42:33.811557 | orchestrator | + access_network = false 2026-03-25 01:42:33.811564 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-25 01:42:33.811570 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-25 01:42:33.811576 | orchestrator | + mac = (known after apply) 2026-03-25 01:42:33.811582 | orchestrator | + name = (known after apply) 2026-03-25 01:42:33.811589 | orchestrator | + port = (known after apply) 2026-03-25 01:42:33.811595 | orchestrator | + uuid = (known after apply) 2026-03-25 01:42:33.811601 | orchestrator | } 2026-03-25 01:42:33.811607 | orchestrator | } 2026-03-25 01:42:33.811613 | orchestrator | 2026-03-25 01:42:33.811619 | orchestrator | # openstack_compute_instance_v2.node_server[3] will be created 2026-03-25 01:42:33.811626 | orchestrator | + resource 
"openstack_compute_instance_v2" "node_server" { 2026-03-25 01:42:33.811632 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-25 01:42:33.811638 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-25 01:42:33.811644 | orchestrator | + all_metadata = (known after apply) 2026-03-25 01:42:33.811651 | orchestrator | + all_tags = (known after apply) 2026-03-25 01:42:33.811657 | orchestrator | + availability_zone = "nova" 2026-03-25 01:42:33.811663 | orchestrator | + config_drive = true 2026-03-25 01:42:33.811670 | orchestrator | + created = (known after apply) 2026-03-25 01:42:33.811676 | orchestrator | + flavor_id = (known after apply) 2026-03-25 01:42:33.811682 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-25 01:42:33.811688 | orchestrator | + force_delete = false 2026-03-25 01:42:33.811694 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-25 01:42:33.811700 | orchestrator | + id = (known after apply) 2026-03-25 01:42:33.811707 | orchestrator | + image_id = (known after apply) 2026-03-25 01:42:33.811713 | orchestrator | + image_name = (known after apply) 2026-03-25 01:42:33.811719 | orchestrator | + key_pair = "testbed" 2026-03-25 01:42:33.811725 | orchestrator | + name = "testbed-node-3" 2026-03-25 01:42:33.811731 | orchestrator | + power_state = "active" 2026-03-25 01:42:33.811738 | orchestrator | + region = (known after apply) 2026-03-25 01:42:33.811744 | orchestrator | + security_groups = (known after apply) 2026-03-25 01:42:33.811750 | orchestrator | + stop_before_destroy = false 2026-03-25 01:42:33.811756 | orchestrator | + updated = (known after apply) 2026-03-25 01:42:33.811763 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-25 01:42:33.811769 | orchestrator | 2026-03-25 01:42:33.811775 | orchestrator | + block_device { 2026-03-25 01:42:33.811785 | orchestrator | + boot_index = 0 2026-03-25 01:42:33.811791 | orchestrator | + delete_on_termination = false 2026-03-25 
01:42:33.811798 | orchestrator | + destination_type = "volume" 2026-03-25 01:42:33.811809 | orchestrator | + multiattach = false 2026-03-25 01:42:33.811816 | orchestrator | + source_type = "volume" 2026-03-25 01:42:33.811822 | orchestrator | + uuid = (known after apply) 2026-03-25 01:42:33.811828 | orchestrator | } 2026-03-25 01:42:33.811834 | orchestrator | 2026-03-25 01:42:33.811841 | orchestrator | + network { 2026-03-25 01:42:33.811847 | orchestrator | + access_network = false 2026-03-25 01:42:33.811853 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-25 01:42:33.811859 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-25 01:42:33.811865 | orchestrator | + mac = (known after apply) 2026-03-25 01:42:33.811871 | orchestrator | + name = (known after apply) 2026-03-25 01:42:33.811892 | orchestrator | + port = (known after apply) 2026-03-25 01:42:33.811899 | orchestrator | + uuid = (known after apply) 2026-03-25 01:42:33.811905 | orchestrator | } 2026-03-25 01:42:33.811911 | orchestrator | } 2026-03-25 01:42:33.811917 | orchestrator | 2026-03-25 01:42:33.811923 | orchestrator | # openstack_compute_instance_v2.node_server[4] will be created 2026-03-25 01:42:33.811930 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-25 01:42:33.811936 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-25 01:42:33.811942 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-25 01:42:33.811948 | orchestrator | + all_metadata = (known after apply) 2026-03-25 01:42:33.811955 | orchestrator | + all_tags = (known after apply) 2026-03-25 01:42:33.811961 | orchestrator | + availability_zone = "nova" 2026-03-25 01:42:33.811967 | orchestrator | + config_drive = true 2026-03-25 01:42:33.811973 | orchestrator | + created = (known after apply) 2026-03-25 01:42:33.811979 | orchestrator | + flavor_id = (known after apply) 2026-03-25 01:42:33.811986 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-25 01:42:33.811992 | 
orchestrator | + force_delete = false 2026-03-25 01:42:33.811998 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-25 01:42:33.812004 | orchestrator | + id = (known after apply) 2026-03-25 01:42:33.812010 | orchestrator | + image_id = (known after apply) 2026-03-25 01:42:33.812016 | orchestrator | + image_name = (known after apply) 2026-03-25 01:42:33.812023 | orchestrator | + key_pair = "testbed" 2026-03-25 01:42:33.812029 | orchestrator | + name = "testbed-node-4" 2026-03-25 01:42:33.812035 | orchestrator | + power_state = "active" 2026-03-25 01:42:33.812041 | orchestrator | + region = (known after apply) 2026-03-25 01:42:33.812047 | orchestrator | + security_groups = (known after apply) 2026-03-25 01:42:33.812053 | orchestrator | + stop_before_destroy = false 2026-03-25 01:42:33.812059 | orchestrator | + updated = (known after apply) 2026-03-25 01:42:33.812066 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-25 01:42:33.812072 | orchestrator | 2026-03-25 01:42:33.812078 | orchestrator | + block_device { 2026-03-25 01:42:33.812084 | orchestrator | + boot_index = 0 2026-03-25 01:42:33.812091 | orchestrator | + delete_on_termination = false 2026-03-25 01:42:33.812097 | orchestrator | + destination_type = "volume" 2026-03-25 01:42:33.812103 | orchestrator | + multiattach = false 2026-03-25 01:42:33.812112 | orchestrator | + source_type = "volume" 2026-03-25 01:42:33.812119 | orchestrator | + uuid = (known after apply) 2026-03-25 01:42:33.812125 | orchestrator | } 2026-03-25 01:42:33.812131 | orchestrator | 2026-03-25 01:42:33.812137 | orchestrator | + network { 2026-03-25 01:42:33.812144 | orchestrator | + access_network = false 2026-03-25 01:42:33.812150 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-25 01:42:33.812156 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-25 01:42:33.812162 | orchestrator | + mac = (known after apply) 2026-03-25 01:42:33.812168 | orchestrator | + name = (known 
after apply) 2026-03-25 01:42:33.812174 | orchestrator | + port = (known after apply) 2026-03-25 01:42:33.812181 | orchestrator | + uuid = (known after apply) 2026-03-25 01:42:33.812187 | orchestrator | } 2026-03-25 01:42:33.812193 | orchestrator | } 2026-03-25 01:42:33.812204 | orchestrator | 2026-03-25 01:42:33.812210 | orchestrator | # openstack_compute_instance_v2.node_server[5] will be created 2026-03-25 01:42:33.812216 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-25 01:42:33.812222 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-25 01:42:33.812229 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-25 01:42:33.812235 | orchestrator | + all_metadata = (known after apply) 2026-03-25 01:42:33.812241 | orchestrator | + all_tags = (known after apply) 2026-03-25 01:42:33.812247 | orchestrator | + availability_zone = "nova" 2026-03-25 01:42:33.812253 | orchestrator | + config_drive = true 2026-03-25 01:42:33.812259 | orchestrator | + created = (known after apply) 2026-03-25 01:42:33.812265 | orchestrator | + flavor_id = (known after apply) 2026-03-25 01:42:33.812272 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-25 01:42:33.812278 | orchestrator | + force_delete = false 2026-03-25 01:42:33.812287 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-25 01:42:33.812294 | orchestrator | + id = (known after apply) 2026-03-25 01:42:33.812300 | orchestrator | + image_id = (known after apply) 2026-03-25 01:42:33.812306 | orchestrator | + image_name = (known after apply) 2026-03-25 01:42:33.812312 | orchestrator | + key_pair = "testbed" 2026-03-25 01:42:33.812318 | orchestrator | + name = "testbed-node-5" 2026-03-25 01:42:33.812324 | orchestrator | + power_state = "active" 2026-03-25 01:42:33.812331 | orchestrator | + region = (known after apply) 2026-03-25 01:42:33.812337 | orchestrator | + security_groups = (known after apply) 2026-03-25 01:42:33.812343 | orchestrator | + 
stop_before_destroy = false 2026-03-25 01:42:33.812349 | orchestrator | + updated = (known after apply) 2026-03-25 01:42:33.812355 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-25 01:42:33.812361 | orchestrator | 2026-03-25 01:42:33.812368 | orchestrator | + block_device { 2026-03-25 01:42:33.812374 | orchestrator | + boot_index = 0 2026-03-25 01:42:33.812380 | orchestrator | + delete_on_termination = false 2026-03-25 01:42:33.812386 | orchestrator | + destination_type = "volume" 2026-03-25 01:42:33.812392 | orchestrator | + multiattach = false 2026-03-25 01:42:33.812398 | orchestrator | + source_type = "volume" 2026-03-25 01:42:33.812405 | orchestrator | + uuid = (known after apply) 2026-03-25 01:42:33.812469 | orchestrator | } 2026-03-25 01:42:33.812476 | orchestrator | 2026-03-25 01:42:33.812483 | orchestrator | + network { 2026-03-25 01:42:33.812489 | orchestrator | + access_network = false 2026-03-25 01:42:33.812496 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-25 01:42:33.812502 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-25 01:42:33.812508 | orchestrator | + mac = (known after apply) 2026-03-25 01:42:33.812514 | orchestrator | + name = (known after apply) 2026-03-25 01:42:33.812521 | orchestrator | + port = (known after apply) 2026-03-25 01:42:33.812527 | orchestrator | + uuid = (known after apply) 2026-03-25 01:42:33.812533 | orchestrator | } 2026-03-25 01:42:33.812540 | orchestrator | } 2026-03-25 01:42:33.812546 | orchestrator | 2026-03-25 01:42:33.812552 | orchestrator | # openstack_compute_keypair_v2.key will be created 2026-03-25 01:42:33.812559 | orchestrator | + resource "openstack_compute_keypair_v2" "key" { 2026-03-25 01:42:33.812565 | orchestrator | + fingerprint = (known after apply) 2026-03-25 01:42:33.812571 | orchestrator | + id = (known after apply) 2026-03-25 01:42:33.812578 | orchestrator | + name = "testbed" 2026-03-25 01:42:33.812584 | orchestrator | + private_key = 
(sensitive value) 2026-03-25 01:42:33.812590 | orchestrator | + public_key = (known after apply) 2026-03-25 01:42:33.812596 | orchestrator | + region = (known after apply) 2026-03-25 01:42:33.812603 | orchestrator | + user_id = (known after apply) 2026-03-25 01:42:33.812609 | orchestrator | } 2026-03-25 01:42:33.812615 | orchestrator | 2026-03-25 01:42:33.812622 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created 2026-03-25 01:42:33.812628 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-03-25 01:42:33.812643 | orchestrator | + device = (known after apply) 2026-03-25 01:42:33.812649 | orchestrator | + id = (known after apply) 2026-03-25 01:42:33.812655 | orchestrator | + instance_id = (known after apply) 2026-03-25 01:42:33.812662 | orchestrator | + region = (known after apply) 2026-03-25 01:42:33.812668 | orchestrator | + volume_id = (known after apply) 2026-03-25 01:42:33.812674 | orchestrator | } 2026-03-25 01:42:33.812680 | orchestrator | 2026-03-25 01:42:33.812687 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created 2026-03-25 01:42:33.812693 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-03-25 01:42:33.812699 | orchestrator | + device = (known after apply) 2026-03-25 01:42:33.812705 | orchestrator | + id = (known after apply) 2026-03-25 01:42:33.812712 | orchestrator | + instance_id = (known after apply) 2026-03-25 01:42:33.812718 | orchestrator | + region = (known after apply) 2026-03-25 01:42:33.812724 | orchestrator | + volume_id = (known after apply) 2026-03-25 01:42:33.812730 | orchestrator | } 2026-03-25 01:42:33.812737 | orchestrator | 2026-03-25 01:42:33.812743 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created 2026-03-25 01:42:33.812749 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" 
2026-03-25 01:42:33.812756 | orchestrator | {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      + fixed_ip    = (known after apply)
      + floating_ip = (known after apply)
      + id          = (known after apply)
      + port_id     = (known after apply)
      + region      = (known after apply)
    }

  # openstack_networking_floatingip_v2.manager_floating_ip will be created
  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      + address    = (known after apply)
      + all_tags   = (known after apply)
      + dns_domain = (known after apply)
      + dns_name   = (known after apply)
      + fixed_ip   = (known after apply)
      + id         = (known after apply)
      + pool       = "public"
      + port_id    = (known after apply)
      + region     = (known after apply)
      + subnet_id  = (known after apply)
      + tenant_id  = (known after apply)
    }

  # openstack_networking_network_v2.net_management will be created
  + resource "openstack_networking_network_v2" "net_management" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + dns_domain              = (known after apply)
      + external                = (known after apply)
      + id                      = (known after apply)
      + mtu                     = (known after apply)
      + name                    = "net-testbed-management"
      + port_security_enabled   = (known after apply)
      + qos_policy_id           = (known after apply)
      + region                  = (known after apply)
      + shared                  = (known after apply)
      + tenant_id               = (known after apply)
      + transparent_vlan        = (known after apply)

      + segments (known after apply)
    }

  # openstack_networking_port_v2.manager_port_management will be created
  + resource "openstack_networking_port_v2" "manager_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.5"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[0] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.10"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[1] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.11"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[2] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.12"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[3] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.13"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[4] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.14"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[5] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.15"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_router_interface_v2.router_interface will be created
  + resource "openstack_networking_router_interface_v2" "router_interface" {
      + force_destroy = false
      + id            = (known after apply)
      + port_id       = (known after apply)
      + region        = (known after apply)
      + router_id     = (known after apply)
      + subnet_id     = (known after apply)
    }

  # openstack_networking_router_v2.router will be created
  + resource "openstack_networking_router_v2" "router" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + distributed             = (known after apply)
      + enable_snat             = (known after apply)
      + external_network_id     = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
      + external_qos_policy_id  = (known after apply)
      + id                      = (known after apply)
      + name                    = "testbed"
      + region                  = (known after apply)
      + tenant_id               = (known after apply)

      + external_fixed_ip (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
      + description             = "ssh"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 22
      + port_range_min          = 22
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" {
      + description             = "wireguard"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 51820
      + port_range_min          = 51820
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
      + description             = "vrrp"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "112"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_v2.security_group_management will be created
  + resource "openstack_networking_secgroup_v2" "security_group_management" {
      + all_tags    = (known after apply)
      + description = "management security group"
      + id          = (known after apply)
      + name        = "testbed-management"
      + region      = (known after apply)
      + stateful    = (known after apply)
      + tenant_id   = (known after apply)
    }

  # openstack_networking_secgroup_v2.security_group_node will be created
  + resource "openstack_networking_secgroup_v2" "security_group_node" {
      + all_tags    = (known after apply)
      + description = "node security group"
      + id          = (known after apply)
      + name        = "testbed-node"
      + region      = (known after apply)
      + stateful    = (known after apply)
      + tenant_id   = (known after apply)
    }

  # openstack_networking_subnet_v2.subnet_management will be created
  + resource "openstack_networking_subnet_v2" "subnet_management" {
      + all_tags          = (known after apply)
      + cidr              = "192.168.16.0/20"
      + dns_nameservers   = [
          + "8.8.8.8",
          + "9.9.9.9",
        ]
      + enable_dhcp       = true
      + gateway_ip        = (known after apply)
      + id                = (known after apply)
      + ip_version        = 4
      + ipv6_address_mode = (known after apply)
      + ipv6_ra_mode      = (known after apply)
      + name              = "subnet-testbed-management"
2026-03-25 01:42:33.816508 | orchestrator | + network_id = (known after apply)
2026-03-25 01:42:33.816514 | orchestrator | + no_gateway = false
2026-03-25 01:42:33.816519 | orchestrator | + region = (known after apply)
2026-03-25 01:42:33.816525 | orchestrator | + service_types = (known after apply)
2026-03-25 01:42:33.816534 | orchestrator | + tenant_id = (known after apply)
2026-03-25 01:42:33.816539 | orchestrator |
2026-03-25 01:42:33.816545 | orchestrator | + allocation_pool {
2026-03-25 01:42:33.816550 | orchestrator | + end = "192.168.31.250"
2026-03-25 01:42:33.816556 | orchestrator | + start = "192.168.31.200"
2026-03-25 01:42:33.816561 | orchestrator | }
2026-03-25 01:42:33.816567 | orchestrator | }
2026-03-25 01:42:33.816572 | orchestrator |
2026-03-25 01:42:33.816577 | orchestrator | # terraform_data.image will be created
2026-03-25 01:42:33.816583 | orchestrator | + resource "terraform_data" "image" {
2026-03-25 01:42:33.816588 | orchestrator | + id = (known after apply)
2026-03-25 01:42:33.816593 | orchestrator | + input = "Ubuntu 24.04"
2026-03-25 01:42:33.816599 | orchestrator | + output = (known after apply)
2026-03-25 01:42:33.816604 | orchestrator | }
2026-03-25 01:42:33.816610 | orchestrator |
2026-03-25 01:42:33.816615 | orchestrator | # terraform_data.image_node will be created
2026-03-25 01:42:33.816621 | orchestrator | + resource "terraform_data" "image_node" {
2026-03-25 01:42:33.816626 | orchestrator | + id = (known after apply)
2026-03-25 01:42:33.816631 | orchestrator | + input = "Ubuntu 24.04"
2026-03-25 01:42:33.816637 | orchestrator | + output = (known after apply)
2026-03-25 01:42:33.816642 | orchestrator | }
2026-03-25 01:42:33.816647 | orchestrator |
2026-03-25 01:42:33.816653 | orchestrator | Plan: 64 to add, 0 to change, 0 to destroy.
2026-03-25 01:42:33.816658 | orchestrator |
2026-03-25 01:42:33.816664 | orchestrator | Changes to Outputs:
2026-03-25 01:42:33.816669 | orchestrator | + manager_address = (sensitive value)
2026-03-25 01:42:33.816674 | orchestrator | + private_key = (sensitive value)
2026-03-25 01:42:34.060977 | orchestrator | terraform_data.image_node: Creating...
2026-03-25 01:42:34.061488 | orchestrator | terraform_data.image_node: Creation complete after 0s [id=4f9d796f-172b-a57c-ec68-822f13f1c3ca]
2026-03-25 01:42:34.062960 | orchestrator | terraform_data.image: Creating...
2026-03-25 01:42:34.063309 | orchestrator | terraform_data.image: Creation complete after 0s [id=90f680fa-65e2-3625-9178-0cbde129b3f4]
2026-03-25 01:42:34.080655 | orchestrator | data.openstack_images_image_v2.image_node: Reading...
2026-03-25 01:42:34.081780 | orchestrator | data.openstack_images_image_v2.image: Reading...
2026-03-25 01:42:34.093195 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creating...
2026-03-25 01:42:34.095541 | orchestrator | openstack_compute_keypair_v2.key: Creating...
2026-03-25 01:42:34.095939 | orchestrator | openstack_networking_network_v2.net_management: Creating...
2026-03-25 01:42:34.096126 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creating...
2026-03-25 01:42:34.097155 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creating...
2026-03-25 01:42:34.099990 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creating...
2026-03-25 01:42:34.100265 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creating...
2026-03-25 01:42:34.105459 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creating...
2026-03-25 01:42:34.540611 | orchestrator | data.openstack_images_image_v2.image: Read complete after 1s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2026-03-25 01:42:34.542859 | orchestrator | data.openstack_images_image_v2.image_node: Read complete after 1s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2026-03-25 01:42:34.548018 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creating...
2026-03-25 01:42:34.549399 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creating...
2026-03-25 01:42:34.620586 | orchestrator | openstack_compute_keypair_v2.key: Creation complete after 1s [id=testbed]
2026-03-25 01:42:34.627272 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creating...
2026-03-25 01:42:35.089643 | orchestrator | openstack_networking_network_v2.net_management: Creation complete after 1s [id=8f7cb14e-d51f-4d98-89f3-b7cfa2166438]
2026-03-25 01:42:35.103753 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating...
2026-03-25 01:42:37.733375 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 4s [id=eaa5e6a9-2c24-4b33-854e-103871b2e9c6]
2026-03-25 01:42:37.735034 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 4s [id=37f05188-2a00-44e2-a0b8-7549f9da5347]
2026-03-25 01:42:37.739060 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creating...
2026-03-25 01:42:37.746898 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creating...
2026-03-25 01:42:37.748696 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 4s [id=82545a3e-e213-461e-98f1-90cf18f03519]
2026-03-25 01:42:37.751780 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 4s [id=99e65ea9-8a8c-4114-a95e-6d6b779e8981]
2026-03-25 01:42:37.753263 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creating...
2026-03-25 01:42:37.755529 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creating...
2026-03-25 01:42:37.770426 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 4s [id=fd5367dc-993e-4d7d-b2a6-757e2a17e9b7]
2026-03-25 01:42:37.777984 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creating...
2026-03-25 01:42:37.792752 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 4s [id=10d736b4-dcf8-42aa-aae6-a1381d72468f]
2026-03-25 01:42:37.804410 | orchestrator | local_sensitive_file.id_rsa: Creating...
2026-03-25 01:42:37.806910 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 3s [id=3e1f7d9f-c106-4693-b0da-d762a5de4a11]
2026-03-25 01:42:37.811287 | orchestrator | local_sensitive_file.id_rsa: Creation complete after 0s [id=3ba83ae173ec074ba0c44bbbbc1d7836ff39565e]
2026-03-25 01:42:37.819909 | orchestrator | local_file.id_rsa_pub: Creating...
2026-03-25 01:42:37.821322 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creating...
2026-03-25 01:42:37.824455 | orchestrator | local_file.id_rsa_pub: Creation complete after 0s [id=d8b292800841c4dbc4cdad18c60e6b010d624cb7]
2026-03-25 01:42:37.828535 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 3s [id=e0cf0e31-edea-4833-ac86-8b3021cd24a1]
2026-03-25 01:42:37.834087 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creating...
2026-03-25 01:42:37.856309 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 3s [id=04cbe055-706b-4644-9107-d77d79be5a29]
2026-03-25 01:42:38.455372 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 3s [id=64e9f395-f2d8-41f9-9a3f-57dc675ebeec]
2026-03-25 01:42:38.627388 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creation complete after 1s [id=e19d7cf6-4c12-41e8-916f-e0d50e3ca7e5]
2026-03-25 01:42:38.639089 | orchestrator | openstack_networking_router_v2.router: Creating...
2026-03-25 01:42:40.998073 | orchestrator | openstack_networking_router_v2.router: Creation complete after 2s [id=192d720f-6de5-4410-aa69-837e4c59235c]
2026-03-25 01:42:41.002142 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creating...
2026-03-25 01:42:41.005173 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creating...
2026-03-25 01:42:41.005426 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creating...
2026-03-25 01:42:41.114374 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 3s [id=2a85f599-c628-4cff-bf05-087f83983aef]
2026-03-25 01:42:41.143735 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 3s [id=0ceb4511-88da-4cb0-8dd1-61d4a7cc2ad2]
2026-03-25 01:42:41.167968 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 3s [id=46c5fc1c-53d3-41e9-9a97-2ae0d3d9eeb2]
2026-03-25 01:42:41.168986 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 3s [id=5418d243-c22a-425d-8a7d-7c43bd549130]
2026-03-25 01:42:41.193335 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 3s [id=225bc811-b117-4ab1-9890-e393d3b780be]
2026-03-25 01:42:41.193423 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=6f4dac66-7d77-405d-9806-2892d264117f]
2026-03-25 01:42:41.205492 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating...
2026-03-25 01:42:41.205593 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating...
2026-03-25 01:42:41.207033 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating...
2026-03-25 01:42:41.207652 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating...
2026-03-25 01:42:41.212281 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creating...
2026-03-25 01:42:41.212389 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creating...
2026-03-25 01:42:41.221070 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creating...
2026-03-25 01:42:41.223858 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 3s [id=6cb51c54-ae34-41ee-aa7a-55f1cdeeb529]
2026-03-25 01:42:41.232008 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creating...
2026-03-25 01:42:41.271070 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=1f25cc5b-763b-456e-8965-ac058741e897]
2026-03-25 01:42:41.280066 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creating...
2026-03-25 01:42:41.357761 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 0s [id=7ebf3ed3-db8f-49c8-b666-5ff868eae685]
2026-03-25 01:42:41.372821 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creating...
2026-03-25 01:42:41.738706 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 1s [id=991acfd9-af0b-44b0-99fa-3781adad2d5b]
2026-03-25 01:42:41.748237 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating...
2026-03-25 01:42:41.780396 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creation complete after 1s [id=7621e2cc-ae7c-4b0e-b9f5-bdd713117f41]
2026-03-25 01:42:41.786324 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating...
2026-03-25 01:42:41.800539 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creation complete after 1s [id=a665f337-096c-465b-b6bc-a9dacc616441]
2026-03-25 01:42:41.808645 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating...
2026-03-25 01:42:41.986514 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creation complete after 1s [id=e90ee1b6-155d-4851-b3a8-4e740cc06b0b]
2026-03-25 01:42:41.993668 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creation complete after 1s [id=a8c89ded-c61a-4969-b963-29d4a9d385e6]
2026-03-25 01:42:41.995779 | orchestrator | openstack_networking_port_v2.manager_port_management: Creating...
2026-03-25 01:42:42.000987 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 1s [id=a8e0ccc3-8cd0-436e-93bc-427815737653]
2026-03-25 01:42:42.004477 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating...
2026-03-25 01:42:42.005265 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating...
2026-03-25 01:42:42.024394 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 0s [id=6d390baf-472d-4bdf-8913-81a17cfafbe3]
2026-03-25 01:42:42.157556 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 1s [id=b3909726-6a06-4182-b1c1-0f911ae6522f]
2026-03-25 01:42:42.210624 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 0s [id=72536039-5c22-4a0a-ba2e-629d5e0a17a6]
2026-03-25 01:42:42.354179 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 0s [id=6480b9a6-9a76-4bd9-becd-796d575709a2]
2026-03-25 01:42:42.506803 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 1s [id=c93a1bcc-f3dc-4f68-8f45-cd7dac9bbcaa]
2026-03-25 01:42:42.541766 | orchestrator | openstack_networking_port_v2.manager_port_management: Creation complete after 1s [id=e1501d83-a934-4f3d-a995-eb36a67e4ef7]
2026-03-25 01:42:42.669667 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 1s [id=5026a921-f528-4489-819e-c293429e2116]
2026-03-25 01:42:42.726660 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creation complete after 2s [id=ca8681fe-7223-4195-a961-24b6ba567b89]
2026-03-25 01:42:42.744833 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creation complete after 2s [id=1ab17c7c-2549-4da4-a146-779432fb1b7d]
2026-03-25 01:42:43.516522 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creation complete after 3s [id=2046ddc7-e1bd-4677-84a7-8ad3787aa270]
2026-03-25 01:42:43.543984 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creating...
2026-03-25 01:42:43.546949 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creating...
2026-03-25 01:42:43.547418 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creating...
2026-03-25 01:42:43.555736 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creating...
2026-03-25 01:42:43.558449 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creating...
2026-03-25 01:42:43.571420 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creating...
2026-03-25 01:42:43.575671 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creating...
2026-03-25 01:42:44.905745 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 1s [id=0833637e-fc3e-4303-9377-d4a59ca8d175]
2026-03-25 01:42:44.912730 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating...
2026-03-25 01:42:44.923500 | orchestrator | local_file.MANAGER_ADDRESS: Creating...
2026-03-25 01:42:44.924317 | orchestrator | local_file.inventory: Creating...
2026-03-25 01:42:44.928143 | orchestrator | local_file.MANAGER_ADDRESS: Creation complete after 0s [id=c4e3b35b49ce7687a43f620e10208f83b3f95282]
2026-03-25 01:42:44.928383 | orchestrator | local_file.inventory: Creation complete after 0s [id=02a12c5a4eda5b0e39042056d8b8c3595b36bae6]
2026-03-25 01:42:45.691145 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 1s [id=0833637e-fc3e-4303-9377-d4a59ca8d175]
2026-03-25 01:42:53.548918 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed]
2026-03-25 01:42:53.549835 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed]
2026-03-25 01:42:53.559191 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed]
2026-03-25 01:42:53.559298 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed]
2026-03-25 01:42:53.572748 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed]
2026-03-25 01:42:53.576036 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed]
2026-03-25 01:43:03.552607 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed]
2026-03-25 01:43:03.552731 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed]
2026-03-25 01:43:03.560259 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed]
2026-03-25 01:43:03.560343 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed]
2026-03-25 01:43:03.573334 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed]
2026-03-25 01:43:03.576672 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed]
2026-03-25 01:43:04.132868 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creation complete after 20s [id=8e172664-06c1-4bb0-813a-1fd375bc0a40]
2026-03-25 01:43:04.251546 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creation complete after 20s [id=4ae17092-0ae2-48ed-a3c3-ceb8e81a12e1]
2026-03-25 01:43:13.561924 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [30s elapsed]
2026-03-25 01:43:13.562113 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [30s elapsed]
2026-03-25 01:43:13.574347 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [30s elapsed]
2026-03-25 01:43:13.577603 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [30s elapsed]
2026-03-25 01:43:14.226406 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creation complete after 30s [id=998e75b4-b2fe-4fd9-9283-1606c192e8e8]
2026-03-25 01:43:14.249211 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creation complete after 30s [id=ece6e204-3e84-4f1d-bc39-3cd3eb4b4a7c]
2026-03-25 01:43:14.311656 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creation complete after 30s [id=d4d3b354-789a-415e-aa93-54a871e2d91d]
2026-03-25 01:43:14.441778 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creation complete after 30s [id=a2034906-4433-48a8-9405-477884be7dc7]
2026-03-25 01:43:14.472707 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating...
2026-03-25 01:43:14.480791 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating...
2026-03-25 01:43:14.484477 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating...
2026-03-25 01:43:14.484756 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating...
2026-03-25 01:43:14.491632 | orchestrator | null_resource.node_semaphore: Creating...
2026-03-25 01:43:14.494076 | orchestrator | null_resource.node_semaphore: Creation complete after 0s [id=2329013842545903539]
2026-03-25 01:43:14.494135 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating...
2026-03-25 01:43:14.494146 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating...
2026-03-25 01:43:14.494155 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating...
2026-03-25 01:43:14.499141 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating...
2026-03-25 01:43:14.503367 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating...
2026-03-25 01:43:14.532935 | orchestrator | openstack_compute_instance_v2.manager_server: Creating...
2026-03-25 01:43:17.858265 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 4s [id=d4d3b354-789a-415e-aa93-54a871e2d91d/3e1f7d9f-c106-4693-b0da-d762a5de4a11]
2026-03-25 01:43:17.871992 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 4s [id=8e172664-06c1-4bb0-813a-1fd375bc0a40/99e65ea9-8a8c-4114-a95e-6d6b779e8981]
2026-03-25 01:43:17.893041 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 4s [id=a2034906-4433-48a8-9405-477884be7dc7/82545a3e-e213-461e-98f1-90cf18f03519]
2026-03-25 01:43:17.896540 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 3s [id=8e172664-06c1-4bb0-813a-1fd375bc0a40/eaa5e6a9-2c24-4b33-854e-103871b2e9c6]
2026-03-25 01:43:17.917192 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 4s [id=d4d3b354-789a-415e-aa93-54a871e2d91d/37f05188-2a00-44e2-a0b8-7549f9da5347]
2026-03-25 01:43:17.941081 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 4s [id=a2034906-4433-48a8-9405-477884be7dc7/fd5367dc-993e-4d7d-b2a6-757e2a17e9b7]
2026-03-25 01:43:23.998465 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 10s [id=8e172664-06c1-4bb0-813a-1fd375bc0a40/e0cf0e31-edea-4833-ac86-8b3021cd24a1]
2026-03-25 01:43:24.043053 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 10s [id=d4d3b354-789a-415e-aa93-54a871e2d91d/10d736b4-dcf8-42aa-aae6-a1381d72468f]
2026-03-25 01:43:24.057663 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 10s [id=a2034906-4433-48a8-9405-477884be7dc7/04cbe055-706b-4644-9107-d77d79be5a29]
2026-03-25 01:43:24.535548 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed]
2026-03-25 01:43:34.536719 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed]
2026-03-25 01:43:34.937354 | orchestrator | openstack_compute_instance_v2.manager_server: Creation complete after 20s [id=afcb2717-a946-49a5-8fb0-f0539967d39c]
2026-03-25 01:43:34.954660 | orchestrator |
2026-03-25 01:43:34.954714 | orchestrator | Apply complete! Resources: 64 added, 0 changed, 0 destroyed.
2026-03-25 01:43:34.954748 | orchestrator |
2026-03-25 01:43:34.954756 | orchestrator | Outputs:
2026-03-25 01:43:34.954762 | orchestrator |
2026-03-25 01:43:34.954782 | orchestrator | manager_address =
2026-03-25 01:43:34.954788 | orchestrator | private_key =
2026-03-25 01:43:35.389994 | orchestrator | ok: Runtime: 0:01:10.156569
2026-03-25 01:43:35.424404 |
2026-03-25 01:43:35.424520 | TASK [Fetch manager address]
2026-03-25 01:43:35.906891 | orchestrator | ok
2026-03-25 01:43:35.916548 |
2026-03-25 01:43:35.916669 | TASK [Set manager_host address]
2026-03-25 01:43:35.989591 | orchestrator | ok
2026-03-25 01:43:35.998078 |
2026-03-25 01:43:35.998188 | LOOP [Update ansible collections]
2026-03-25 01:43:36.990619 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2026-03-25 01:43:36.991084 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-03-25 01:43:36.991162 | orchestrator | Starting galaxy collection install process
2026-03-25 01:43:36.991214 | orchestrator | Process install dependency map
2026-03-25 01:43:36.991260 | orchestrator | Starting collection install process
2026-03-25 01:43:36.991301 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons'
2026-03-25 01:43:36.991348 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons
2026-03-25 01:43:36.991398 | orchestrator | osism.commons:999.0.0 was installed successfully
2026-03-25 01:43:36.991497 | orchestrator | ok: Item: commons Runtime: 0:00:00.620201
2026-03-25 01:43:37.888493 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2026-03-25 01:43:37.888668 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-03-25 01:43:37.888737 | orchestrator | Starting galaxy collection install process
2026-03-25 01:43:37.888778 | orchestrator | Process install dependency map
2026-03-25 01:43:37.888816 | orchestrator | Starting collection install process
2026-03-25 01:43:37.888852 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed03/.ansible/collections/ansible_collections/osism/services'
2026-03-25 01:43:37.888887 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed03/.ansible/collections/ansible_collections/osism/services
2026-03-25 01:43:37.888920 | orchestrator | osism.services:999.0.0 was installed successfully
2026-03-25 01:43:37.888975 | orchestrator | ok: Item: services Runtime: 0:00:00.633293
2026-03-25 01:43:37.913170 |
2026-03-25 01:43:37.913340 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"]
2026-03-25 01:43:48.500505 | orchestrator | ok
2026-03-25 01:43:48.511434 |
2026-03-25 01:43:48.511562 | TASK [Wait a little longer for the manager so that everything is ready]
2026-03-25 01:44:48.554641 | orchestrator | ok
2026-03-25 01:44:48.565234 |
2026-03-25 01:44:48.565358 | TASK [Fetch manager ssh hostkey]
2026-03-25 01:44:50.139631 | orchestrator | Output suppressed because no_log was given
2026-03-25 01:44:50.147715 |
2026-03-25 01:44:50.147857 | TASK [Get ssh keypair from terraform environment]
2026-03-25 01:44:50.681278 | orchestrator | ok: Runtime: 0:00:00.010900
2026-03-25 01:44:50.698101 |
2026-03-25 01:44:50.698262 | TASK [Point out that the following task takes some time and does not give any output]
2026-03-25 01:44:50.735626 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete.
2026-03-25 01:44:50.744656 |
2026-03-25 01:44:50.744793 | TASK [Run manager part 0]
2026-03-25 01:44:51.721932 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-03-25 01:44:51.772024 | orchestrator |
2026-03-25 01:44:51.772084 | orchestrator | PLAY [Wait for cloud-init to finish] *******************************************
2026-03-25 01:44:51.772094 | orchestrator |
2026-03-25 01:44:51.772114 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] *****************************
2026-03-25 01:44:53.819037 | orchestrator | ok: [testbed-manager]
2026-03-25 01:44:53.819174 | orchestrator |
2026-03-25 01:44:53.819205 | orchestrator | PLAY [Run manager part 0] ******************************************************
2026-03-25 01:44:53.819214 | orchestrator |
2026-03-25 01:44:53.819224 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-03-25 01:44:55.820715 | orchestrator | ok: [testbed-manager]
2026-03-25 01:44:55.820779 | orchestrator |
2026-03-25 01:44:55.820786 | orchestrator | TASK [Get home directory of ansible user] **************************************
2026-03-25 01:44:56.564983 | orchestrator | ok: [testbed-manager]
2026-03-25 01:44:56.565042 | orchestrator |
2026-03-25 01:44:56.565050 | orchestrator | TASK [Set repo_path fact] ******************************************************
2026-03-25 01:44:56.607454 | orchestrator | skipping: [testbed-manager]
2026-03-25 01:44:56.607511 | orchestrator |
2026-03-25 01:44:56.607523 | orchestrator | TASK [Update package cache] ****************************************************
2026-03-25 01:44:56.633470 | orchestrator | skipping: [testbed-manager]
2026-03-25 01:44:56.633527 | orchestrator |
2026-03-25 01:44:56.633537 | orchestrator | TASK [Install required packages] ***********************************************
2026-03-25 01:44:56.659475 | orchestrator | skipping: [testbed-manager]
2026-03-25 01:44:56.659530 | orchestrator |
2026-03-25 01:44:56.659540 | orchestrator | TASK [Remove some python packages] *********************************************
2026-03-25 01:44:56.695324 | orchestrator | skipping: [testbed-manager]
2026-03-25 01:44:56.695377 | orchestrator |
2026-03-25 01:44:56.695382 | orchestrator | TASK [Set venv_command fact (RedHat)] ******************************************
2026-03-25 01:44:56.729777 | orchestrator | skipping: [testbed-manager]
2026-03-25 01:44:56.729841 | orchestrator |
2026-03-25 01:44:56.729851 | orchestrator | TASK [Fail if Ubuntu version is lower than 24.04] ******************************
2026-03-25 01:44:56.779287 | orchestrator | skipping: [testbed-manager]
2026-03-25 01:44:56.779361 | orchestrator |
2026-03-25 01:44:56.779376 | orchestrator | TASK [Fail if Debian version is lower than 12] *********************************
2026-03-25 01:44:56.827894 | orchestrator | skipping: [testbed-manager]
2026-03-25 01:44:56.827990 | orchestrator |
2026-03-25 01:44:56.828004 | orchestrator | TASK [Set APT options on manager] **********************************************
2026-03-25 01:44:57.620099 | orchestrator | changed: [testbed-manager]
2026-03-25 01:44:57.620179 | orchestrator |
2026-03-25 01:44:57.620194 | orchestrator | TASK [Update APT cache and run dist-upgrade] ***********************************
2026-03-25 01:48:09.185746 | orchestrator | changed: [testbed-manager]
2026-03-25 01:48:09.185856 | orchestrator |
2026-03-25 01:48:09.185875 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************
2026-03-25 01:49:35.052492 | orchestrator | changed: [testbed-manager]
2026-03-25 01:49:35.052596 | orchestrator |
2026-03-25 01:49:35.052613 | orchestrator | TASK [Install required packages] ***********************************************
2026-03-25 01:50:00.664372 | orchestrator | changed: [testbed-manager]
2026-03-25 01:50:00.664448 | orchestrator |
2026-03-25 01:50:00.664461 | orchestrator | TASK [Remove some python packages] *********************************************
2026-03-25 01:50:11.352649 | orchestrator | changed: [testbed-manager]
2026-03-25 01:50:11.352732 | orchestrator |
2026-03-25 01:50:11.352744 | orchestrator | TASK [Set venv_command fact (Debian)] ******************************************
2026-03-25 01:50:11.404094 | orchestrator | ok: [testbed-manager]
2026-03-25 01:50:11.404190 | orchestrator |
2026-03-25 01:50:11.404207 | orchestrator | TASK [Get current user] ********************************************************
2026-03-25 01:50:12.254534 | orchestrator | ok: [testbed-manager]
2026-03-25 01:50:12.254647 | orchestrator |
2026-03-25 01:50:12.254674 | orchestrator | TASK [Create venv directory] ***************************************************
2026-03-25 01:50:12.997937 | orchestrator | changed: [testbed-manager]
2026-03-25 01:50:12.998112 | orchestrator |
2026-03-25 01:50:12.998142 | orchestrator | TASK [Install netaddr in venv] *************************************************
2026-03-25 01:50:20.183196 | orchestrator | changed: [testbed-manager]
2026-03-25 01:50:20.183317 | orchestrator |
2026-03-25 01:50:20.183371 | orchestrator | TASK [Install ansible-core in venv] ********************************************
2026-03-25 01:50:26.989013 | orchestrator | changed: [testbed-manager]
2026-03-25 01:50:26.989119 | orchestrator |
2026-03-25 01:50:26.989143 | orchestrator | TASK [Install requests >= 2.32.2] **********************************************
2026-03-25 01:50:29.880602 | orchestrator | changed: [testbed-manager]
2026-03-25 01:50:29.880682 | orchestrator |
2026-03-25 01:50:29.880695 | orchestrator | TASK [Install docker >= 7.1.0] *************************************************
2026-03-25 01:50:31.918971 | orchestrator | changed: [testbed-manager]
2026-03-25 01:50:31.919027 | orchestrator |
2026-03-25 01:50:31.919039 | orchestrator | TASK [Create directories in /opt/src] ******************************************
2026-03-25
01:50:33.130815 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-03-25 01:50:33.130903 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-03-25 01:50:33.130913 | orchestrator | 2026-03-25 01:50:33.130919 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2026-03-25 01:50:33.177004 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-03-25 01:50:33.177081 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-03-25 01:50:33.177094 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-03-25 01:50:33.177105 | orchestrator | deprecation_warnings=False in ansible.cfg. 2026-03-25 01:50:36.608372 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-03-25 01:50:36.608473 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-03-25 01:50:36.608489 | orchestrator | 2026-03-25 01:50:36.608502 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2026-03-25 01:50:37.205097 | orchestrator | changed: [testbed-manager] 2026-03-25 01:50:37.205217 | orchestrator | 2026-03-25 01:50:37.205246 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2026-03-25 01:50:56.002305 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2026-03-25 01:50:56.003133 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2026-03-25 01:50:56.003180 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2026-03-25 01:50:56.003196 | orchestrator | 2026-03-25 01:50:56.003211 | orchestrator | TASK [Install local collections] *********************************************** 2026-03-25 01:50:58.453614 | orchestrator | changed: [testbed-manager] => 
(item=ansible-collection-commons) 2026-03-25 01:50:58.453664 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2026-03-25 01:50:58.453671 | orchestrator | 2026-03-25 01:50:58.453678 | orchestrator | PLAY [Create operator user] **************************************************** 2026-03-25 01:50:58.453685 | orchestrator | 2026-03-25 01:50:58.453691 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-25 01:50:59.890384 | orchestrator | ok: [testbed-manager] 2026-03-25 01:50:59.890472 | orchestrator | 2026-03-25 01:50:59.890489 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-03-25 01:50:59.938143 | orchestrator | ok: [testbed-manager] 2026-03-25 01:50:59.938215 | orchestrator | 2026-03-25 01:50:59.938227 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-03-25 01:51:00.049121 | orchestrator | ok: [testbed-manager] 2026-03-25 01:51:00.049225 | orchestrator | 2026-03-25 01:51:00.049246 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2026-03-25 01:51:00.830246 | orchestrator | changed: [testbed-manager] 2026-03-25 01:51:00.830295 | orchestrator | 2026-03-25 01:51:00.830304 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-03-25 01:51:01.724553 | orchestrator | changed: [testbed-manager] 2026-03-25 01:51:01.724650 | orchestrator | 2026-03-25 01:51:01.724669 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-03-25 01:51:03.218425 | orchestrator | changed: [testbed-manager] => (item=adm) 2026-03-25 01:51:03.218515 | orchestrator | changed: [testbed-manager] => (item=sudo) 2026-03-25 01:51:03.218532 | orchestrator | 2026-03-25 01:51:03.218564 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] 
************************* 2026-03-25 01:51:04.635264 | orchestrator | changed: [testbed-manager] 2026-03-25 01:51:04.635498 | orchestrator | 2026-03-25 01:51:04.635529 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2026-03-25 01:51:06.520723 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2026-03-25 01:51:06.520798 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2026-03-25 01:51:06.520807 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2026-03-25 01:51:06.520813 | orchestrator | 2026-03-25 01:51:06.520821 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2026-03-25 01:51:06.584256 | orchestrator | skipping: [testbed-manager] 2026-03-25 01:51:06.584334 | orchestrator | 2026-03-25 01:51:06.584342 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] *** 2026-03-25 01:51:06.655352 | orchestrator | skipping: [testbed-manager] 2026-03-25 01:51:06.655423 | orchestrator | 2026-03-25 01:51:06.655432 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2026-03-25 01:51:07.264328 | orchestrator | changed: [testbed-manager] 2026-03-25 01:51:07.264413 | orchestrator | 2026-03-25 01:51:07.264426 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2026-03-25 01:51:07.342098 | orchestrator | skipping: [testbed-manager] 2026-03-25 01:51:07.342188 | orchestrator | 2026-03-25 01:51:07.342200 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2026-03-25 01:51:08.277522 | orchestrator | changed: [testbed-manager] => (item=None) 2026-03-25 01:51:08.277622 | orchestrator | changed: [testbed-manager] 2026-03-25 01:51:08.277639 | orchestrator | 2026-03-25 01:51:08.277651 | orchestrator | TASK 
[osism.commons.operator : Delete ssh authorized keys] ********************* 2026-03-25 01:51:08.317957 | orchestrator | skipping: [testbed-manager] 2026-03-25 01:51:08.318107 | orchestrator | 2026-03-25 01:51:08.318127 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2026-03-25 01:51:08.359020 | orchestrator | skipping: [testbed-manager] 2026-03-25 01:51:08.359123 | orchestrator | 2026-03-25 01:51:08.359138 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2026-03-25 01:51:08.393007 | orchestrator | skipping: [testbed-manager] 2026-03-25 01:51:08.393106 | orchestrator | 2026-03-25 01:51:08.393130 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2026-03-25 01:51:08.473602 | orchestrator | skipping: [testbed-manager] 2026-03-25 01:51:08.473705 | orchestrator | 2026-03-25 01:51:08.473721 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2026-03-25 01:51:09.250982 | orchestrator | ok: [testbed-manager] 2026-03-25 01:51:09.251087 | orchestrator | 2026-03-25 01:51:09.251115 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2026-03-25 01:51:09.251135 | orchestrator | 2026-03-25 01:51:09.251154 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-25 01:51:10.743274 | orchestrator | ok: [testbed-manager] 2026-03-25 01:51:10.743369 | orchestrator | 2026-03-25 01:51:10.743385 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2026-03-25 01:51:11.817501 | orchestrator | changed: [testbed-manager] 2026-03-25 01:51:11.817572 | orchestrator | 2026-03-25 01:51:11.817582 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-25 01:51:11.817591 | orchestrator | testbed-manager : ok=33 changed=23 
unreachable=0 failed=0 skipped=14 rescued=0 ignored=0 2026-03-25 01:51:11.817598 | orchestrator | 2026-03-25 01:51:12.020817 | orchestrator | ok: Runtime: 0:06:20.855795 2026-03-25 01:51:12.042295 | 2026-03-25 01:51:12.042490 | TASK [Point out that logging in to the manager is now possible] 2026-03-25 01:51:12.080896 | orchestrator | ok: It is now possible to log in to the manager with 'make login'. 2026-03-25 01:51:12.090304 | 2026-03-25 01:51:12.090420 | TASK [Point out that the following task takes some time and does not give any output] 2026-03-25 01:51:12.129962 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. No further output is shown here. It takes a few minutes for this task to complete. 2026-03-25 01:51:12.140775 | 2026-03-25 01:51:12.140906 | TASK [Run manager part 1 + 2] 2026-03-25 01:51:13.035147 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-03-25 01:51:13.096304 | orchestrator | 2026-03-25 01:51:13.096356 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2026-03-25 01:51:13.096364 | orchestrator | 2026-03-25 01:51:13.096377 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-25 01:51:15.721160 | orchestrator | ok: [testbed-manager] 2026-03-25 01:51:15.721212 | orchestrator | 2026-03-25 01:51:15.721234 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2026-03-25 01:51:15.758001 | orchestrator | skipping: [testbed-manager] 2026-03-25 01:51:15.758071 | orchestrator | 2026-03-25 01:51:15.758079 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-03-25 01:51:15.791974 | orchestrator | ok: [testbed-manager] 2026-03-25 01:51:15.792027 | orchestrator | 2026-03-25 01:51:15.792037 | orchestrator | TASK [osism.commons.repository : Gather variables for
each operating system] *** 2026-03-25 01:51:15.831074 | orchestrator | ok: [testbed-manager] 2026-03-25 01:51:15.831133 | orchestrator | 2026-03-25 01:51:15.831143 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-03-25 01:51:15.922634 | orchestrator | ok: [testbed-manager] 2026-03-25 01:51:15.922696 | orchestrator | 2026-03-25 01:51:15.922708 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-03-25 01:51:15.986428 | orchestrator | ok: [testbed-manager] 2026-03-25 01:51:15.986490 | orchestrator | 2026-03-25 01:51:15.986502 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-03-25 01:51:16.040020 | orchestrator | included: /home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2026-03-25 01:51:16.040069 | orchestrator | 2026-03-25 01:51:16.040075 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-03-25 01:51:16.796131 | orchestrator | ok: [testbed-manager] 2026-03-25 01:51:16.796187 | orchestrator | 2026-03-25 01:51:16.796197 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-03-25 01:51:16.846476 | orchestrator | skipping: [testbed-manager] 2026-03-25 01:51:16.846528 | orchestrator | 2026-03-25 01:51:16.846536 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-03-25 01:51:18.337306 | orchestrator | changed: [testbed-manager] 2026-03-25 01:51:18.337364 | orchestrator | 2026-03-25 01:51:18.337376 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-03-25 01:51:18.913058 | orchestrator | ok: [testbed-manager] 2026-03-25 01:51:18.913122 | orchestrator | 2026-03-25 01:51:18.913137 | orchestrator | TASK [osism.commons.repository : Copy 
ubuntu.sources file] ********************* 2026-03-25 01:51:20.119967 | orchestrator | changed: [testbed-manager] 2026-03-25 01:51:20.120059 | orchestrator | 2026-03-25 01:51:20.120079 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-03-25 01:51:36.580491 | orchestrator | changed: [testbed-manager] 2026-03-25 01:51:36.580537 | orchestrator | 2026-03-25 01:51:36.580545 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2026-03-25 01:51:37.302585 | orchestrator | ok: [testbed-manager] 2026-03-25 01:51:37.302714 | orchestrator | 2026-03-25 01:51:37.302723 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2026-03-25 01:51:37.356907 | orchestrator | skipping: [testbed-manager] 2026-03-25 01:51:37.356950 | orchestrator | 2026-03-25 01:51:37.356958 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2026-03-25 01:51:38.410521 | orchestrator | changed: [testbed-manager] 2026-03-25 01:51:38.410569 | orchestrator | 2026-03-25 01:51:38.410577 | orchestrator | TASK [Copy SSH private key] **************************************************** 2026-03-25 01:51:39.434006 | orchestrator | changed: [testbed-manager] 2026-03-25 01:51:39.434079 | orchestrator | 2026-03-25 01:51:39.434089 | orchestrator | TASK [Create configuration directory] ****************************************** 2026-03-25 01:51:40.055719 | orchestrator | changed: [testbed-manager] 2026-03-25 01:51:40.055817 | orchestrator | 2026-03-25 01:51:40.055834 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2026-03-25 01:51:40.094417 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-03-25 01:51:40.094549 | orchestrator | display.prompt_until(msg) instead. 
This feature will be removed in version 2026-03-25 01:51:40.094576 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-03-25 01:51:40.094598 | orchestrator | deprecation_warnings=False in ansible.cfg. 2026-03-25 01:51:42.661436 | orchestrator | changed: [testbed-manager] 2026-03-25 01:51:42.661530 | orchestrator | 2026-03-25 01:51:42.661543 | orchestrator | TASK [Install python requirements in venv] ************************************* 2026-03-25 01:51:52.410636 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2026-03-25 01:51:52.410722 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2026-03-25 01:51:52.410737 | orchestrator | ok: [testbed-manager] => (item=packaging) 2026-03-25 01:51:52.410751 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2026-03-25 01:51:52.410773 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2026-03-25 01:51:52.410786 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2026-03-25 01:51:52.410799 | orchestrator | 2026-03-25 01:51:52.410813 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2026-03-25 01:51:53.496404 | orchestrator | changed: [testbed-manager] 2026-03-25 01:51:53.496492 | orchestrator | 2026-03-25 01:51:53.496508 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2026-03-25 01:51:53.544031 | orchestrator | skipping: [testbed-manager] 2026-03-25 01:51:53.544109 | orchestrator | 2026-03-25 01:51:53.544122 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2026-03-25 01:51:56.894483 | orchestrator | changed: [testbed-manager] 2026-03-25 01:51:56.894529 | orchestrator | 2026-03-25 01:51:56.894537 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2026-03-25 01:51:56.940299 | orchestrator | skipping: [testbed-manager] 2026-03-25 01:51:56.940342 | 
orchestrator | 2026-03-25 01:51:56.940352 | orchestrator | TASK [Run manager part 2] ****************************************************** 2026-03-25 01:53:48.895804 | orchestrator | changed: [testbed-manager] 2026-03-25 01:53:48.895944 | orchestrator | 2026-03-25 01:53:48.895966 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-03-25 01:53:50.163361 | orchestrator | ok: [testbed-manager] 2026-03-25 01:53:50.163455 | orchestrator | 2026-03-25 01:53:50.163470 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-25 01:53:50.163483 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2026-03-25 01:53:50.163493 | orchestrator | 2026-03-25 01:53:50.332862 | orchestrator | ok: Runtime: 0:02:37.826333 2026-03-25 01:53:50.343545 | 2026-03-25 01:53:50.343662 | TASK [Reboot manager] 2026-03-25 01:53:51.881266 | orchestrator | ok: Runtime: 0:00:01.009718 2026-03-25 01:53:51.901105 | 2026-03-25 01:53:51.901297 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2026-03-25 01:54:08.317002 | orchestrator | ok 2026-03-25 01:54:08.324930 | 2026-03-25 01:54:08.325043 | TASK [Wait a little longer for the manager so that everything is ready] 2026-03-25 01:55:08.373103 | orchestrator | ok 2026-03-25 01:55:08.383490 | 2026-03-25 01:55:08.383639 | TASK [Deploy manager + bootstrap nodes] 2026-03-25 01:55:10.881987 | orchestrator | 2026-03-25 01:55:10.882267 | orchestrator | # DEPLOY MANAGER 2026-03-25 01:55:10.882303 | orchestrator | 2026-03-25 01:55:10.882325 | orchestrator | + set -e 2026-03-25 01:55:10.882346 | orchestrator | + echo 2026-03-25 01:55:10.882365 | orchestrator | + echo '# DEPLOY MANAGER' 2026-03-25 01:55:10.882390 | orchestrator | + echo 2026-03-25 01:55:10.882449 | orchestrator | + cat /opt/manager-vars.sh 2026-03-25 01:55:10.885576 | orchestrator | export NUMBER_OF_NODES=6 2026-03-25 
01:55:10.885647 | orchestrator | 2026-03-25 01:55:10.885668 | orchestrator | export CEPH_VERSION=reef 2026-03-25 01:55:10.885688 | orchestrator | export CONFIGURATION_VERSION=main 2026-03-25 01:55:10.885708 | orchestrator | export MANAGER_VERSION=9.5.0 2026-03-25 01:55:10.885746 | orchestrator | export OPENSTACK_VERSION=2024.2 2026-03-25 01:55:10.885757 | orchestrator | 2026-03-25 01:55:10.885773 | orchestrator | export ARA=false 2026-03-25 01:55:10.885783 | orchestrator | export DEPLOY_MODE=manager 2026-03-25 01:55:10.885799 | orchestrator | export TEMPEST=false 2026-03-25 01:55:10.885810 | orchestrator | export IS_ZUUL=true 2026-03-25 01:55:10.885820 | orchestrator | 2026-03-25 01:55:10.885835 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.44 2026-03-25 01:55:10.885846 | orchestrator | export EXTERNAL_API=false 2026-03-25 01:55:10.885856 | orchestrator | 2026-03-25 01:55:10.885887 | orchestrator | export IMAGE_USER=ubuntu 2026-03-25 01:55:10.885900 | orchestrator | export IMAGE_NODE_USER=ubuntu 2026-03-25 01:55:10.885909 | orchestrator | 2026-03-25 01:55:10.885919 | orchestrator | export CEPH_STACK=ceph-ansible 2026-03-25 01:55:10.885939 | orchestrator | 2026-03-25 01:55:10.885949 | orchestrator | + echo 2026-03-25 01:55:10.885960 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-03-25 01:55:10.887377 | orchestrator | ++ export INTERACTIVE=false 2026-03-25 01:55:10.887460 | orchestrator | ++ INTERACTIVE=false 2026-03-25 01:55:10.887477 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-25 01:55:10.887528 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-03-25 01:55:10.887643 | orchestrator | + source /opt/manager-vars.sh 2026-03-25 01:55:10.887658 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-03-25 01:55:10.887667 | orchestrator | ++ NUMBER_OF_NODES=6 2026-03-25 01:55:10.887675 | orchestrator | ++ export CEPH_VERSION=reef 2026-03-25 01:55:10.887682 | orchestrator | ++ CEPH_VERSION=reef 2026-03-25 01:55:10.887691 | orchestrator 
| ++ export CONFIGURATION_VERSION=main 2026-03-25 01:55:10.887718 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-03-25 01:55:10.887727 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-03-25 01:55:10.887735 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-03-25 01:55:10.887743 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-03-25 01:55:10.887771 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-03-25 01:55:10.887784 | orchestrator | ++ export ARA=false 2026-03-25 01:55:10.887792 | orchestrator | ++ ARA=false 2026-03-25 01:55:10.887801 | orchestrator | ++ export DEPLOY_MODE=manager 2026-03-25 01:55:10.887809 | orchestrator | ++ DEPLOY_MODE=manager 2026-03-25 01:55:10.887833 | orchestrator | ++ export TEMPEST=false 2026-03-25 01:55:10.887842 | orchestrator | ++ TEMPEST=false 2026-03-25 01:55:10.887849 | orchestrator | ++ export IS_ZUUL=true 2026-03-25 01:55:10.887889 | orchestrator | ++ IS_ZUUL=true 2026-03-25 01:55:10.887903 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.44 2026-03-25 01:55:10.887911 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.44 2026-03-25 01:55:10.887920 | orchestrator | ++ export EXTERNAL_API=false 2026-03-25 01:55:10.887928 | orchestrator | ++ EXTERNAL_API=false 2026-03-25 01:55:10.887936 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-03-25 01:55:10.887944 | orchestrator | ++ IMAGE_USER=ubuntu 2026-03-25 01:55:10.887954 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-03-25 01:55:10.887963 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-03-25 01:55:10.887971 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-03-25 01:55:10.887997 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-03-25 01:55:10.888081 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2026-03-25 01:55:10.950129 | orchestrator | + docker version 2026-03-25 01:55:11.050423 | orchestrator | Client: Docker Engine - Community 2026-03-25 01:55:11.050498 | orchestrator | Version: 27.5.1 
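The deploy-script preamble traced above runs under `set -e` and then sources `/opt/configuration/scripts/include.sh` and `/opt/manager-vars.sh` to pull in its settings (NUMBER_OF_NODES, MANAGER_VERSION, and so on). A minimal sketch of that pattern follows; the `load_manager_vars` helper name is an assumption for illustration, not part of the testbed scripts:

```shell
#!/usr/bin/env bash
# Sketch of the sourcing pattern seen in the traced preamble: abort on the
# first error, then load an environment file only if it actually exists.
# The function name load_manager_vars is hypothetical.
set -e

load_manager_vars() {
    local vars_file="$1"
    # Fail explicitly if the file is missing instead of continuing with
    # unset NUMBER_OF_NODES, MANAGER_VERSION, etc.
    [ -f "$vars_file" ] || return 1
    # shellcheck disable=SC1090
    . "$vars_file"
}
```

In the log the same file is sourced again inside `000-manager.sh`, which is why every `export` appears twice in the `++` trace; sourcing is idempotent, so this is harmless.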
2026-03-25 01:55:11.050513 | orchestrator | API version: 1.47 2026-03-25 01:55:11.050524 | orchestrator | Go version: go1.22.11 2026-03-25 01:55:11.050534 | orchestrator | Git commit: 9f9e405 2026-03-25 01:55:11.050544 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-03-25 01:55:11.050556 | orchestrator | OS/Arch: linux/amd64 2026-03-25 01:55:11.050565 | orchestrator | Context: default 2026-03-25 01:55:11.050575 | orchestrator | 2026-03-25 01:55:11.050585 | orchestrator | Server: Docker Engine - Community 2026-03-25 01:55:11.050631 | orchestrator | Engine: 2026-03-25 01:55:11.050724 | orchestrator | Version: 27.5.1 2026-03-25 01:55:11.050739 | orchestrator | API version: 1.47 (minimum version 1.24) 2026-03-25 01:55:11.050779 | orchestrator | Go version: go1.22.11 2026-03-25 01:55:11.050807 | orchestrator | Git commit: 4c9b3b0 2026-03-25 01:55:11.050818 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-03-25 01:55:11.050828 | orchestrator | OS/Arch: linux/amd64 2026-03-25 01:55:11.050837 | orchestrator | Experimental: false 2026-03-25 01:55:11.050848 | orchestrator | containerd: 2026-03-25 01:55:11.050917 | orchestrator | Version: v2.2.2 2026-03-25 01:55:11.050930 | orchestrator | GitCommit: 301b2dac98f15c27117da5c8af12118a041a31d9 2026-03-25 01:55:11.050941 | orchestrator | runc: 2026-03-25 01:55:11.050951 | orchestrator | Version: 1.3.4 2026-03-25 01:55:11.051078 | orchestrator | GitCommit: v1.3.4-0-gd6d73eb8 2026-03-25 01:55:11.051094 | orchestrator | docker-init: 2026-03-25 01:55:11.051106 | orchestrator | Version: 0.19.0 2026-03-25 01:55:11.051118 | orchestrator | GitCommit: de40ad0 2026-03-25 01:55:11.054291 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2026-03-25 01:55:11.062457 | orchestrator | + set -e 2026-03-25 01:55:11.062547 | orchestrator | + source /opt/manager-vars.sh 2026-03-25 01:55:11.062565 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-03-25 01:55:11.062578 | orchestrator | ++ NUMBER_OF_NODES=6 2026-03-25 
01:55:11.062590 | orchestrator | ++ export CEPH_VERSION=reef 2026-03-25 01:55:11.062602 | orchestrator | ++ CEPH_VERSION=reef 2026-03-25 01:55:11.062615 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-03-25 01:55:11.062628 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-03-25 01:55:11.062640 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-03-25 01:55:11.062653 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-03-25 01:55:11.062665 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-03-25 01:55:11.062678 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-03-25 01:55:11.062690 | orchestrator | ++ export ARA=false 2026-03-25 01:55:11.062703 | orchestrator | ++ ARA=false 2026-03-25 01:55:11.062715 | orchestrator | ++ export DEPLOY_MODE=manager 2026-03-25 01:55:11.062727 | orchestrator | ++ DEPLOY_MODE=manager 2026-03-25 01:55:11.062739 | orchestrator | ++ export TEMPEST=false 2026-03-25 01:55:11.062752 | orchestrator | ++ TEMPEST=false 2026-03-25 01:55:11.062764 | orchestrator | ++ export IS_ZUUL=true 2026-03-25 01:55:11.062776 | orchestrator | ++ IS_ZUUL=true 2026-03-25 01:55:11.062788 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.44 2026-03-25 01:55:11.062801 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.44 2026-03-25 01:55:11.062813 | orchestrator | ++ export EXTERNAL_API=false 2026-03-25 01:55:11.062825 | orchestrator | ++ EXTERNAL_API=false 2026-03-25 01:55:11.062837 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-03-25 01:55:11.062849 | orchestrator | ++ IMAGE_USER=ubuntu 2026-03-25 01:55:11.062896 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-03-25 01:55:11.062909 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-03-25 01:55:11.062922 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-03-25 01:55:11.062934 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-03-25 01:55:11.062946 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-03-25 01:55:11.062959 | orchestrator | ++ export 
INTERACTIVE=false
2026-03-25 01:55:11.062971 | orchestrator | ++ INTERACTIVE=false
2026-03-25 01:55:11.062983 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-03-25 01:55:11.062999 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-03-25 01:55:11.063020 | orchestrator | + [[ 9.5.0 != \l\a\t\e\s\t ]]
2026-03-25 01:55:11.063032 | orchestrator | + /opt/configuration/scripts/set-manager-version.sh 9.5.0
2026-03-25 01:55:11.069028 | orchestrator | + set -e
2026-03-25 01:55:11.069084 | orchestrator | + VERSION=9.5.0
2026-03-25 01:55:11.069100 | orchestrator | + sed -i 's/manager_version: .*/manager_version: 9.5.0/g' /opt/configuration/environments/manager/configuration.yml
2026-03-25 01:55:11.076880 | orchestrator | + [[ 9.5.0 != \l\a\t\e\s\t ]]
2026-03-25 01:55:11.076931 | orchestrator | + sed -i /ceph_version:/d /opt/configuration/environments/manager/configuration.yml
2026-03-25 01:55:11.081442 | orchestrator | + sed -i /openstack_version:/d /opt/configuration/environments/manager/configuration.yml
2026-03-25 01:55:11.085889 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh
2026-03-25 01:55:11.095113 | orchestrator | /opt/configuration ~
2026-03-25 01:55:11.095152 | orchestrator | + set -e
2026-03-25 01:55:11.095157 | orchestrator | + pushd /opt/configuration
2026-03-25 01:55:11.095161 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-03-25 01:55:11.096558 | orchestrator | + source /opt/venv/bin/activate
2026-03-25 01:55:11.098805 | orchestrator | ++ deactivate nondestructive
2026-03-25 01:55:11.098827 | orchestrator | ++ '[' -n '' ']'
2026-03-25 01:55:11.098834 | orchestrator | ++ '[' -n '' ']'
2026-03-25 01:55:11.098851 | orchestrator | ++ hash -r
2026-03-25 01:55:11.098866 | orchestrator | ++ '[' -n '' ']'
2026-03-25 01:55:11.098871 | orchestrator | ++ unset VIRTUAL_ENV
2026-03-25 01:55:11.098875 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT
2026-03-25 01:55:11.098879 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']'
2026-03-25 01:55:11.098884 | orchestrator | ++ '[' linux-gnu = cygwin ']'
2026-03-25 01:55:11.098889 | orchestrator | ++ '[' linux-gnu = msys ']'
2026-03-25 01:55:11.098893 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv
2026-03-25 01:55:11.098897 | orchestrator | ++ VIRTUAL_ENV=/opt/venv
2026-03-25 01:55:11.098902 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-03-25 01:55:11.098907 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-03-25 01:55:11.098911 | orchestrator | ++ export PATH
2026-03-25 01:55:11.098916 | orchestrator | ++ '[' -n '' ']'
2026-03-25 01:55:11.098920 | orchestrator | ++ '[' -z '' ']'
2026-03-25 01:55:11.098924 | orchestrator | ++ _OLD_VIRTUAL_PS1=
2026-03-25 01:55:11.098928 | orchestrator | ++ PS1='(venv) '
2026-03-25 01:55:11.098932 | orchestrator | ++ export PS1
2026-03-25 01:55:11.098937 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) '
2026-03-25 01:55:11.098941 | orchestrator | ++ export VIRTUAL_ENV_PROMPT
2026-03-25 01:55:11.098946 | orchestrator | ++ hash -r
2026-03-25 01:55:11.098950 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging
2026-03-25 01:55:12.264652 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3)
2026-03-25 01:55:12.265910 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.32.5)
2026-03-25 01:55:12.267363 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6)
2026-03-25 01:55:12.268625 | orchestrator | Requirement already satisfied: PyYAML in /opt/venv/lib/python3.12/site-packages (6.0.3)
2026-03-25 01:55:12.270095 | orchestrator | Requirement already satisfied: packaging in /opt/venv/lib/python3.12/site-packages (26.0)
2026-03-25 01:55:12.285801 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.3.1)
2026-03-25 01:55:12.288141 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6)
2026-03-25 01:55:12.289532 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.20)
2026-03-25 01:55:12.291564 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2)
2026-03-25 01:55:12.328313 | orchestrator | Requirement already satisfied: charset_normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.6)
2026-03-25 01:55:12.330530 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.11)
2026-03-25 01:55:12.333157 | orchestrator | Requirement already satisfied: urllib3<3,>=1.21.1 in /opt/venv/lib/python3.12/site-packages (from requests) (2.6.3)
2026-03-25 01:55:12.334538 | orchestrator | Requirement already satisfied: certifi>=2017.4.17 in /opt/venv/lib/python3.12/site-packages (from requests) (2026.2.25)
2026-03-25 01:55:12.340355 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.3)
2026-03-25 01:55:12.544107 | orchestrator | ++ which gilt
2026-03-25 01:55:12.546596 | orchestrator | + GILT=/opt/venv/bin/gilt
2026-03-25 01:55:12.546648 | orchestrator | + /opt/venv/bin/gilt overlay
2026-03-25 01:55:12.812902 | orchestrator | osism.cfg-generics:
2026-03-25 01:55:12.968803 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/
2026-03-25 01:55:12.968922 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/
2026-03-25 01:55:12.968945 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/
2026-03-25 01:55:12.968954 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/
2026-03-25 01:55:14.077526 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/
2026-03-25 01:55:14.090103 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/
2026-03-25 01:55:14.464750 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/
2026-03-25 01:55:14.527357 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-03-25 01:55:14.527447 | orchestrator | + deactivate
2026-03-25 01:55:14.527459 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']'
2026-03-25 01:55:14.527469 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-03-25 01:55:14.527477 | orchestrator | + export PATH
2026-03-25 01:55:14.527485 | orchestrator | + unset _OLD_VIRTUAL_PATH
2026-03-25 01:55:14.527493 | orchestrator | + '[' -n '' ']'
2026-03-25 01:55:14.527502 | orchestrator | + hash -r
2026-03-25 01:55:14.527510 | orchestrator | + '[' -n '' ']'
2026-03-25 01:55:14.527517 | orchestrator | + unset VIRTUAL_ENV
2026-03-25 01:55:14.527524 | orchestrator | + unset VIRTUAL_ENV_PROMPT
2026-03-25 01:55:14.527531 | orchestrator | + '[' '!' '' = nondestructive ']'
2026-03-25 01:55:14.527548 | orchestrator | + unset -f deactivate
2026-03-25 01:55:14.527556 | orchestrator | + popd
2026-03-25 01:55:14.527563 | orchestrator | ~
2026-03-25 01:55:14.529190 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]]
2026-03-25 01:55:14.529226 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]]
2026-03-25 01:55:14.529813 | orchestrator | ++ semver 9.5.0 7.0.0
2026-03-25 01:55:14.597892 | orchestrator | + [[ 1 -ge 0 ]]
2026-03-25 01:55:14.598009 | orchestrator | + echo 'enable_osism_kubernetes: true'
2026-03-25 01:55:14.599135 | orchestrator | ++ semver 9.5.0 10.0.0-0
2026-03-25 01:55:14.665982 | orchestrator | + [[ -1 -ge 0 ]]
2026-03-25 01:55:14.666145 | orchestrator | ++ semver 2024.2 2025.1
2026-03-25 01:55:14.733320 | orchestrator | + [[ -1 -ge 0 ]]
2026-03-25 01:55:14.733451 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh
2026-03-25 01:55:14.832802 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-03-25 01:55:14.832992 | orchestrator | + source /opt/venv/bin/activate
2026-03-25 01:55:14.833010 | orchestrator | ++ deactivate nondestructive
2026-03-25 01:55:14.833019 | orchestrator | ++ '[' -n '' ']'
2026-03-25 01:55:14.833027 | orchestrator | ++ '[' -n '' ']'
2026-03-25 01:55:14.833046 | orchestrator | ++ hash -r
2026-03-25 01:55:14.833092 | orchestrator | ++ '[' -n '' ']'
2026-03-25 01:55:14.833105 | orchestrator | ++ unset VIRTUAL_ENV
2026-03-25 01:55:14.833131 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT
2026-03-25 01:55:14.833144 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']'
2026-03-25 01:55:14.833157 | orchestrator | ++ '[' linux-gnu = cygwin ']'
2026-03-25 01:55:14.833170 | orchestrator | ++ '[' linux-gnu = msys ']'
2026-03-25 01:55:14.833183 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv
2026-03-25 01:55:14.833207 | orchestrator | ++ VIRTUAL_ENV=/opt/venv
2026-03-25 01:55:14.833220 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-03-25 01:55:14.833264 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-03-25 01:55:14.833277 | orchestrator | ++ export PATH
2026-03-25 01:55:14.833285 | orchestrator | ++ '[' -n '' ']'
2026-03-25 01:55:14.833293 | orchestrator | ++ '[' -z '' ']'
2026-03-25 01:55:14.833383 | orchestrator | ++ _OLD_VIRTUAL_PS1=
2026-03-25 01:55:14.833399 | orchestrator | ++ PS1='(venv) '
2026-03-25 01:55:14.833412 | orchestrator | ++ export PS1
2026-03-25 01:55:14.833426 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) '
2026-03-25 01:55:14.833439 | orchestrator | ++ export VIRTUAL_ENV_PROMPT
2026-03-25 01:55:14.833453 | orchestrator | ++ hash -r
2026-03-25 01:55:14.833471 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml
2026-03-25 01:55:16.166405 | orchestrator |
2026-03-25 01:55:16.166522 | orchestrator | PLAY [Copy custom facts] *******************************************************
2026-03-25 01:55:16.166540 | orchestrator |
2026-03-25 01:55:16.166553 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-03-25 01:55:16.780965 | orchestrator | ok: [testbed-manager]
2026-03-25 01:55:16.781098 | orchestrator |
2026-03-25 01:55:16.781128 | orchestrator | TASK [Copy fact files] *********************************************************
2026-03-25 01:55:17.835001 | orchestrator | changed: [testbed-manager]
2026-03-25 01:55:17.835156 | orchestrator |
2026-03-25 01:55:17.835174 | orchestrator | PLAY [Before the deployment of the manager] ************************************
2026-03-25 01:55:17.835214 | orchestrator |
2026-03-25 01:55:17.835226 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-03-25 01:55:20.324664 | orchestrator | ok: [testbed-manager]
2026-03-25 01:55:20.324796 | orchestrator |
2026-03-25 01:55:20.324815 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************
2026-03-25 01:55:20.388204 | orchestrator | ok: [testbed-manager]
2026-03-25 01:55:20.388282 | orchestrator |
2026-03-25 01:55:20.388291 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] ****************************
2026-03-25 01:55:20.901835 | orchestrator | changed: [testbed-manager]
2026-03-25 01:55:20.902005 | orchestrator |
2026-03-25 01:55:20.902083 | orchestrator | TASK [Add netbox_enable parameter] *********************************************
2026-03-25 01:55:20.949810 | orchestrator | skipping: [testbed-manager]
2026-03-25 01:55:20.949945 | orchestrator |
2026-03-25 01:55:20.949964 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************
2026-03-25 01:55:21.303390 | orchestrator | changed: [testbed-manager]
2026-03-25 01:55:21.303477 | orchestrator |
2026-03-25 01:55:21.303489 | orchestrator | TASK [Check if /etc/OTC_region exist] ******************************************
2026-03-25 01:55:21.642825 | orchestrator | ok: [testbed-manager]
2026-03-25 01:55:21.642973 | orchestrator |
2026-03-25 01:55:21.642989 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************
2026-03-25 01:55:21.752455 | orchestrator | skipping: [testbed-manager]
2026-03-25 01:55:21.752579 | orchestrator |
2026-03-25 01:55:21.752606 | orchestrator | PLAY [Apply role traefik] ******************************************************
2026-03-25 01:55:21.752620 | orchestrator |
2026-03-25 01:55:21.752632 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-03-25 01:55:23.609808 | orchestrator | ok: [testbed-manager]
2026-03-25 01:55:23.609995 | orchestrator |
2026-03-25 01:55:23.610095 | orchestrator | TASK [Apply traefik role] ******************************************************
2026-03-25 01:55:23.725491 | orchestrator | included: osism.services.traefik for testbed-manager
2026-03-25 01:55:23.725591 | orchestrator |
2026-03-25 01:55:23.725607 | orchestrator | TASK [osism.services.traefik : Include config tasks] ***************************
2026-03-25 01:55:23.792730 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager
2026-03-25 01:55:23.792801 | orchestrator |
2026-03-25 01:55:23.792809 | orchestrator | TASK [osism.services.traefik : Create required directories] ********************
2026-03-25 01:55:24.972577 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik)
2026-03-25 01:55:24.972682 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates)
2026-03-25 01:55:24.972697 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration)
2026-03-25 01:55:24.972711 | orchestrator |
2026-03-25 01:55:24.972726 | orchestrator | TASK [osism.services.traefik : Copy configuration files] ***********************
2026-03-25 01:55:26.886321 | orchestrator | changed: [testbed-manager] => (item=traefik.yml)
2026-03-25 01:55:26.886412 | orchestrator | changed: [testbed-manager] => (item=traefik.env)
2026-03-25 01:55:26.886424 | orchestrator | changed: [testbed-manager] => (item=certificates.yml)
2026-03-25 01:55:26.886433 | orchestrator |
2026-03-25 01:55:26.886441 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ********************
2026-03-25 01:55:27.604834 | orchestrator | changed: [testbed-manager] => (item=None)
2026-03-25 01:55:27.605042 | orchestrator | changed: [testbed-manager]
2026-03-25 01:55:27.605067 | orchestrator |
2026-03-25 01:55:27.605189 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] *********************
2026-03-25 01:55:28.293369 | orchestrator | changed: [testbed-manager] => (item=None)
2026-03-25 01:55:28.293467 | orchestrator | changed: [testbed-manager]
2026-03-25 01:55:28.293481 | orchestrator |
2026-03-25 01:55:28.293492 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] *********************
2026-03-25 01:55:28.357053 | orchestrator | skipping: [testbed-manager]
2026-03-25 01:55:28.357179 | orchestrator |
2026-03-25 01:55:28.357204 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] *******************
2026-03-25 01:55:28.737489 | orchestrator | ok: [testbed-manager]
2026-03-25 01:55:28.737590 | orchestrator |
2026-03-25 01:55:28.737607 | orchestrator | TASK [osism.services.traefik : Include service tasks] **************************
2026-03-25 01:55:28.813621 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager
2026-03-25 01:55:28.813726 | orchestrator |
2026-03-25 01:55:28.813740 | orchestrator | TASK [osism.services.traefik : Create traefik external network] ****************
2026-03-25 01:55:29.994464 | orchestrator | changed: [testbed-manager]
2026-03-25 01:55:29.994564 | orchestrator |
2026-03-25 01:55:29.994580 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] *******************
2026-03-25 01:55:30.852950 | orchestrator | changed: [testbed-manager]
2026-03-25 01:55:30.853028 | orchestrator |
2026-03-25 01:55:30.853039 | orchestrator | TASK [osism.services.traefik : Manage traefik service] *************************
2026-03-25 01:55:42.330237 | orchestrator | changed: [testbed-manager]
2026-03-25 01:55:42.330357 | orchestrator |
2026-03-25 01:55:42.330377 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] *************
2026-03-25 01:55:42.393356 | orchestrator | skipping: [testbed-manager]
2026-03-25 01:55:42.393435 | orchestrator |
2026-03-25 01:55:42.393529 | orchestrator | PLAY [Deploy manager service] **************************************************
2026-03-25 01:55:42.393542 | orchestrator |
2026-03-25 01:55:42.393551 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-03-25 01:55:44.414535 | orchestrator | ok: [testbed-manager]
2026-03-25 01:55:44.414638 | orchestrator |
2026-03-25 01:55:44.414655 | orchestrator | TASK [Apply manager role] ******************************************************
2026-03-25 01:55:44.557481 | orchestrator | included: osism.services.manager for testbed-manager
2026-03-25 01:55:44.557581 | orchestrator |
2026-03-25 01:55:44.557602 | orchestrator | TASK [osism.services.manager : Include install tasks] **************************
2026-03-25 01:55:44.626391 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager
2026-03-25 01:55:44.626491 | orchestrator |
2026-03-25 01:55:44.626507 | orchestrator | TASK [osism.services.manager : Install required packages] **********************
2026-03-25 01:55:47.345691 | orchestrator | ok: [testbed-manager]
2026-03-25 01:55:47.345822 | orchestrator |
2026-03-25 01:55:47.345842 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] *****
2026-03-25 01:55:47.403517 | orchestrator | ok: [testbed-manager]
2026-03-25 01:55:47.403606 | orchestrator |
2026-03-25 01:55:47.403617 | orchestrator | TASK [osism.services.manager : Include config tasks] ***************************
2026-03-25 01:55:47.571438 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager
2026-03-25 01:55:47.571548 | orchestrator |
2026-03-25 01:55:47.571567 | orchestrator | TASK [osism.services.manager : Create required directories] ********************
2026-03-25 01:55:50.629830 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible)
2026-03-25 01:55:50.629966 | orchestrator | changed: [testbed-manager] => (item=/opt/archive)
2026-03-25 01:55:50.629978 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration)
2026-03-25 01:55:50.629988 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data)
2026-03-25 01:55:50.629996 | orchestrator | ok: [testbed-manager] => (item=/opt/manager)
2026-03-25 01:55:50.630006 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets)
2026-03-25 01:55:50.630076 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets)
2026-03-25 01:55:50.630085 | orchestrator | changed: [testbed-manager] => (item=/opt/state)
2026-03-25 01:55:50.630093 | orchestrator |
2026-03-25 01:55:50.630102 | orchestrator | TASK [osism.services.manager : Copy all environment file] **********************
2026-03-25 01:55:51.362754 | orchestrator | changed: [testbed-manager]
2026-03-25 01:55:51.362848 | orchestrator |
2026-03-25 01:55:51.362925 | orchestrator | TASK [osism.services.manager : Copy client environment file] *******************
2026-03-25 01:55:52.080392 | orchestrator | changed: [testbed-manager]
2026-03-25 01:55:52.080527 | orchestrator |
2026-03-25 01:55:52.080552 | orchestrator | TASK [osism.services.manager : Include ara config tasks] ***********************
2026-03-25 01:55:52.166317 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager
2026-03-25 01:55:52.166433 | orchestrator |
2026-03-25 01:55:52.166450 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] *********************
2026-03-25 01:55:53.519784 | orchestrator | changed: [testbed-manager] => (item=ara)
2026-03-25 01:55:53.519937 | orchestrator | changed: [testbed-manager] => (item=ara-server)
2026-03-25 01:55:53.519954 | orchestrator |
2026-03-25 01:55:53.519965 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ******************
2026-03-25 01:55:54.184508 | orchestrator | changed: [testbed-manager]
2026-03-25 01:55:54.184612 | orchestrator |
2026-03-25 01:55:54.184629 | orchestrator | TASK [osism.services.manager : Include vault config tasks] *********************
2026-03-25 01:55:54.240834 | orchestrator | skipping: [testbed-manager]
2026-03-25 01:55:54.240987 | orchestrator |
2026-03-25 01:55:54.241004 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ******************
2026-03-25 01:55:54.332937 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager
2026-03-25 01:55:54.333052 | orchestrator |
2026-03-25 01:55:54.333079 | orchestrator | TASK [osism.services.manager : Copy frontend environment file] *****************
2026-03-25 01:55:54.989746 | orchestrator | changed: [testbed-manager]
2026-03-25 01:55:54.989853 | orchestrator |
2026-03-25 01:55:54.989922 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] *******************
2026-03-25 01:55:55.061548 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager
2026-03-25 01:55:55.061649 | orchestrator |
2026-03-25 01:55:55.061666 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] **************************
2026-03-25 01:55:56.487180 | orchestrator | changed: [testbed-manager] => (item=None)
2026-03-25 01:55:56.487273 | orchestrator | changed: [testbed-manager] => (item=None)
2026-03-25 01:55:56.487283 | orchestrator | changed: [testbed-manager]
2026-03-25 01:55:56.487293 | orchestrator |
2026-03-25 01:55:56.487302 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ******************
2026-03-25 01:55:57.185630 | orchestrator | changed: [testbed-manager]
2026-03-25 01:55:57.185734 | orchestrator |
2026-03-25 01:55:57.185751 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ********************
2026-03-25 01:55:57.249736 | orchestrator | skipping: [testbed-manager]
2026-03-25 01:55:57.249834 | orchestrator |
2026-03-25 01:55:57.249853 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ********************
2026-03-25 01:55:57.366312 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager
2026-03-25 01:55:57.366383 | orchestrator |
2026-03-25 01:55:57.366389 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] ****************
2026-03-25 01:55:57.958443 | orchestrator | changed: [testbed-manager]
2026-03-25 01:55:57.958547 | orchestrator |
2026-03-25 01:55:57.958558 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] **************
2026-03-25 01:55:58.391160 | orchestrator | changed: [testbed-manager]
2026-03-25 01:55:58.391283 | orchestrator |
2026-03-25 01:55:58.391300 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ******************
2026-03-25 01:55:59.727990 | orchestrator | changed: [testbed-manager] => (item=conductor)
2026-03-25 01:55:59.728111 | orchestrator | changed: [testbed-manager] => (item=openstack)
2026-03-25 01:55:59.728128 | orchestrator |
2026-03-25 01:55:59.728140 | orchestrator | TASK [osism.services.manager : Copy listener environment file] *****************
2026-03-25 01:56:00.438570 | orchestrator | changed: [testbed-manager]
2026-03-25 01:56:00.438660 | orchestrator |
2026-03-25 01:56:00.438672 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************
2026-03-25 01:56:00.850231 | orchestrator | ok: [testbed-manager]
2026-03-25 01:56:00.850337 | orchestrator |
2026-03-25 01:56:00.850354 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] **************
2026-03-25 01:56:01.229292 | orchestrator | changed: [testbed-manager]
2026-03-25 01:56:01.229391 | orchestrator |
2026-03-25 01:56:01.229405 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ********
2026-03-25 01:56:01.284596 | orchestrator | skipping: [testbed-manager]
2026-03-25 01:56:01.284697 | orchestrator |
2026-03-25 01:56:01.284713 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] *******************
2026-03-25 01:56:01.367998 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager
2026-03-25 01:56:01.368139 | orchestrator |
2026-03-25 01:56:01.368157 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] **********************
2026-03-25 01:56:01.423508 | orchestrator | ok: [testbed-manager]
2026-03-25 01:56:01.423594 | orchestrator |
2026-03-25 01:56:01.423606 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] ***************************
2026-03-25 01:56:03.582817 | orchestrator | changed: [testbed-manager] => (item=osism)
2026-03-25 01:56:03.582988 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker)
2026-03-25 01:56:03.583018 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager)
2026-03-25 01:56:03.583037 | orchestrator |
2026-03-25 01:56:03.583059 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] *********************
2026-03-25 01:56:04.328513 | orchestrator | changed: [testbed-manager]
2026-03-25 01:56:04.328614 | orchestrator |
2026-03-25 01:56:04.328628 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] *********************
2026-03-25 01:56:05.094307 | orchestrator | changed: [testbed-manager]
2026-03-25 01:56:05.094418 | orchestrator |
2026-03-25 01:56:05.094436 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] ***********************
2026-03-25 01:56:05.820527 | orchestrator | changed: [testbed-manager]
2026-03-25 01:56:05.820627 | orchestrator |
2026-03-25 01:56:05.820644 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] *******************
2026-03-25 01:56:05.905050 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager
2026-03-25 01:56:05.905156 | orchestrator |
2026-03-25 01:56:05.905174 | orchestrator | TASK [osism.services.manager : Include scripts vars file] **********************
2026-03-25 01:56:05.960497 | orchestrator | ok: [testbed-manager]
2026-03-25 01:56:05.960616 | orchestrator |
2026-03-25 01:56:05.960636 | orchestrator | TASK [osism.services.manager : Copy scripts] ***********************************
2026-03-25 01:56:06.752601 | orchestrator | changed: [testbed-manager] => (item=osism-include)
2026-03-25 01:56:06.752736 | orchestrator |
2026-03-25 01:56:06.752762 | orchestrator | TASK [osism.services.manager : Include service tasks] **************************
2026-03-25 01:56:06.856136 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager
2026-03-25 01:56:06.856271 | orchestrator |
2026-03-25 01:56:06.856301 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] *****************
2026-03-25 01:56:07.611601 | orchestrator | changed: [testbed-manager]
2026-03-25 01:56:07.611697 | orchestrator |
2026-03-25 01:56:07.611709 | orchestrator | TASK [osism.services.manager : Create traefik external network] ****************
2026-03-25 01:56:08.304216 | orchestrator | ok: [testbed-manager]
2026-03-25 01:56:08.304336 | orchestrator |
2026-03-25 01:56:08.304359 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] ***
2026-03-25 01:56:08.367025 | orchestrator | skipping: [testbed-manager]
2026-03-25 01:56:08.367126 | orchestrator |
2026-03-25 01:56:08.367142 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] ***
2026-03-25 01:56:08.428063 | orchestrator | ok: [testbed-manager]
2026-03-25 01:56:08.428157 | orchestrator |
2026-03-25 01:56:08.428170 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] *******************
2026-03-25 01:56:09.324377 | orchestrator | changed: [testbed-manager]
2026-03-25 01:56:09.324509 | orchestrator |
2026-03-25 01:56:09.324528 | orchestrator | TASK [osism.services.manager : Pull container images] **************************
2026-03-25 01:57:26.473290 | orchestrator | changed: [testbed-manager]
2026-03-25 01:57:26.473406 | orchestrator |
2026-03-25 01:57:26.473421 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] ***
2026-03-25 01:57:27.523830 | orchestrator | ok: [testbed-manager]
2026-03-25 01:57:27.524006 | orchestrator |
2026-03-25 01:57:27.524025 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] *******
2026-03-25 01:57:27.563698 | orchestrator | skipping: [testbed-manager]
2026-03-25 01:57:27.563796 | orchestrator |
2026-03-25 01:57:27.563803 | orchestrator | TASK [osism.services.manager : Manage manager service] *************************
2026-03-25 01:57:30.806989 | orchestrator | changed: [testbed-manager]
2026-03-25 01:57:30.807118 | orchestrator |
2026-03-25 01:57:30.807131 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ******
2026-03-25 01:57:30.935122 | orchestrator | ok: [testbed-manager]
2026-03-25 01:57:30.935281 | orchestrator |
2026-03-25 01:57:30.935302 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2026-03-25 01:57:30.935315 | orchestrator |
2026-03-25 01:57:30.935327 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] *************
2026-03-25 01:57:31.010534 | orchestrator | skipping: [testbed-manager]
2026-03-25 01:57:31.010667 | orchestrator |
2026-03-25 01:57:31.010687 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] ***
2026-03-25 01:58:31.071002 | orchestrator | Pausing for 60 seconds
2026-03-25 01:58:31.071125 | orchestrator | changed: [testbed-manager]
2026-03-25 01:58:31.071142 | orchestrator |
2026-03-25 01:58:31.071156 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] ***
2026-03-25 01:58:34.256541 | orchestrator | changed: [testbed-manager]
2026-03-25 01:58:34.256653 | orchestrator |
2026-03-25 01:58:34.256682 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] ***
2026-03-25 01:59:36.434630 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left).
2026-03-25 01:59:36.434745 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left).
2026-03-25 01:59:36.434789 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (48 retries left).
2026-03-25 01:59:36.434802 | orchestrator | changed: [testbed-manager]
2026-03-25 01:59:36.434813 | orchestrator |
2026-03-25 01:59:36.434824 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] ***
2026-03-25 01:59:48.035705 | orchestrator | changed: [testbed-manager]
2026-03-25 01:59:48.035842 | orchestrator |
2026-03-25 01:59:48.035870 | orchestrator | TASK [osism.services.manager : Include initialize tasks] ***********************
2026-03-25 01:59:48.117034 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager
2026-03-25 01:59:48.117152 | orchestrator |
2026-03-25 01:59:48.117180 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2026-03-25 01:59:48.117198 | orchestrator |
2026-03-25 01:59:48.117284 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] *****************
2026-03-25 01:59:48.181998 | orchestrator | skipping: [testbed-manager]
2026-03-25 01:59:48.182146 | orchestrator |
2026-03-25 01:59:48.182168 | orchestrator | TASK [osism.services.manager : Include version verification tasks] *************
2026-03-25 01:59:48.264395 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/verify-versions.yml for testbed-manager
2026-03-25 01:59:48.264506 | orchestrator |
2026-03-25 01:59:48.264523 | orchestrator | TASK [osism.services.manager : Deploy service manager version check script] ****
2026-03-25 01:59:49.053455 | orchestrator | changed: [testbed-manager]
2026-03-25 01:59:49.053551 | orchestrator |
2026-03-25 01:59:49.053564 | orchestrator | TASK [osism.services.manager : Execute service manager version check] **********
2026-03-25 01:59:52.700170 | orchestrator | ok: [testbed-manager]
2026-03-25 01:59:52.700293 | orchestrator |
2026-03-25 01:59:52.700322 | orchestrator | TASK [osism.services.manager : Display version check results] ******************
2026-03-25 01:59:52.786273 | orchestrator | ok: [testbed-manager] => {
2026-03-25 01:59:52.786387 | orchestrator |     "version_check_result.stdout_lines": [
2026-03-25 01:59:52.786413 | orchestrator |         "=== OSISM Container Version Check ===",
2026-03-25 01:59:52.786429 | orchestrator |         "Checking running containers against expected versions...",
2026-03-25 01:59:52.786446 | orchestrator |         "",
2026-03-25 01:59:52.786461 | orchestrator |         "Checking service: inventory_reconciler (Inventory Reconciler Service)",
2026-03-25 01:59:52.786477 | orchestrator |         "  Expected: registry.osism.tech/osism/inventory-reconciler:0.20251130.0",
2026-03-25 01:59:52.786495 | orchestrator |         "  Enabled: true",
2026-03-25 01:59:52.786513 | orchestrator |         "  Running: registry.osism.tech/osism/inventory-reconciler:0.20251130.0",
2026-03-25 01:59:52.786530 | orchestrator |         "  Status: ✅ MATCH",
2026-03-25 01:59:52.786548 | orchestrator |         "",
2026-03-25 01:59:52.786564 | orchestrator |         "Checking service: osism-ansible (OSISM Ansible Service)",
2026-03-25 01:59:52.786614 | orchestrator |         "  Expected: registry.osism.tech/osism/osism-ansible:0.20251130.0",
2026-03-25 01:59:52.786626 | orchestrator |         "  Enabled: true",
2026-03-25 01:59:52.786636 | orchestrator |         "  Running: registry.osism.tech/osism/osism-ansible:0.20251130.0",
2026-03-25 01:59:52.786646 | orchestrator |         "  Status: ✅ MATCH",
2026-03-25 01:59:52.786656 | orchestrator |         "",
2026-03-25 01:59:52.786666 | orchestrator |         "Checking service: osism-kubernetes (Osism-Kubernetes Service)",
2026-03-25 01:59:52.786676 | orchestrator |         "  Expected: registry.osism.tech/osism/osism-kubernetes:0.20251130.0",
2026-03-25 01:59:52.786686 | orchestrator |         "  Enabled: true",
2026-03-25 01:59:52.786696 | orchestrator |         "  Running: registry.osism.tech/osism/osism-kubernetes:0.20251130.0",
2026-03-25 01:59:52.786705 | orchestrator |         "  Status: ✅ MATCH",
2026-03-25 01:59:52.786715 | orchestrator |         "",
2026-03-25 01:59:52.786725 | orchestrator |         "Checking service: ceph-ansible (Ceph-Ansible Service)",
2026-03-25 01:59:52.786735 | orchestrator |         "  Expected: registry.osism.tech/osism/ceph-ansible:0.20251130.0",
2026-03-25 01:59:52.786744 | orchestrator |         "  Enabled: true",
2026-03-25 01:59:52.786754 | orchestrator |         "  Running: registry.osism.tech/osism/ceph-ansible:0.20251130.0",
2026-03-25 01:59:52.786764 | orchestrator |         "  Status: ✅ MATCH",
2026-03-25 01:59:52.786773 | orchestrator |         "",
2026-03-25 01:59:52.786785 | orchestrator |         "Checking service: kolla-ansible (Kolla-Ansible Service)",
2026-03-25 01:59:52.786795 | orchestrator |         "  Expected: registry.osism.tech/osism/kolla-ansible:0.20251130.0",
2026-03-25 01:59:52.786805 | orchestrator |         "  Enabled: true",
2026-03-25 01:59:52.786816 | orchestrator |         "  Running: registry.osism.tech/osism/kolla-ansible:0.20251130.0",
2026-03-25 01:59:52.786827 | orchestrator |         "  Status: ✅ MATCH",
2026-03-25 01:59:52.786838 | orchestrator |         "",
2026-03-25 01:59:52.786850 | orchestrator |         "Checking service: osismclient (OSISM Client)",
2026-03-25 01:59:52.786860 | orchestrator |         "  Expected: registry.osism.tech/osism/osism:0.20251130.1",
2026-03-25 01:59:52.786871 | orchestrator |         "  Enabled: true",
2026-03-25 01:59:52.786883 | orchestrator |         "  Running: registry.osism.tech/osism/osism:0.20251130.1",
2026-03-25 01:59:52.786894 | orchestrator |         "  Status: ✅ MATCH",
2026-03-25 01:59:52.786905 | orchestrator |         "",
2026-03-25 01:59:52.786916 | orchestrator |         "Checking service: ara-server (ARA Server)",
2026-03-25 01:59:52.786928 | orchestrator |         "  Expected: registry.osism.tech/osism/ara-server:1.7.3",
2026-03-25 01:59:52.786939 | orchestrator |         "  Enabled: true",
2026-03-25 01:59:52.786950 | orchestrator |         "  Running: registry.osism.tech/osism/ara-server:1.7.3",
2026-03-25 01:59:52.787000 | orchestrator |         "  Status: ✅ MATCH",
2026-03-25 01:59:52.787012 | orchestrator |         "",
2026-03-25 01:59:52.787023 | orchestrator |         "Checking service: mariadb (MariaDB for ARA)",
2026-03-25 01:59:52.787034 | orchestrator |         "  Expected: registry.osism.tech/dockerhub/library/mariadb:11.8.4",
2026-03-25 01:59:52.787045 | orchestrator |         "  Enabled: true",
2026-03-25 01:59:52.787056 | orchestrator |         "  Running: registry.osism.tech/dockerhub/library/mariadb:11.8.4",
2026-03-25 01:59:52.787067 | orchestrator |         "  Status: ✅ MATCH",
2026-03-25 01:59:52.787078 | orchestrator |         "",
2026-03-25 01:59:52.787089 | orchestrator |         "Checking service: frontend (OSISM Frontend)",
2026-03-25 01:59:52.787100 | orchestrator |         "  Expected: registry.osism.tech/osism/osism-frontend:0.20251130.1",
2026-03-25 01:59:52.787110 | orchestrator |         "  Enabled: true",
2026-03-25 01:59:52.787122 | orchestrator |         "  Running: registry.osism.tech/osism/osism-frontend:0.20251130.1",
2026-03-25 01:59:52.787133 | orchestrator |         "  Status: ✅ MATCH",
2026-03-25 01:59:52.787144 | orchestrator |         "",
2026-03-25 01:59:52.787155 | orchestrator |         "Checking service: redis (Redis Cache)",
2026-03-25 01:59:52.787166 | orchestrator |         "  Expected: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine",
2026-03-25 01:59:52.787177 | orchestrator |         "  Enabled: true",
2026-03-25 01:59:52.787186 | orchestrator |         "  Running: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine",
2026-03-25 01:59:52.787196 | orchestrator |         "  Status: ✅ MATCH",
2026-03-25 01:59:52.787206 | orchestrator |         "",
2026-03-25 01:59:52.787215 | orchestrator |         "Checking service: api (OSISM API Service)",
2026-03-25 01:59:52.787232 | orchestrator |         "  Expected: registry.osism.tech/osism/osism:0.20251130.1",
2026-03-25 01:59:52.787242 | orchestrator |         "  Enabled: true",
2026-03-25 01:59:52.787252 | orchestrator |         "  Running: registry.osism.tech/osism/osism:0.20251130.1",
2026-03-25 01:59:52.787262 | orchestrator |         "  Status: ✅ MATCH",
2026-03-25 01:59:52.787272 | orchestrator |         "",
2026-03-25 01:59:52.787282 | orchestrator |         "Checking service: listener (OpenStack Event Listener)",
2026-03-25 01:59:52.787291 |
orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-03-25 01:59:52.787301 | orchestrator | " Enabled: true", 2026-03-25 01:59:52.787310 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-03-25 01:59:52.787321 | orchestrator | " Status: ✅ MATCH", 2026-03-25 01:59:52.787331 | orchestrator | "", 2026-03-25 01:59:52.787341 | orchestrator | "Checking service: openstack (OpenStack Integration)", 2026-03-25 01:59:52.787350 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-03-25 01:59:52.787360 | orchestrator | " Enabled: true", 2026-03-25 01:59:52.787370 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-03-25 01:59:52.787379 | orchestrator | " Status: ✅ MATCH", 2026-03-25 01:59:52.787389 | orchestrator | "", 2026-03-25 01:59:52.787399 | orchestrator | "Checking service: beat (Celery Beat Scheduler)", 2026-03-25 01:59:52.787408 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-03-25 01:59:52.787418 | orchestrator | " Enabled: true", 2026-03-25 01:59:52.787428 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-03-25 01:59:52.787457 | orchestrator | " Status: ✅ MATCH", 2026-03-25 01:59:52.787468 | orchestrator | "", 2026-03-25 01:59:52.787478 | orchestrator | "Checking service: flower (Celery Flower Monitor)", 2026-03-25 01:59:52.787487 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-03-25 01:59:52.787506 | orchestrator | " Enabled: true", 2026-03-25 01:59:52.787516 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-03-25 01:59:52.787525 | orchestrator | " Status: ✅ MATCH", 2026-03-25 01:59:52.787535 | orchestrator | "", 2026-03-25 01:59:52.787545 | orchestrator | "=== Summary ===", 2026-03-25 01:59:52.787555 | orchestrator | "Errors (version mismatches): 0", 2026-03-25 01:59:52.787564 | orchestrator | "Warnings (expected containers not 
running): 0", 2026-03-25 01:59:52.787574 | orchestrator | "", 2026-03-25 01:59:52.787584 | orchestrator | "✅ All running containers match expected versions!" 2026-03-25 01:59:52.787594 | orchestrator | ] 2026-03-25 01:59:52.787604 | orchestrator | } 2026-03-25 01:59:52.787614 | orchestrator | 2026-03-25 01:59:52.787624 | orchestrator | TASK [osism.services.manager : Skip version check due to service configuration] *** 2026-03-25 01:59:52.852397 | orchestrator | skipping: [testbed-manager] 2026-03-25 01:59:52.852492 | orchestrator | 2026-03-25 01:59:52.852508 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-25 01:59:52.852521 | orchestrator | testbed-manager : ok=70 changed=37 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0 2026-03-25 01:59:52.852534 | orchestrator | 2026-03-25 01:59:52.964378 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-03-25 01:59:52.964475 | orchestrator | + deactivate 2026-03-25 01:59:52.964490 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2026-03-25 01:59:52.964503 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-03-25 01:59:52.964515 | orchestrator | + export PATH 2026-03-25 01:59:52.964526 | orchestrator | + unset _OLD_VIRTUAL_PATH 2026-03-25 01:59:52.964537 | orchestrator | + '[' -n '' ']' 2026-03-25 01:59:52.964548 | orchestrator | + hash -r 2026-03-25 01:59:52.964559 | orchestrator | + '[' -n '' ']' 2026-03-25 01:59:52.964570 | orchestrator | + unset VIRTUAL_ENV 2026-03-25 01:59:52.964581 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2026-03-25 01:59:52.964592 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2026-03-25 01:59:52.964603 | orchestrator | + unset -f deactivate 2026-03-25 01:59:52.964614 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub 2026-03-25 01:59:52.971190 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-03-25 01:59:52.971244 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2026-03-25 01:59:52.971283 | orchestrator | + local max_attempts=60 2026-03-25 01:59:52.971295 | orchestrator | + local name=ceph-ansible 2026-03-25 01:59:52.971306 | orchestrator | + local attempt_num=1 2026-03-25 01:59:52.972043 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-25 01:59:53.005697 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-25 01:59:53.005791 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-03-25 01:59:53.005805 | orchestrator | + local max_attempts=60 2026-03-25 01:59:53.005814 | orchestrator | + local name=kolla-ansible 2026-03-25 01:59:53.005821 | orchestrator | + local attempt_num=1 2026-03-25 01:59:53.006111 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-03-25 01:59:53.040889 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-25 01:59:53.041007 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2026-03-25 01:59:53.041022 | orchestrator | + local max_attempts=60 2026-03-25 01:59:53.041034 | orchestrator | + local name=osism-ansible 2026-03-25 01:59:53.041045 | orchestrator | + local attempt_num=1 2026-03-25 01:59:53.041415 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-03-25 01:59:53.082755 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-25 01:59:53.082847 | orchestrator | + [[ true == \t\r\u\e ]] 2026-03-25 01:59:53.082861 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2026-03-25 01:59:53.845810 | orchestrator | + docker compose 
--project-directory /opt/manager ps 2026-03-25 01:59:54.062012 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2026-03-25 01:59:54.062109 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:0.20251130.0 "/entrypoint.sh osis…" ceph-ansible 2 minutes ago Up About a minute (healthy) 2026-03-25 01:59:54.062117 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:0.20251130.0 "/entrypoint.sh osis…" kolla-ansible 2 minutes ago Up About a minute (healthy) 2026-03-25 01:59:54.062122 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" api 2 minutes ago Up 2 minutes (healthy) 192.168.16.5:8000->8000/tcp 2026-03-25 01:59:54.062133 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server 2 minutes ago Up 2 minutes (healthy) 8000/tcp 2026-03-25 01:59:54.062153 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" beat 2 minutes ago Up 2 minutes (healthy) 2026-03-25 01:59:54.062157 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" flower 2 minutes ago Up 2 minutes (healthy) 2026-03-25 01:59:54.062171 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:0.20251130.0 "/sbin/tini -- /entr…" inventory_reconciler 2 minutes ago Up About a minute (healthy) 2026-03-25 01:59:54.062176 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" listener 2 minutes ago Up 2 minutes (healthy) 2026-03-25 01:59:54.062180 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" mariadb 2 minutes ago Up 2 minutes (healthy) 3306/tcp 2026-03-25 01:59:54.062184 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" openstack 2 minutes ago Up 2 minutes (healthy) 
2026-03-25 01:59:54.062188 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" redis 2 minutes ago Up 2 minutes (healthy) 6379/tcp 2026-03-25 01:59:54.062192 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:0.20251130.0 "/entrypoint.sh osis…" osism-ansible 2 minutes ago Up About a minute (healthy) 2026-03-25 01:59:54.062211 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:0.20251130.1 "docker-entrypoint.s…" frontend 2 minutes ago Up 2 minutes 192.168.16.5:3000->3000/tcp 2026-03-25 01:59:54.062215 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:0.20251130.0 "/entrypoint.sh osis…" osism-kubernetes 2 minutes ago Up About a minute (healthy) 2026-03-25 01:59:54.062220 | orchestrator | osismclient registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- sleep…" osismclient 2 minutes ago Up 2 minutes (healthy) 2026-03-25 01:59:54.069897 | orchestrator | ++ semver 9.5.0 7.0.0 2026-03-25 01:59:54.118887 | orchestrator | + [[ 1 -ge 0 ]] 2026-03-25 01:59:54.119034 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2026-03-25 01:59:54.123116 | orchestrator | + osism apply resolvconf -l testbed-manager 2026-03-25 02:00:06.483414 | orchestrator | 2026-03-25 02:00:06 | INFO  | Task 2a0b0ff3-33b2-42e2-91e6-511614c70f0c (resolvconf) was prepared for execution. 2026-03-25 02:00:06.483523 | orchestrator | 2026-03-25 02:00:06 | INFO  | It takes a moment until task 2a0b0ff3-33b2-42e2-91e6-511614c70f0c (resolvconf) has been started and output is visible here. 
2026-03-25 02:00:21.725580 | orchestrator | 2026-03-25 02:00:21.725661 | orchestrator | PLAY [Apply role resolvconf] *************************************************** 2026-03-25 02:00:21.725668 | orchestrator | 2026-03-25 02:00:21.725673 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-25 02:00:21.725678 | orchestrator | Wednesday 25 March 2026 02:00:11 +0000 (0:00:00.147) 0:00:00.147 ******* 2026-03-25 02:00:21.725682 | orchestrator | ok: [testbed-manager] 2026-03-25 02:00:21.725687 | orchestrator | 2026-03-25 02:00:21.725692 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2026-03-25 02:00:21.725697 | orchestrator | Wednesday 25 March 2026 02:00:15 +0000 (0:00:04.117) 0:00:04.265 ******* 2026-03-25 02:00:21.725701 | orchestrator | skipping: [testbed-manager] 2026-03-25 02:00:21.725706 | orchestrator | 2026-03-25 02:00:21.725710 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2026-03-25 02:00:21.725714 | orchestrator | Wednesday 25 March 2026 02:00:15 +0000 (0:00:00.063) 0:00:04.328 ******* 2026-03-25 02:00:21.725718 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager 2026-03-25 02:00:21.725723 | orchestrator | 2026-03-25 02:00:21.725727 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2026-03-25 02:00:21.725730 | orchestrator | Wednesday 25 March 2026 02:00:15 +0000 (0:00:00.085) 0:00:04.414 ******* 2026-03-25 02:00:21.725747 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager 2026-03-25 02:00:21.725751 | orchestrator | 2026-03-25 02:00:21.725755 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring 
/etc/resolv.conf] *** 2026-03-25 02:00:21.725759 | orchestrator | Wednesday 25 March 2026 02:00:15 +0000 (0:00:00.087) 0:00:04.501 ******* 2026-03-25 02:00:21.725763 | orchestrator | ok: [testbed-manager] 2026-03-25 02:00:21.725766 | orchestrator | 2026-03-25 02:00:21.725770 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2026-03-25 02:00:21.725774 | orchestrator | Wednesday 25 March 2026 02:00:16 +0000 (0:00:01.227) 0:00:05.728 ******* 2026-03-25 02:00:21.725778 | orchestrator | skipping: [testbed-manager] 2026-03-25 02:00:21.725782 | orchestrator | 2026-03-25 02:00:21.725786 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2026-03-25 02:00:21.725790 | orchestrator | Wednesday 25 March 2026 02:00:16 +0000 (0:00:00.077) 0:00:05.806 ******* 2026-03-25 02:00:21.725806 | orchestrator | ok: [testbed-manager] 2026-03-25 02:00:21.725810 | orchestrator | 2026-03-25 02:00:21.725814 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2026-03-25 02:00:21.725818 | orchestrator | Wednesday 25 March 2026 02:00:17 +0000 (0:00:00.550) 0:00:06.357 ******* 2026-03-25 02:00:21.725822 | orchestrator | skipping: [testbed-manager] 2026-03-25 02:00:21.725825 | orchestrator | 2026-03-25 02:00:21.725829 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2026-03-25 02:00:21.725834 | orchestrator | Wednesday 25 March 2026 02:00:17 +0000 (0:00:00.089) 0:00:06.446 ******* 2026-03-25 02:00:21.725838 | orchestrator | changed: [testbed-manager] 2026-03-25 02:00:21.725842 | orchestrator | 2026-03-25 02:00:21.725846 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2026-03-25 02:00:21.725849 | orchestrator | Wednesday 25 March 2026 02:00:17 +0000 (0:00:00.561) 0:00:07.007 ******* 2026-03-25 02:00:21.725853 | orchestrator | changed: 
[testbed-manager] 2026-03-25 02:00:21.725857 | orchestrator | 2026-03-25 02:00:21.725861 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2026-03-25 02:00:21.725865 | orchestrator | Wednesday 25 March 2026 02:00:19 +0000 (0:00:01.154) 0:00:08.162 ******* 2026-03-25 02:00:21.725869 | orchestrator | ok: [testbed-manager] 2026-03-25 02:00:21.725875 | orchestrator | 2026-03-25 02:00:21.725881 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2026-03-25 02:00:21.725887 | orchestrator | Wednesday 25 March 2026 02:00:20 +0000 (0:00:01.024) 0:00:09.186 ******* 2026-03-25 02:00:21.725892 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager 2026-03-25 02:00:21.725898 | orchestrator | 2026-03-25 02:00:21.725904 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2026-03-25 02:00:21.725911 | orchestrator | Wednesday 25 March 2026 02:00:20 +0000 (0:00:00.099) 0:00:09.285 ******* 2026-03-25 02:00:21.725916 | orchestrator | changed: [testbed-manager] 2026-03-25 02:00:21.725922 | orchestrator | 2026-03-25 02:00:21.725928 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-25 02:00:21.725935 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-25 02:00:21.725942 | orchestrator | 2026-03-25 02:00:21.725948 | orchestrator | 2026-03-25 02:00:21.725954 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-25 02:00:21.725960 | orchestrator | Wednesday 25 March 2026 02:00:21 +0000 (0:00:01.239) 0:00:10.524 ******* 2026-03-25 02:00:21.725986 | orchestrator | =============================================================================== 2026-03-25 02:00:21.725992 | 
orchestrator | Gathering Facts --------------------------------------------------------- 4.12s 2026-03-25 02:00:21.725999 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.24s 2026-03-25 02:00:21.726005 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.23s 2026-03-25 02:00:21.726011 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.15s 2026-03-25 02:00:21.726087 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 1.02s 2026-03-25 02:00:21.726094 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.56s 2026-03-25 02:00:21.726115 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.55s 2026-03-25 02:00:21.726121 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.10s 2026-03-25 02:00:21.726127 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.09s 2026-03-25 02:00:21.726133 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.09s 2026-03-25 02:00:21.726139 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.09s 2026-03-25 02:00:21.726145 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.08s 2026-03-25 02:00:21.726159 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.06s 2026-03-25 02:00:22.097077 | orchestrator | + osism apply sshconfig 2026-03-25 02:00:34.243763 | orchestrator | 2026-03-25 02:00:34 | INFO  | Task cbacbcef-807a-46c8-9832-e1779e9c5660 (sshconfig) was prepared for execution. 
2026-03-25 02:00:34.243857 | orchestrator | 2026-03-25 02:00:34 | INFO  | It takes a moment until task cbacbcef-807a-46c8-9832-e1779e9c5660 (sshconfig) has been started and output is visible here. 2026-03-25 02:00:46.913786 | orchestrator | 2026-03-25 02:00:46.913936 | orchestrator | PLAY [Apply role sshconfig] **************************************************** 2026-03-25 02:00:46.913952 | orchestrator | 2026-03-25 02:00:46.914096 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] *********** 2026-03-25 02:00:46.914111 | orchestrator | Wednesday 25 March 2026 02:00:38 +0000 (0:00:00.171) 0:00:00.171 ******* 2026-03-25 02:00:46.914122 | orchestrator | ok: [testbed-manager] 2026-03-25 02:00:46.914133 | orchestrator | 2026-03-25 02:00:46.914144 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ******************** 2026-03-25 02:00:46.914154 | orchestrator | Wednesday 25 March 2026 02:00:39 +0000 (0:00:00.563) 0:00:00.735 ******* 2026-03-25 02:00:46.914164 | orchestrator | changed: [testbed-manager] 2026-03-25 02:00:46.914176 | orchestrator | 2026-03-25 02:00:46.914186 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] ************* 2026-03-25 02:00:46.914196 | orchestrator | Wednesday 25 March 2026 02:00:39 +0000 (0:00:00.556) 0:00:01.291 ******* 2026-03-25 02:00:46.914206 | orchestrator | changed: [testbed-manager] => (item=testbed-manager) 2026-03-25 02:00:46.914219 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2026-03-25 02:00:46.914236 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1) 2026-03-25 02:00:46.914253 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2) 2026-03-25 02:00:46.914292 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3) 2026-03-25 02:00:46.914310 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4) 2026-03-25 02:00:46.914325 | orchestrator | changed: 
[testbed-manager] => (item=testbed-node-5) 2026-03-25 02:00:46.914341 | orchestrator | 2026-03-25 02:00:46.914357 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ****************************** 2026-03-25 02:00:46.914373 | orchestrator | Wednesday 25 March 2026 02:00:45 +0000 (0:00:05.977) 0:00:07.268 ******* 2026-03-25 02:00:46.914389 | orchestrator | skipping: [testbed-manager] 2026-03-25 02:00:46.914405 | orchestrator | 2026-03-25 02:00:46.914421 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] *************************** 2026-03-25 02:00:46.914438 | orchestrator | Wednesday 25 March 2026 02:00:46 +0000 (0:00:00.098) 0:00:07.367 ******* 2026-03-25 02:00:46.914455 | orchestrator | changed: [testbed-manager] 2026-03-25 02:00:46.914472 | orchestrator | 2026-03-25 02:00:46.914489 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-25 02:00:46.914506 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-25 02:00:46.914519 | orchestrator | 2026-03-25 02:00:46.914530 | orchestrator | 2026-03-25 02:00:46.914541 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-25 02:00:46.914553 | orchestrator | Wednesday 25 March 2026 02:00:46 +0000 (0:00:00.621) 0:00:07.988 ******* 2026-03-25 02:00:46.914565 | orchestrator | =============================================================================== 2026-03-25 02:00:46.914577 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.98s 2026-03-25 02:00:46.914589 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.62s 2026-03-25 02:00:46.914600 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.56s 2026-03-25 02:00:46.914610 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist 
-------------------- 0.56s 2026-03-25 02:00:46.914620 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.10s 2026-03-25 02:00:47.262333 | orchestrator | + osism apply known-hosts 2026-03-25 02:00:59.518785 | orchestrator | 2026-03-25 02:00:59 | INFO  | Task 2ad29708-2722-4f43-a237-d4cbf2145030 (known-hosts) was prepared for execution. 2026-03-25 02:00:59.518879 | orchestrator | 2026-03-25 02:00:59 | INFO  | It takes a moment until task 2ad29708-2722-4f43-a237-d4cbf2145030 (known-hosts) has been started and output is visible here. 2026-03-25 02:01:17.659798 | orchestrator | 2026-03-25 02:01:17.659914 | orchestrator | PLAY [Apply role known_hosts] ************************************************** 2026-03-25 02:01:17.659931 | orchestrator | 2026-03-25 02:01:17.659944 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] *** 2026-03-25 02:01:17.659956 | orchestrator | Wednesday 25 March 2026 02:01:04 +0000 (0:00:00.181) 0:00:00.181 ******* 2026-03-25 02:01:17.659968 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-03-25 02:01:17.660015 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-03-25 02:01:17.660028 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-03-25 02:01:17.660039 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-03-25 02:01:17.660050 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-03-25 02:01:17.660061 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-03-25 02:01:17.660072 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-03-25 02:01:17.660083 | orchestrator | 2026-03-25 02:01:17.660095 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] *** 2026-03-25 02:01:17.660107 | orchestrator | Wednesday 25 March 2026 02:01:10 +0000 (0:00:06.307) 0:00:06.489 ******* 2026-03-25 
02:01:17.660120 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-03-25 02:01:17.660133 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-03-25 02:01:17.660145 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-03-25 02:01:17.660156 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-03-25 02:01:17.660167 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-03-25 02:01:17.660188 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-03-25 02:01:17.660200 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-03-25 02:01:17.660211 | orchestrator | 2026-03-25 02:01:17.660222 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-25 02:01:17.660233 | orchestrator | Wednesday 25 March 2026 02:01:10 +0000 (0:00:00.174) 0:00:06.664 ******* 2026-03-25 02:01:17.660245 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 
AAAAC3NzaC1lZDI1NTE5AAAAIDKVfdxdGiUuBQS37BzZqY4NFIq/mJf3NhsaGos+/Msg) 2026-03-25 02:01:17.660266 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQChcwLanoeSAxApRU3Zhc2JymHbwEdlq2oZJ35xZlzrABVrvfXZsqOCVQ2Z+ZrM6PznjBaHWFfxdqTqvxmbEbx7+4LXxnvvfe92A4TvZajjWbqwLYkxiGcIA0PmkXkPnA/aaG5gGpboFFIowFN+vja948q/IDfeyLJtfmZHr4ecDsbWoyNirLOQ3IKWmS68rXqanaytwXDyd2vyYTlfpDgnYOgT1E+2t9dh6lpP2rs2/JkAj1Qy+k/gj7Y3tTAmywO9Q5Pv4B+feFUYPt40v+s4Q7sUeI7Dtr38jYZWdzh8I2VBWSqpGELTYdtsAtZOaUBhn8egBzzm2vqDXGjjCG9mpJynt5Wv1A01umWO4QBA+QPUqimcZ/qFl7m7l+tlQio61whWl9QepbHbeDC59VW+AzaIvdwLDavPHPzAMN/5DdhwHx3vP+h47kS/Ndt7i6YzL6EsagM0xZ3rXyU5uvKr/GkSb5HsBEeHcgoZO0Dn/XtCCXRxYh8aznyFfpdkT6s=) 2026-03-25 02:01:17.660300 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAtCLLfR332ijMwxZXSJLLjSCkKIwLQjsVCnEcxQXPydPHEuMyQAIKph2I0Rf45l1LefnmEiZT0Oh1Z6Mw6D71M=) 2026-03-25 02:01:17.660313 | orchestrator | 2026-03-25 02:01:17.660324 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-25 02:01:17.660338 | orchestrator | Wednesday 25 March 2026 02:01:11 +0000 (0:00:01.294) 0:00:07.958 ******* 2026-03-25 02:01:17.660370 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDj16eoNhfj0MsAWrPk8aNVPVvA0b0+8zHniWV3g98EHxNGu3iZs7yVlez8HMD4BSCv7KWwkNdIsBtYnACw/2bv9zW3LoIfaWFKxRZDA2WiVms14vM+pQq1x7bLfhOlHk/820AzGN04apiMa4YRC2dB2Jv+jZLVMnICN21e/mXmBRYNjibFKtxtH+mn48mWXIchU6Quis/pRLnvqZN8chHKyeH1KrI1a4QZ87rZgJxBYxTy07meIvNnfG6dKOlqD/4u/64HoeGdxnHG5T0vKmu4MykbZbR+btFviPonZP9Ds8gx/darcwcv+l5H9nGv4vRWzzZyAutV2Fb42pxFSqFqWj3YL5ASuJMbvHbYN+sZCFbXc2A+KfoDxWr9aVeC/OmP7sYrDdCYB5BPXnCwEQUhAyrV1kTN0ATFgMYYQAkG/asmjcEblXEEeBI3XMyn4pxQzAuQeiJb+Gry+gkK/VbxUymWu++eGoj56Ey6TzeVAz3636ohGs+tPQ+CTxEInHE=) 2026-03-25 02:01:17.660384 | orchestrator | changed: [testbed-manager] => 
(item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIITqdKNFmBCv4wqF27Vf2SnIZsxVUzjBjuWF7PcH6EEK) 2026-03-25 02:01:17.660398 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAzEsi8so4Tk74goGMlfV/4AvmsbYL6S8nn9Q2lRT096QWfX3VrfpEjCucKhM3T5Uz/6CR3EoDuvFUzjn+iiDPo=) 2026-03-25 02:01:17.660410 | orchestrator | 2026-03-25 02:01:17.660423 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-25 02:01:17.660435 | orchestrator | Wednesday 25 March 2026 02:01:13 +0000 (0:00:01.195) 0:00:09.153 ******* 2026-03-25 02:01:17.660450 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCgRoynua+++VNCcAgVQcelv/hjXhRsKLD1Pv8reCiVDUfzWjg1vn++1IXj8hMmL9XBtVG8BXa1seC0bLG2kdF1M3DZO0eZgKeMEqS/6piNnTAY3RQnQZ2FwFBm7ZE/tCQoaQJsKcaLzB14ZLINJPAPXYz6qeEF/jOoRdwhHEdQ495NQWvl5cpE5uLxFnGBLu1OXA/28LYwiNqrOrX2GhcK0iqL0O3aMWPv8wP423ObChy9JL4+bDiEBN+74fQk8gy75bPyBq+JndFYbpAYzYN0KiRwYBMhThzBHKybFrqsBkgHQme1hrNoyzEPrPK8jcDLz5bK3ZUQHnSXR7A5b4xNhC1216DCxIGBKPe8l8IhvjO7TN4OgmHLQmu2Qvqt+DL5dxsVHKa7n+MOXNy3aVoa1PuTmft9lRb5XM5V0fdqCbBIcQi+syJL8DzWPqDae7cWcg3hAomLhA4P1MLKzGSqh1Pq2C7qrOPBPW79ynNwjsqjFF+gjT/GLAyv3FU7cF0=) 2026-03-25 02:01:17.660463 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBV1DG6IZzXUarLn81tMw0AoeN1yDtWrd6bfltal2zVMd1dv8SgbuJoZUg6hn2qZ0pWACAIhFNYAhu7CaZg6YKM=) 2026-03-25 02:01:17.660476 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIO3FgnqQr7ZSO9x/CIKQ56uBRJxxuQrg5dGCkaiO1ioJ) 2026-03-25 02:01:17.660489 | orchestrator | 2026-03-25 02:01:17.660502 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-25 02:01:17.660514 | orchestrator | Wednesday 25 March 2026 02:01:14 +0000 (0:00:01.117) 
0:00:10.271 ******* 2026-03-25 02:01:17.660528 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCsZP869HOIYAXpabFTR6pz3I8puh40oi88KgnEAy9V2NVHUNBi/l8vzXIKFvn/w3JANn1JBIa5Nuhxwm7kHOmG20lQ90QC33gAqKs2tuL+95LvfPN2IUcQHfyQp8PmsEKlQY7MKtdTL2baT4imEXmP2Dv3hEYzhdvaOWKrQL61bVa5/sd+pFp6gIAua+pyoMp74QGoaMbFfJfMgMhm7sIgzLStk/A+jHIX8MRYAN+xT2/bYz7u7idFJmDtq7yL1WqAUHobbFzKB1uC/bqfzBFyf5/Sk0PCr9QfYvP5w7x7Vp1ppJbHazxYw7BnNfXlt9EI7DeIInENZB3v0ssAmq8OkX4SlzW2sRQNqc0o9urgwOXkTFNIqE642lCfFpMEQsab+wKWbvUYJ+UzckrXvqlFa4eP9Qbeh2rLRoLETJAAOmEFZdRmxoQHZ1WKsU7LH3WDzEpkQb9+2agyFUClD9IlhRUrtsCJFyjH003YuDW198DhBm9eQ7b0jY82LM7IMqE=) 2026-03-25 02:01:17.660547 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIdr+wCPua0e6SO7h8Xyr4hqWDfJgroZ1CHeAIOGEyzuyrvceJHU8YItbHC/J0Ma329omih0ImtGIxuIMlG6u9E=) 2026-03-25 02:01:17.660561 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICAOGmRZedwgrt4TdF2gnbNhh7AxIL49mACmuwfBlVSA) 2026-03-25 02:01:17.660573 | orchestrator | 2026-03-25 02:01:17.660586 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-25 02:01:17.660599 | orchestrator | Wednesday 25 March 2026 02:01:15 +0000 (0:00:01.156) 0:00:11.428 ******* 2026-03-25 02:01:17.660679 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCnaHnOEVQxzabin23NwZEeDcQwSicoOuBSUcZN3fRIKi2GfmiR5SYrKOr/e7yE50m9+fYjIhmGr3je4fQd7tiW4X5kNWB30BjFLlhg5Xi2FBQS9RC0VZ0yBdae49rlm4nNBfCh3nd3vkoUsWpVy+shfpzxrHSpLdnuoat4W7oEQBZryi1nlNIh4gDlw7NY7yCZw/RHfRSrfGxY0oq57/SDgU+4eOTvtks9Ty8TvoruWUtur/LEPAMlWeTvh0wjPBJl5yAXNMrV45sMJmCmUDjKFHpP2D5/drtKEWokdPPOtnbbN70kpZdWVpJcncLL7H7Sa5PxKS26tZjJ/vey8SMNhMhCUBDk6i/MQvNYYB7BPUUUWaKnzY9CHzWs/1sNos3Y8qYHu9BJzNT+dvUN7Ji4g1PAsQX5q9wWN6PO6ELyM+qKYw9+NugUkZjAoveJuTlys9lYQafrzfafNQv69VWgdX5h1UBMO99Iupn8AkcDavbvNBRuGDJk3FROCrBkNRk=) 2026-03-25 02:01:17.660694 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBA5/jtfsrUSsyYAH6ykDjRIY+LUE1PzNDeMgQAz6CQAw9O8+Cas3PXefPAOIyBwTIuh4eJ01/X/qAb+5y6JKelQ=) 2026-03-25 02:01:17.660707 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMX017ECnTSSWA7lprQbYuFK8BSigrtU4j45Qh5msQxG) 2026-03-25 02:01:17.660719 | orchestrator | 2026-03-25 02:01:17.660732 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-25 02:01:17.660745 | orchestrator | Wednesday 25 March 2026 02:01:16 +0000 (0:00:01.093) 0:00:12.521 ******* 2026-03-25 02:01:17.660765 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIISTGEGDtj1cMYbrCCDNM7Z2YpwMjRtr29tUhjshNXnz) 2026-03-25 02:01:29.165382 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCv+g1kevyPVPzBTh+eeg982C5zYoAtHWLjC5IxskyOJDanzY02lcVNaWZncQ6I1p2eTgXzZsMhqHolhZDvbvsfZl6eQRNEyKtUSQoo7HDlLcgIZlL7z36YnQz041EgrXlgH/waJHKppQ1VMu0H2yQNg1FeYSMc961HouiD1J/IQvqqfc6lWQ8USCWXa2EbGpWs2LMAYNopwlZn1+Vs+5D0y7YR5SHhpY5RtApjr0J12oJQJVbH6ruMy+fsb+K5s9PxPLLhdK3uy8p2kIEGSy6iJVAcwMUNmAIgyXhM0Ua7Au4At8YamcIrnWZ5Km4g4L+QH2CYGobbuoda5fq/qnV9T3WQaRoEdPurcjnG8cii4p4KKraq1UyLc/gqOcOE5s6hiREj1h67BeRt8B/v7yHI8e4nS4r3Tg1LfodwsGMYpIKwTUBUD54QPj4moAZc+vAv8oo0vxcMkGbNFbxXkzuKpEVQFeEIvgp2BvWVSNDj4gHlr0te8y6m3IENFnRgim0=) 2026-03-25 02:01:29.165508 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBM/wp2Mz3yIozbtVafcnJ2NSGdxYyy2LGEfplct+N9mfQ2hdaHDKVGtVWohE/DzYk2Ps3BL+fW9fw5dWoFn5BtI=) 2026-03-25 02:01:29.165520 | orchestrator | 2026-03-25 02:01:29.165529 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-25 02:01:29.165537 | orchestrator | Wednesday 25 March 2026 02:01:17 +0000 (0:00:01.190) 0:00:13.711 ******* 2026-03-25 02:01:29.165544 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC/1GFZf77T0bCFU28SB6BqX+UlypbDIxxMeXdeUkxuukPL938a9c/I7nU5fEExYTZ5L/QOePfKv0/q0I1Ya+wVJ9nzwLzrdK9phCEr28w5fHuq8yFvSuanZpAKjjLWnrN4NtbHZwofoUE+VVwWLNrwhXXsxSVpZoZAZNA5Bd6VB+MEgnXGzITusB2ljHn9PtGKNI75Te3IWQ9mqM0/Xjf735a59jt5jUoYH0UsO+m1kYtC5qiIaqOljD6ZDVZtvCnaieDVYs9ITAZPg5Wt08HisOcfZpuG4s8c+zB81Ou+gJs0jyUPS20KqIW633EUOMcr2A4r0vtpU2Di+U0Fz7TrR9XT4xDYhQEDV6kgWUFSYtkK2VX8gjxoU2S0SMNuABNTmav7hQUviQAL9wRv94WG8OHvQdy5lw8fWMfoT2Xgfe0NdN06STRby98wVt9fzZwGxzNugYEQvFCHgbsEzWgQ/N26U9CTcfqn6m+/9O/OIKOcKXy152LCyFu22/RlEs0=) 2026-03-25 02:01:29.165552 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJXFxKpknxcwcF03SX/K0rfaUNaiUcNCB8d7/b0T1AUK) 2026-03-25 02:01:29.165579 | orchestrator | changed: [testbed-manager] => 
(item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEGWOt5u7/zUJylmfty/HNy68mIzNEROMWYVdY8cObWL5wYu14bFGKKYFYuh5CYVmpLoeym5sQTHwhQLMcTmRdY=) 2026-03-25 02:01:29.165586 | orchestrator | 2026-03-25 02:01:29.165592 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2026-03-25 02:01:29.165600 | orchestrator | Wednesday 25 March 2026 02:01:18 +0000 (0:00:01.144) 0:00:14.855 ******* 2026-03-25 02:01:29.165607 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-03-25 02:01:29.165614 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-03-25 02:01:29.165620 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-03-25 02:01:29.165626 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-03-25 02:01:29.165633 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-03-25 02:01:29.165639 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-03-25 02:01:29.165645 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-03-25 02:01:29.165651 | orchestrator | 2026-03-25 02:01:29.165658 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2026-03-25 02:01:29.165667 | orchestrator | Wednesday 25 March 2026 02:01:24 +0000 (0:00:05.495) 0:00:20.351 ******* 2026-03-25 02:01:29.165678 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-03-25 02:01:29.165690 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-03-25 02:01:29.165700 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-03-25 02:01:29.165710 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-03-25 02:01:29.165719 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-03-25 02:01:29.165728 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-03-25 02:01:29.165738 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-03-25 02:01:29.165747 | orchestrator | 2026-03-25 02:01:29.165770 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-25 02:01:29.165789 | orchestrator | Wednesday 25 March 2026 02:01:24 +0000 (0:00:00.203) 0:00:20.554 ******* 2026-03-25 02:01:29.165799 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQChcwLanoeSAxApRU3Zhc2JymHbwEdlq2oZJ35xZlzrABVrvfXZsqOCVQ2Z+ZrM6PznjBaHWFfxdqTqvxmbEbx7+4LXxnvvfe92A4TvZajjWbqwLYkxiGcIA0PmkXkPnA/aaG5gGpboFFIowFN+vja948q/IDfeyLJtfmZHr4ecDsbWoyNirLOQ3IKWmS68rXqanaytwXDyd2vyYTlfpDgnYOgT1E+2t9dh6lpP2rs2/JkAj1Qy+k/gj7Y3tTAmywO9Q5Pv4B+feFUYPt40v+s4Q7sUeI7Dtr38jYZWdzh8I2VBWSqpGELTYdtsAtZOaUBhn8egBzzm2vqDXGjjCG9mpJynt5Wv1A01umWO4QBA+QPUqimcZ/qFl7m7l+tlQio61whWl9QepbHbeDC59VW+AzaIvdwLDavPHPzAMN/5DdhwHx3vP+h47kS/Ndt7i6YzL6EsagM0xZ3rXyU5uvKr/GkSb5HsBEeHcgoZO0Dn/XtCCXRxYh8aznyFfpdkT6s=) 2026-03-25 02:01:29.165808 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAtCLLfR332ijMwxZXSJLLjSCkKIwLQjsVCnEcxQXPydPHEuMyQAIKph2I0Rf45l1LefnmEiZT0Oh1Z6Mw6D71M=) 2026-03-25 02:01:29.165835 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDKVfdxdGiUuBQS37BzZqY4NFIq/mJf3NhsaGos+/Msg) 2026-03-25 02:01:29.165844 | orchestrator | 2026-03-25 02:01:29.165854 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-25 02:01:29.165864 | orchestrator | Wednesday 25 March 2026 02:01:25 +0000 (0:00:01.134) 0:00:21.689 ******* 2026-03-25 02:01:29.165874 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDj16eoNhfj0MsAWrPk8aNVPVvA0b0+8zHniWV3g98EHxNGu3iZs7yVlez8HMD4BSCv7KWwkNdIsBtYnACw/2bv9zW3LoIfaWFKxRZDA2WiVms14vM+pQq1x7bLfhOlHk/820AzGN04apiMa4YRC2dB2Jv+jZLVMnICN21e/mXmBRYNjibFKtxtH+mn48mWXIchU6Quis/pRLnvqZN8chHKyeH1KrI1a4QZ87rZgJxBYxTy07meIvNnfG6dKOlqD/4u/64HoeGdxnHG5T0vKmu4MykbZbR+btFviPonZP9Ds8gx/darcwcv+l5H9nGv4vRWzzZyAutV2Fb42pxFSqFqWj3YL5ASuJMbvHbYN+sZCFbXc2A+KfoDxWr9aVeC/OmP7sYrDdCYB5BPXnCwEQUhAyrV1kTN0ATFgMYYQAkG/asmjcEblXEEeBI3XMyn4pxQzAuQeiJb+Gry+gkK/VbxUymWu++eGoj56Ey6TzeVAz3636ohGs+tPQ+CTxEInHE=) 2026-03-25 02:01:29.165885 | orchestrator | changed: [testbed-manager] => 
(item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAzEsi8so4Tk74goGMlfV/4AvmsbYL6S8nn9Q2lRT096QWfX3VrfpEjCucKhM3T5Uz/6CR3EoDuvFUzjn+iiDPo=) 2026-03-25 02:01:29.165895 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIITqdKNFmBCv4wqF27Vf2SnIZsxVUzjBjuWF7PcH6EEK) 2026-03-25 02:01:29.165903 | orchestrator | 2026-03-25 02:01:29.165914 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-25 02:01:29.165923 | orchestrator | Wednesday 25 March 2026 02:01:26 +0000 (0:00:01.117) 0:00:22.806 ******* 2026-03-25 02:01:29.165933 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCgRoynua+++VNCcAgVQcelv/hjXhRsKLD1Pv8reCiVDUfzWjg1vn++1IXj8hMmL9XBtVG8BXa1seC0bLG2kdF1M3DZO0eZgKeMEqS/6piNnTAY3RQnQZ2FwFBm7ZE/tCQoaQJsKcaLzB14ZLINJPAPXYz6qeEF/jOoRdwhHEdQ495NQWvl5cpE5uLxFnGBLu1OXA/28LYwiNqrOrX2GhcK0iqL0O3aMWPv8wP423ObChy9JL4+bDiEBN+74fQk8gy75bPyBq+JndFYbpAYzYN0KiRwYBMhThzBHKybFrqsBkgHQme1hrNoyzEPrPK8jcDLz5bK3ZUQHnSXR7A5b4xNhC1216DCxIGBKPe8l8IhvjO7TN4OgmHLQmu2Qvqt+DL5dxsVHKa7n+MOXNy3aVoa1PuTmft9lRb5XM5V0fdqCbBIcQi+syJL8DzWPqDae7cWcg3hAomLhA4P1MLKzGSqh1Pq2C7qrOPBPW79ynNwjsqjFF+gjT/GLAyv3FU7cF0=) 2026-03-25 02:01:29.165944 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBV1DG6IZzXUarLn81tMw0AoeN1yDtWrd6bfltal2zVMd1dv8SgbuJoZUg6hn2qZ0pWACAIhFNYAhu7CaZg6YKM=) 2026-03-25 02:01:29.165955 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIO3FgnqQr7ZSO9x/CIKQ56uBRJxxuQrg5dGCkaiO1ioJ) 2026-03-25 02:01:29.165966 | orchestrator | 2026-03-25 02:01:29.165999 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-25 02:01:29.166008 | orchestrator | Wednesday 25 March 2026 02:01:27 +0000 (0:00:01.233) 
0:00:24.039 ******* 2026-03-25 02:01:29.166072 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCsZP869HOIYAXpabFTR6pz3I8puh40oi88KgnEAy9V2NVHUNBi/l8vzXIKFvn/w3JANn1JBIa5Nuhxwm7kHOmG20lQ90QC33gAqKs2tuL+95LvfPN2IUcQHfyQp8PmsEKlQY7MKtdTL2baT4imEXmP2Dv3hEYzhdvaOWKrQL61bVa5/sd+pFp6gIAua+pyoMp74QGoaMbFfJfMgMhm7sIgzLStk/A+jHIX8MRYAN+xT2/bYz7u7idFJmDtq7yL1WqAUHobbFzKB1uC/bqfzBFyf5/Sk0PCr9QfYvP5w7x7Vp1ppJbHazxYw7BnNfXlt9EI7DeIInENZB3v0ssAmq8OkX4SlzW2sRQNqc0o9urgwOXkTFNIqE642lCfFpMEQsab+wKWbvUYJ+UzckrXvqlFa4eP9Qbeh2rLRoLETJAAOmEFZdRmxoQHZ1WKsU7LH3WDzEpkQb9+2agyFUClD9IlhRUrtsCJFyjH003YuDW198DhBm9eQ7b0jY82LM7IMqE=) 2026-03-25 02:01:34.239217 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIdr+wCPua0e6SO7h8Xyr4hqWDfJgroZ1CHeAIOGEyzuyrvceJHU8YItbHC/J0Ma329omih0ImtGIxuIMlG6u9E=) 2026-03-25 02:01:34.239345 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICAOGmRZedwgrt4TdF2gnbNhh7AxIL49mACmuwfBlVSA) 2026-03-25 02:01:34.239395 | orchestrator | 2026-03-25 02:01:34.239413 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-25 02:01:34.239430 | orchestrator | Wednesday 25 March 2026 02:01:29 +0000 (0:00:01.182) 0:00:25.222 ******* 2026-03-25 02:01:34.239444 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBA5/jtfsrUSsyYAH6ykDjRIY+LUE1PzNDeMgQAz6CQAw9O8+Cas3PXefPAOIyBwTIuh4eJ01/X/qAb+5y6JKelQ=) 2026-03-25 02:01:34.239488 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCnaHnOEVQxzabin23NwZEeDcQwSicoOuBSUcZN3fRIKi2GfmiR5SYrKOr/e7yE50m9+fYjIhmGr3je4fQd7tiW4X5kNWB30BjFLlhg5Xi2FBQS9RC0VZ0yBdae49rlm4nNBfCh3nd3vkoUsWpVy+shfpzxrHSpLdnuoat4W7oEQBZryi1nlNIh4gDlw7NY7yCZw/RHfRSrfGxY0oq57/SDgU+4eOTvtks9Ty8TvoruWUtur/LEPAMlWeTvh0wjPBJl5yAXNMrV45sMJmCmUDjKFHpP2D5/drtKEWokdPPOtnbbN70kpZdWVpJcncLL7H7Sa5PxKS26tZjJ/vey8SMNhMhCUBDk6i/MQvNYYB7BPUUUWaKnzY9CHzWs/1sNos3Y8qYHu9BJzNT+dvUN7Ji4g1PAsQX5q9wWN6PO6ELyM+qKYw9+NugUkZjAoveJuTlys9lYQafrzfafNQv69VWgdX5h1UBMO99Iupn8AkcDavbvNBRuGDJk3FROCrBkNRk=) 2026-03-25 02:01:34.239505 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMX017ECnTSSWA7lprQbYuFK8BSigrtU4j45Qh5msQxG) 2026-03-25 02:01:34.239520 | orchestrator | 2026-03-25 02:01:34.239536 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-25 02:01:34.239551 | orchestrator | Wednesday 25 March 2026 02:01:30 +0000 (0:00:01.223) 0:00:26.446 ******* 2026-03-25 02:01:34.239566 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIISTGEGDtj1cMYbrCCDNM7Z2YpwMjRtr29tUhjshNXnz) 2026-03-25 02:01:34.239582 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCv+g1kevyPVPzBTh+eeg982C5zYoAtHWLjC5IxskyOJDanzY02lcVNaWZncQ6I1p2eTgXzZsMhqHolhZDvbvsfZl6eQRNEyKtUSQoo7HDlLcgIZlL7z36YnQz041EgrXlgH/waJHKppQ1VMu0H2yQNg1FeYSMc961HouiD1J/IQvqqfc6lWQ8USCWXa2EbGpWs2LMAYNopwlZn1+Vs+5D0y7YR5SHhpY5RtApjr0J12oJQJVbH6ruMy+fsb+K5s9PxPLLhdK3uy8p2kIEGSy6iJVAcwMUNmAIgyXhM0Ua7Au4At8YamcIrnWZ5Km4g4L+QH2CYGobbuoda5fq/qnV9T3WQaRoEdPurcjnG8cii4p4KKraq1UyLc/gqOcOE5s6hiREj1h67BeRt8B/v7yHI8e4nS4r3Tg1LfodwsGMYpIKwTUBUD54QPj4moAZc+vAv8oo0vxcMkGbNFbxXkzuKpEVQFeEIvgp2BvWVSNDj4gHlr0te8y6m3IENFnRgim0=) 2026-03-25 02:01:34.239596 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 
AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBM/wp2Mz3yIozbtVafcnJ2NSGdxYyy2LGEfplct+N9mfQ2hdaHDKVGtVWohE/DzYk2Ps3BL+fW9fw5dWoFn5BtI=) 2026-03-25 02:01:34.239610 | orchestrator | 2026-03-25 02:01:34.239624 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-25 02:01:34.239640 | orchestrator | Wednesday 25 March 2026 02:01:31 +0000 (0:00:01.143) 0:00:27.589 ******* 2026-03-25 02:01:34.239656 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEGWOt5u7/zUJylmfty/HNy68mIzNEROMWYVdY8cObWL5wYu14bFGKKYFYuh5CYVmpLoeym5sQTHwhQLMcTmRdY=) 2026-03-25 02:01:34.239691 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC/1GFZf77T0bCFU28SB6BqX+UlypbDIxxMeXdeUkxuukPL938a9c/I7nU5fEExYTZ5L/QOePfKv0/q0I1Ya+wVJ9nzwLzrdK9phCEr28w5fHuq8yFvSuanZpAKjjLWnrN4NtbHZwofoUE+VVwWLNrwhXXsxSVpZoZAZNA5Bd6VB+MEgnXGzITusB2ljHn9PtGKNI75Te3IWQ9mqM0/Xjf735a59jt5jUoYH0UsO+m1kYtC5qiIaqOljD6ZDVZtvCnaieDVYs9ITAZPg5Wt08HisOcfZpuG4s8c+zB81Ou+gJs0jyUPS20KqIW633EUOMcr2A4r0vtpU2Di+U0Fz7TrR9XT4xDYhQEDV6kgWUFSYtkK2VX8gjxoU2S0SMNuABNTmav7hQUviQAL9wRv94WG8OHvQdy5lw8fWMfoT2Xgfe0NdN06STRby98wVt9fzZwGxzNugYEQvFCHgbsEzWgQ/N26U9CTcfqn6m+/9O/OIKOcKXy152LCyFu22/RlEs0=) 2026-03-25 02:01:34.239707 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJXFxKpknxcwcF03SX/K0rfaUNaiUcNCB8d7/b0T1AUK) 2026-03-25 02:01:34.239721 | orchestrator | 2026-03-25 02:01:34.239738 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2026-03-25 02:01:34.239767 | orchestrator | Wednesday 25 March 2026 02:01:32 +0000 (0:00:01.169) 0:00:28.759 ******* 2026-03-25 02:01:34.239781 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2026-03-25 02:01:34.239793 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  
2026-03-25 02:01:34.239824 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2026-03-25 02:01:34.239835 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2026-03-25 02:01:34.239845 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-03-25 02:01:34.239856 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2026-03-25 02:01:34.239866 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2026-03-25 02:01:34.239876 | orchestrator | skipping: [testbed-manager] 2026-03-25 02:01:34.239886 | orchestrator | 2026-03-25 02:01:34.239897 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] ************* 2026-03-25 02:01:34.239907 | orchestrator | Wednesday 25 March 2026 02:01:32 +0000 (0:00:00.195) 0:00:28.954 ******* 2026-03-25 02:01:34.239918 | orchestrator | skipping: [testbed-manager] 2026-03-25 02:01:34.239928 | orchestrator | 2026-03-25 02:01:34.239938 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2026-03-25 02:01:34.239949 | orchestrator | Wednesday 25 March 2026 02:01:32 +0000 (0:00:00.077) 0:00:29.032 ******* 2026-03-25 02:01:34.239965 | orchestrator | skipping: [testbed-manager] 2026-03-25 02:01:34.239998 | orchestrator | 2026-03-25 02:01:34.240009 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2026-03-25 02:01:34.240019 | orchestrator | Wednesday 25 March 2026 02:01:33 +0000 (0:00:00.066) 0:00:29.098 ******* 2026-03-25 02:01:34.240030 | orchestrator | changed: [testbed-manager] 2026-03-25 02:01:34.240040 | orchestrator | 2026-03-25 02:01:34.240050 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-25 02:01:34.240061 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-25 02:01:34.240073 | orchestrator | 2026-03-25 
02:01:34.240084 | orchestrator | 2026-03-25 02:01:34.240101 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-25 02:01:34.240121 | orchestrator | Wednesday 25 March 2026 02:01:33 +0000 (0:00:00.923) 0:00:30.022 ******* 2026-03-25 02:01:34.240138 | orchestrator | =============================================================================== 2026-03-25 02:01:34.240152 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 6.31s 2026-03-25 02:01:34.240167 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.50s 2026-03-25 02:01:34.240185 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.29s 2026-03-25 02:01:34.240200 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.23s 2026-03-25 02:01:34.240216 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.22s 2026-03-25 02:01:34.240230 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.20s 2026-03-25 02:01:34.240246 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.19s 2026-03-25 02:01:34.240263 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.18s 2026-03-25 02:01:34.240278 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.17s 2026-03-25 02:01:34.240293 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.16s 2026-03-25 02:01:34.240310 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.14s 2026-03-25 02:01:34.240325 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.14s 2026-03-25 02:01:34.240341 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts 
entries ----------- 1.13s 2026-03-25 02:01:34.240357 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.12s 2026-03-25 02:01:34.240387 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.12s 2026-03-25 02:01:34.240403 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.09s 2026-03-25 02:01:34.240418 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.92s 2026-03-25 02:01:34.240434 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.20s 2026-03-25 02:01:34.240449 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.20s 2026-03-25 02:01:34.240464 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.17s 2026-03-25 02:01:34.627161 | orchestrator | + osism apply squid 2026-03-25 02:01:46.819160 | orchestrator | 2026-03-25 02:01:46 | INFO  | Task 0ad4704a-4c05-4fa2-b313-5cc7d20f2cb0 (squid) was prepared for execution. 2026-03-25 02:01:46.819313 | orchestrator | 2026-03-25 02:01:46 | INFO  | It takes a moment until task 0ad4704a-4c05-4fa2-b313-5cc7d20f2cb0 (squid) has been started and output is visible here. 
2026-03-25 02:03:54.919040 | orchestrator | 2026-03-25 02:03:54.919190 | orchestrator | PLAY [Apply role squid] ******************************************************** 2026-03-25 02:03:54.919206 | orchestrator | 2026-03-25 02:03:54.919217 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2026-03-25 02:03:54.919228 | orchestrator | Wednesday 25 March 2026 02:01:51 +0000 (0:00:00.174) 0:00:00.174 ******* 2026-03-25 02:03:54.919239 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2026-03-25 02:03:54.919250 | orchestrator | 2026-03-25 02:03:54.919260 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2026-03-25 02:03:54.919270 | orchestrator | Wednesday 25 March 2026 02:01:51 +0000 (0:00:00.100) 0:00:00.275 ******* 2026-03-25 02:03:54.919281 | orchestrator | ok: [testbed-manager] 2026-03-25 02:03:54.919292 | orchestrator | 2026-03-25 02:03:54.919302 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2026-03-25 02:03:54.919312 | orchestrator | Wednesday 25 March 2026 02:01:53 +0000 (0:00:01.875) 0:00:02.151 ******* 2026-03-25 02:03:54.919323 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2026-03-25 02:03:54.919333 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2026-03-25 02:03:54.919343 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2026-03-25 02:03:54.919353 | orchestrator | 2026-03-25 02:03:54.919363 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2026-03-25 02:03:54.919373 | orchestrator | Wednesday 25 March 2026 02:01:54 +0000 (0:00:01.259) 0:00:03.410 ******* 2026-03-25 02:03:54.919383 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2026-03-25 02:03:54.919393 | 
orchestrator | 2026-03-25 02:03:54.919403 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2026-03-25 02:03:54.919413 | orchestrator | Wednesday 25 March 2026 02:01:56 +0000 (0:00:01.236) 0:00:04.646 ******* 2026-03-25 02:03:54.919423 | orchestrator | ok: [testbed-manager] 2026-03-25 02:03:54.919433 | orchestrator | 2026-03-25 02:03:54.919443 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2026-03-25 02:03:54.919453 | orchestrator | Wednesday 25 March 2026 02:01:56 +0000 (0:00:00.372) 0:00:05.019 ******* 2026-03-25 02:03:54.919464 | orchestrator | changed: [testbed-manager] 2026-03-25 02:03:54.919474 | orchestrator | 2026-03-25 02:03:54.919484 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2026-03-25 02:03:54.919494 | orchestrator | Wednesday 25 March 2026 02:01:57 +0000 (0:00:00.947) 0:00:05.966 ******* 2026-03-25 02:03:54.919506 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 
2026-03-25 02:03:54.919523 | orchestrator | ok: [testbed-manager] 2026-03-25 02:03:54.919535 | orchestrator | 2026-03-25 02:03:54.919547 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2026-03-25 02:03:54.919589 | orchestrator | Wednesday 25 March 2026 02:02:37 +0000 (0:00:40.090) 0:00:46.057 ******* 2026-03-25 02:03:54.919601 | orchestrator | changed: [testbed-manager] 2026-03-25 02:03:54.919612 | orchestrator | 2026-03-25 02:03:54.919624 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2026-03-25 02:03:54.919635 | orchestrator | Wednesday 25 March 2026 02:02:53 +0000 (0:00:16.186) 0:01:02.244 ******* 2026-03-25 02:03:54.919646 | orchestrator | Pausing for 60 seconds 2026-03-25 02:03:54.919658 | orchestrator | changed: [testbed-manager] 2026-03-25 02:03:54.919669 | orchestrator | 2026-03-25 02:03:54.919681 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2026-03-25 02:03:54.919692 | orchestrator | Wednesday 25 March 2026 02:03:53 +0000 (0:01:00.093) 0:02:02.338 ******* 2026-03-25 02:03:54.919703 | orchestrator | ok: [testbed-manager] 2026-03-25 02:03:54.919714 | orchestrator | 2026-03-25 02:03:54.919725 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2026-03-25 02:03:54.919737 | orchestrator | Wednesday 25 March 2026 02:03:53 +0000 (0:00:00.073) 0:02:02.411 ******* 2026-03-25 02:03:54.919749 | orchestrator | changed: [testbed-manager] 2026-03-25 02:03:54.919760 | orchestrator | 2026-03-25 02:03:54.919771 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-25 02:03:54.919783 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-25 02:03:54.919794 | orchestrator | 2026-03-25 02:03:54.919805 | orchestrator | 2026-03-25 02:03:54.919817 | orchestrator | 
TASKS RECAP ******************************************************************** 2026-03-25 02:03:54.919828 | orchestrator | Wednesday 25 March 2026 02:03:54 +0000 (0:00:00.698) 0:02:03.110 ******* 2026-03-25 02:03:54.919845 | orchestrator | =============================================================================== 2026-03-25 02:03:54.919863 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.09s 2026-03-25 02:03:54.919877 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 40.09s 2026-03-25 02:03:54.919894 | orchestrator | osism.services.squid : Restart squid service --------------------------- 16.19s 2026-03-25 02:03:54.919939 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.88s 2026-03-25 02:03:54.919958 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.26s 2026-03-25 02:03:54.919976 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.24s 2026-03-25 02:03:54.920009 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.95s 2026-03-25 02:03:54.920019 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.70s 2026-03-25 02:03:54.920029 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.37s 2026-03-25 02:03:54.920039 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.10s 2026-03-25 02:03:54.920048 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.07s 2026-03-25 02:03:55.312089 | orchestrator | + [[ 9.5.0 != \l\a\t\e\s\t ]] 2026-03-25 02:03:55.312220 | orchestrator | ++ semver 9.5.0 10.0.0-0 2026-03-25 02:03:55.363926 | orchestrator | + [[ -1 -ge 0 ]] 2026-03-25 02:03:55.364093 | orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh 
kolla/release 2026-03-25 02:03:55.369372 | orchestrator | + set -e 2026-03-25 02:03:55.369452 | orchestrator | + NAMESPACE=kolla/release 2026-03-25 02:03:55.369464 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla/release#g' /opt/configuration/inventory/group_vars/all/kolla.yml 2026-03-25 02:03:55.374339 | orchestrator | ++ semver 9.5.0 9.0.0 2026-03-25 02:03:55.432618 | orchestrator | + [[ 1 -lt 0 ]] 2026-03-25 02:03:55.433519 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes 2026-03-25 02:04:07.751363 | orchestrator | 2026-03-25 02:04:07 | INFO  | Task 28d4611d-d6ce-4307-ba50-7754c728c3c2 (operator) was prepared for execution. 2026-03-25 02:04:07.751499 | orchestrator | 2026-03-25 02:04:07 | INFO  | It takes a moment until task 28d4611d-d6ce-4307-ba50-7754c728c3c2 (operator) has been started and output is visible here. 2026-03-25 02:04:24.463465 | orchestrator | 2026-03-25 02:04:24.463572 | orchestrator | PLAY [Make ssh pipelining working] ********************************************* 2026-03-25 02:04:24.463585 | orchestrator | 2026-03-25 02:04:24.463593 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-25 02:04:24.463600 | orchestrator | Wednesday 25 March 2026 02:04:12 +0000 (0:00:00.170) 0:00:00.170 ******* 2026-03-25 02:04:24.463608 | orchestrator | ok: [testbed-node-0] 2026-03-25 02:04:24.463615 | orchestrator | ok: [testbed-node-2] 2026-03-25 02:04:24.463622 | orchestrator | ok: [testbed-node-5] 2026-03-25 02:04:24.463630 | orchestrator | ok: [testbed-node-1] 2026-03-25 02:04:24.463637 | orchestrator | ok: [testbed-node-3] 2026-03-25 02:04:24.463644 | orchestrator | ok: [testbed-node-4] 2026-03-25 02:04:24.463650 | orchestrator | 2026-03-25 02:04:24.463657 | orchestrator | TASK [Do not require tty for all users] **************************************** 2026-03-25 02:04:24.463665 | orchestrator | Wednesday 25 March 2026 02:04:15 +0000 (0:00:03.315) 0:00:03.485 
******* 2026-03-25 02:04:24.463671 | orchestrator | ok: [testbed-node-2] 2026-03-25 02:04:24.463678 | orchestrator | ok: [testbed-node-1] 2026-03-25 02:04:24.463684 | orchestrator | ok: [testbed-node-0] 2026-03-25 02:04:24.463707 | orchestrator | ok: [testbed-node-3] 2026-03-25 02:04:24.463715 | orchestrator | ok: [testbed-node-5] 2026-03-25 02:04:24.463721 | orchestrator | ok: [testbed-node-4] 2026-03-25 02:04:24.463728 | orchestrator | 2026-03-25 02:04:24.463734 | orchestrator | PLAY [Apply role operator] ***************************************************** 2026-03-25 02:04:24.463740 | orchestrator | 2026-03-25 02:04:24.463746 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-03-25 02:04:24.463752 | orchestrator | Wednesday 25 March 2026 02:04:16 +0000 (0:00:00.761) 0:00:04.247 ******* 2026-03-25 02:04:24.463758 | orchestrator | ok: [testbed-node-0] 2026-03-25 02:04:24.463764 | orchestrator | ok: [testbed-node-1] 2026-03-25 02:04:24.463770 | orchestrator | ok: [testbed-node-2] 2026-03-25 02:04:24.463776 | orchestrator | ok: [testbed-node-3] 2026-03-25 02:04:24.463783 | orchestrator | ok: [testbed-node-4] 2026-03-25 02:04:24.463789 | orchestrator | ok: [testbed-node-5] 2026-03-25 02:04:24.463796 | orchestrator | 2026-03-25 02:04:24.463802 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-03-25 02:04:24.463809 | orchestrator | Wednesday 25 March 2026 02:04:16 +0000 (0:00:00.191) 0:00:04.438 ******* 2026-03-25 02:04:24.463814 | orchestrator | ok: [testbed-node-0] 2026-03-25 02:04:24.463821 | orchestrator | ok: [testbed-node-1] 2026-03-25 02:04:24.463827 | orchestrator | ok: [testbed-node-2] 2026-03-25 02:04:24.463833 | orchestrator | ok: [testbed-node-3] 2026-03-25 02:04:24.463839 | orchestrator | ok: [testbed-node-4] 2026-03-25 02:04:24.463846 | orchestrator | ok: [testbed-node-5] 2026-03-25 02:04:24.463852 | orchestrator | 2026-03-25 02:04:24.463858 | 
orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2026-03-25 02:04:24.463864 | orchestrator | Wednesday 25 March 2026 02:04:16 +0000 (0:00:00.180) 0:00:04.619 ******* 2026-03-25 02:04:24.463871 | orchestrator | changed: [testbed-node-2] 2026-03-25 02:04:24.463879 | orchestrator | changed: [testbed-node-1] 2026-03-25 02:04:24.463885 | orchestrator | changed: [testbed-node-5] 2026-03-25 02:04:24.463892 | orchestrator | changed: [testbed-node-0] 2026-03-25 02:04:24.463899 | orchestrator | changed: [testbed-node-4] 2026-03-25 02:04:24.463905 | orchestrator | changed: [testbed-node-3] 2026-03-25 02:04:24.463912 | orchestrator | 2026-03-25 02:04:24.463919 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-03-25 02:04:24.463926 | orchestrator | Wednesday 25 March 2026 02:04:17 +0000 (0:00:00.620) 0:00:05.240 ******* 2026-03-25 02:04:24.463932 | orchestrator | changed: [testbed-node-0] 2026-03-25 02:04:24.463939 | orchestrator | changed: [testbed-node-1] 2026-03-25 02:04:24.463944 | orchestrator | changed: [testbed-node-2] 2026-03-25 02:04:24.463950 | orchestrator | changed: [testbed-node-5] 2026-03-25 02:04:24.463957 | orchestrator | changed: [testbed-node-3] 2026-03-25 02:04:24.463963 | orchestrator | changed: [testbed-node-4] 2026-03-25 02:04:24.464014 | orchestrator | 2026-03-25 02:04:24.464022 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-03-25 02:04:24.464029 | orchestrator | Wednesday 25 March 2026 02:04:18 +0000 (0:00:00.773) 0:00:06.014 ******* 2026-03-25 02:04:24.464035 | orchestrator | changed: [testbed-node-0] => (item=adm) 2026-03-25 02:04:24.464042 | orchestrator | changed: [testbed-node-1] => (item=adm) 2026-03-25 02:04:24.464048 | orchestrator | changed: [testbed-node-2] => (item=adm) 2026-03-25 02:04:24.464054 | orchestrator | changed: [testbed-node-3] => (item=adm) 2026-03-25 02:04:24.464061 | 
orchestrator | changed: [testbed-node-4] => (item=adm) 2026-03-25 02:04:24.464067 | orchestrator | changed: [testbed-node-5] => (item=adm) 2026-03-25 02:04:24.464073 | orchestrator | changed: [testbed-node-0] => (item=sudo) 2026-03-25 02:04:24.464079 | orchestrator | changed: [testbed-node-2] => (item=sudo) 2026-03-25 02:04:24.464085 | orchestrator | changed: [testbed-node-1] => (item=sudo) 2026-03-25 02:04:24.464092 | orchestrator | changed: [testbed-node-3] => (item=sudo) 2026-03-25 02:04:24.464099 | orchestrator | changed: [testbed-node-5] => (item=sudo) 2026-03-25 02:04:24.464106 | orchestrator | changed: [testbed-node-4] => (item=sudo) 2026-03-25 02:04:24.464112 | orchestrator | 2026-03-25 02:04:24.464119 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2026-03-25 02:04:24.464125 | orchestrator | Wednesday 25 March 2026 02:04:19 +0000 (0:00:01.246) 0:00:07.260 ******* 2026-03-25 02:04:24.464132 | orchestrator | changed: [testbed-node-2] 2026-03-25 02:04:24.464139 | orchestrator | changed: [testbed-node-3] 2026-03-25 02:04:24.464146 | orchestrator | changed: [testbed-node-5] 2026-03-25 02:04:24.464152 | orchestrator | changed: [testbed-node-0] 2026-03-25 02:04:24.464158 | orchestrator | changed: [testbed-node-1] 2026-03-25 02:04:24.464164 | orchestrator | changed: [testbed-node-4] 2026-03-25 02:04:24.464170 | orchestrator | 2026-03-25 02:04:24.464177 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2026-03-25 02:04:24.464185 | orchestrator | Wednesday 25 March 2026 02:04:20 +0000 (0:00:01.272) 0:00:08.533 ******* 2026-03-25 02:04:24.464192 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created 2026-03-25 02:04:24.464199 | orchestrator | with a mode of 0700, this may cause issues when running as another user. 
To 2026-03-25 02:04:24.464205 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually 2026-03-25 02:04:24.464212 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8) 2026-03-25 02:04:24.464236 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8) 2026-03-25 02:04:24.464243 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8) 2026-03-25 02:04:24.464250 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8) 2026-03-25 02:04:24.464256 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8) 2026-03-25 02:04:24.464263 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8) 2026-03-25 02:04:24.464270 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8) 2026-03-25 02:04:24.464277 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8) 2026-03-25 02:04:24.464283 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8) 2026-03-25 02:04:24.464290 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8) 2026-03-25 02:04:24.464295 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8) 2026-03-25 02:04:24.464301 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8) 2026-03-25 02:04:24.464308 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8) 2026-03-25 02:04:24.464316 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8) 2026-03-25 02:04:24.464322 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8) 2026-03-25 02:04:24.464328 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8) 2026-03-25 02:04:24.464334 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8) 2026-03-25 02:04:24.464350 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8) 2026-03-25 02:04:24.464357 | 
orchestrator | 2026-03-25 02:04:24.464363 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2026-03-25 02:04:24.464370 | orchestrator | Wednesday 25 March 2026 02:04:22 +0000 (0:00:01.225) 0:00:09.758 ******* 2026-03-25 02:04:24.464376 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:04:24.464382 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:04:24.464388 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:04:24.464393 | orchestrator | skipping: [testbed-node-3] 2026-03-25 02:04:24.464399 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:04:24.464405 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:04:24.464412 | orchestrator | 2026-03-25 02:04:24.464416 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] *** 2026-03-25 02:04:24.464421 | orchestrator | Wednesday 25 March 2026 02:04:22 +0000 (0:00:00.206) 0:00:09.965 ******* 2026-03-25 02:04:24.464425 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:04:24.464429 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:04:24.464432 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:04:24.464436 | orchestrator | skipping: [testbed-node-3] 2026-03-25 02:04:24.464440 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:04:24.464444 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:04:24.464448 | orchestrator | 2026-03-25 02:04:24.464452 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2026-03-25 02:04:24.464456 | orchestrator | Wednesday 25 March 2026 02:04:22 +0000 (0:00:00.217) 0:00:10.182 ******* 2026-03-25 02:04:24.464460 | orchestrator | changed: [testbed-node-0] 2026-03-25 02:04:24.464463 | orchestrator | changed: [testbed-node-5] 2026-03-25 02:04:24.464467 | orchestrator | changed: [testbed-node-4] 2026-03-25 02:04:24.464471 | orchestrator | changed: [testbed-node-3] 2026-03-25 
02:04:24.464475 | orchestrator | changed: [testbed-node-2] 2026-03-25 02:04:24.464479 | orchestrator | changed: [testbed-node-1] 2026-03-25 02:04:24.464482 | orchestrator | 2026-03-25 02:04:24.464486 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2026-03-25 02:04:24.464490 | orchestrator | Wednesday 25 March 2026 02:04:23 +0000 (0:00:00.594) 0:00:10.777 ******* 2026-03-25 02:04:24.464494 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:04:24.464498 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:04:24.464502 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:04:24.464505 | orchestrator | skipping: [testbed-node-3] 2026-03-25 02:04:24.464509 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:04:24.464513 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:04:24.464517 | orchestrator | 2026-03-25 02:04:24.464520 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2026-03-25 02:04:24.464524 | orchestrator | Wednesday 25 March 2026 02:04:23 +0000 (0:00:00.207) 0:00:10.985 ******* 2026-03-25 02:04:24.464528 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-25 02:04:24.464541 | orchestrator | changed: [testbed-node-5] 2026-03-25 02:04:24.464545 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-03-25 02:04:24.464549 | orchestrator | changed: [testbed-node-0] 2026-03-25 02:04:24.464553 | orchestrator | changed: [testbed-node-2] => (item=None) 2026-03-25 02:04:24.464557 | orchestrator | changed: [testbed-node-2] 2026-03-25 02:04:24.464560 | orchestrator | changed: [testbed-node-1] => (item=None) 2026-03-25 02:04:24.464564 | orchestrator | changed: [testbed-node-1] 2026-03-25 02:04:24.464568 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-03-25 02:04:24.464572 | orchestrator | changed: [testbed-node-3] 2026-03-25 02:04:24.464576 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-03-25 
02:04:24.464580 | orchestrator | changed: [testbed-node-4] 2026-03-25 02:04:24.464583 | orchestrator | 2026-03-25 02:04:24.464587 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2026-03-25 02:04:24.464591 | orchestrator | Wednesday 25 March 2026 02:04:24 +0000 (0:00:00.731) 0:00:11.716 ******* 2026-03-25 02:04:24.464598 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:04:24.464602 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:04:24.464606 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:04:24.464610 | orchestrator | skipping: [testbed-node-3] 2026-03-25 02:04:24.464614 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:04:24.464617 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:04:24.464621 | orchestrator | 2026-03-25 02:04:24.464625 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2026-03-25 02:04:24.464629 | orchestrator | Wednesday 25 March 2026 02:04:24 +0000 (0:00:00.186) 0:00:11.903 ******* 2026-03-25 02:04:24.464633 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:04:24.464637 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:04:24.464640 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:04:24.464644 | orchestrator | skipping: [testbed-node-3] 2026-03-25 02:04:24.464654 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:04:25.916919 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:04:25.917070 | orchestrator | 2026-03-25 02:04:25.917091 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2026-03-25 02:04:25.917107 | orchestrator | Wednesday 25 March 2026 02:04:24 +0000 (0:00:00.191) 0:00:12.095 ******* 2026-03-25 02:04:25.917121 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:04:25.917135 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:04:25.917148 | orchestrator | skipping: [testbed-node-2] 2026-03-25 
02:04:25.917173 | orchestrator | skipping: [testbed-node-3] 2026-03-25 02:04:25.917195 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:04:25.917208 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:04:25.917221 | orchestrator | 2026-03-25 02:04:25.917234 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2026-03-25 02:04:25.917248 | orchestrator | Wednesday 25 March 2026 02:04:24 +0000 (0:00:00.183) 0:00:12.279 ******* 2026-03-25 02:04:25.917262 | orchestrator | changed: [testbed-node-0] 2026-03-25 02:04:25.917285 | orchestrator | changed: [testbed-node-1] 2026-03-25 02:04:25.917327 | orchestrator | changed: [testbed-node-2] 2026-03-25 02:04:25.917341 | orchestrator | changed: [testbed-node-3] 2026-03-25 02:04:25.917354 | orchestrator | changed: [testbed-node-4] 2026-03-25 02:04:25.917366 | orchestrator | changed: [testbed-node-5] 2026-03-25 02:04:25.917380 | orchestrator | 2026-03-25 02:04:25.917392 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2026-03-25 02:04:25.917406 | orchestrator | Wednesday 25 March 2026 02:04:25 +0000 (0:00:00.649) 0:00:12.929 ******* 2026-03-25 02:04:25.917419 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:04:25.917432 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:04:25.917446 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:04:25.917460 | orchestrator | skipping: [testbed-node-3] 2026-03-25 02:04:25.917473 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:04:25.917487 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:04:25.917500 | orchestrator | 2026-03-25 02:04:25.917514 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-25 02:04:25.917530 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-25 02:04:25.917546 | orchestrator | testbed-node-1 : ok=12  changed=8 
 unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-25 02:04:25.917560 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-25 02:04:25.917574 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-25 02:04:25.917589 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-25 02:04:25.917628 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-25 02:04:25.917643 | orchestrator | 2026-03-25 02:04:25.917656 | orchestrator | 2026-03-25 02:04:25.917669 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-25 02:04:25.917683 | orchestrator | Wednesday 25 March 2026 02:04:25 +0000 (0:00:00.287) 0:00:13.216 ******* 2026-03-25 02:04:25.917697 | orchestrator | =============================================================================== 2026-03-25 02:04:25.917710 | orchestrator | Gathering Facts --------------------------------------------------------- 3.32s 2026-03-25 02:04:25.917720 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.27s 2026-03-25 02:04:25.917730 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.25s 2026-03-25 02:04:25.917740 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.23s 2026-03-25 02:04:25.917751 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.77s 2026-03-25 02:04:25.917765 | orchestrator | Do not require tty for all users ---------------------------------------- 0.76s 2026-03-25 02:04:25.917779 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.73s 2026-03-25 02:04:25.917793 | orchestrator | osism.commons.operator : Set password 
----------------------------------- 0.65s 2026-03-25 02:04:25.917806 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.62s 2026-03-25 02:04:25.917820 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.59s 2026-03-25 02:04:25.917831 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.29s 2026-03-25 02:04:25.917841 | orchestrator | osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file --- 0.22s 2026-03-25 02:04:25.917851 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.21s 2026-03-25 02:04:25.917861 | orchestrator | osism.commons.operator : Set custom environment variables in .bashrc configuration file --- 0.21s 2026-03-25 02:04:25.917871 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.19s 2026-03-25 02:04:25.917883 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.19s 2026-03-25 02:04:25.917897 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.19s 2026-03-25 02:04:25.917910 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.18s 2026-03-25 02:04:25.917924 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.18s 2026-03-25 02:04:26.353004 | orchestrator | + osism apply --environment custom facts 2026-03-25 02:04:28.592422 | orchestrator | 2026-03-25 02:04:28 | INFO  | Trying to run play facts in environment custom 2026-03-25 02:04:38.723427 | orchestrator | 2026-03-25 02:04:38 | INFO  | Task f5836793-ea77-487d-b756-97d689d0b0ad (facts) was prepared for execution. 2026-03-25 02:04:38.723542 | orchestrator | 2026-03-25 02:04:38 | INFO  | It takes a moment until task f5836793-ea77-487d-b756-97d689d0b0ad (facts) has been started and output is visible here. 
2026-03-25 02:05:20.683933 | orchestrator | 2026-03-25 02:05:20.684192 | orchestrator | PLAY [Copy custom network devices fact] **************************************** 2026-03-25 02:05:20.684223 | orchestrator | 2026-03-25 02:05:20.684296 | orchestrator | TASK [Create custom facts directory] ******************************************* 2026-03-25 02:05:20.684318 | orchestrator | Wednesday 25 March 2026 02:04:43 +0000 (0:00:00.097) 0:00:00.097 ******* 2026-03-25 02:05:20.684338 | orchestrator | ok: [testbed-manager] 2026-03-25 02:05:20.684358 | orchestrator | changed: [testbed-node-1] 2026-03-25 02:05:20.684379 | orchestrator | changed: [testbed-node-5] 2026-03-25 02:05:20.684398 | orchestrator | changed: [testbed-node-2] 2026-03-25 02:05:20.684417 | orchestrator | changed: [testbed-node-4] 2026-03-25 02:05:20.684435 | orchestrator | changed: [testbed-node-3] 2026-03-25 02:05:20.684488 | orchestrator | changed: [testbed-node-0] 2026-03-25 02:05:20.684510 | orchestrator | 2026-03-25 02:05:20.684531 | orchestrator | TASK [Copy fact file] ********************************************************** 2026-03-25 02:05:20.684551 | orchestrator | Wednesday 25 March 2026 02:04:44 +0000 (0:00:01.356) 0:00:01.453 ******* 2026-03-25 02:05:20.684569 | orchestrator | ok: [testbed-manager] 2026-03-25 02:05:20.684590 | orchestrator | changed: [testbed-node-2] 2026-03-25 02:05:20.684610 | orchestrator | changed: [testbed-node-3] 2026-03-25 02:05:20.684646 | orchestrator | changed: [testbed-node-1] 2026-03-25 02:05:20.684665 | orchestrator | changed: [testbed-node-0] 2026-03-25 02:05:20.684683 | orchestrator | changed: [testbed-node-4] 2026-03-25 02:05:20.684702 | orchestrator | changed: [testbed-node-5] 2026-03-25 02:05:20.684721 | orchestrator | 2026-03-25 02:05:20.684741 | orchestrator | PLAY [Copy custom ceph devices facts] ****************************************** 2026-03-25 02:05:20.684761 | orchestrator | 2026-03-25 02:05:20.684781 | orchestrator | TASK 
[osism.commons.repository : Gather variables for each operating system] *** 2026-03-25 02:05:20.684800 | orchestrator | Wednesday 25 March 2026 02:04:45 +0000 (0:00:01.206) 0:00:02.659 ******* 2026-03-25 02:05:20.684820 | orchestrator | ok: [testbed-node-3] 2026-03-25 02:05:20.684838 | orchestrator | ok: [testbed-node-4] 2026-03-25 02:05:20.684858 | orchestrator | ok: [testbed-node-5] 2026-03-25 02:05:20.684877 | orchestrator | 2026-03-25 02:05:20.684895 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-03-25 02:05:20.684914 | orchestrator | Wednesday 25 March 2026 02:04:45 +0000 (0:00:00.123) 0:00:02.783 ******* 2026-03-25 02:05:20.684932 | orchestrator | ok: [testbed-node-3] 2026-03-25 02:05:20.684950 | orchestrator | ok: [testbed-node-4] 2026-03-25 02:05:20.684967 | orchestrator | ok: [testbed-node-5] 2026-03-25 02:05:20.685014 | orchestrator | 2026-03-25 02:05:20.685033 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-03-25 02:05:20.685053 | orchestrator | Wednesday 25 March 2026 02:04:46 +0000 (0:00:00.234) 0:00:03.017 ******* 2026-03-25 02:05:20.685070 | orchestrator | ok: [testbed-node-3] 2026-03-25 02:05:20.685088 | orchestrator | ok: [testbed-node-4] 2026-03-25 02:05:20.685099 | orchestrator | ok: [testbed-node-5] 2026-03-25 02:05:20.685110 | orchestrator | 2026-03-25 02:05:20.685121 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-03-25 02:05:20.685133 | orchestrator | Wednesday 25 March 2026 02:04:46 +0000 (0:00:00.256) 0:00:03.274 ******* 2026-03-25 02:05:20.685146 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-25 02:05:20.685158 | orchestrator | 2026-03-25 02:05:20.685169 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d 
directory] ***** 2026-03-25 02:05:20.685180 | orchestrator | Wednesday 25 March 2026 02:04:46 +0000 (0:00:00.191) 0:00:03.465 ******* 2026-03-25 02:05:20.685191 | orchestrator | ok: [testbed-node-3] 2026-03-25 02:05:20.685202 | orchestrator | ok: [testbed-node-4] 2026-03-25 02:05:20.685213 | orchestrator | ok: [testbed-node-5] 2026-03-25 02:05:20.685223 | orchestrator | 2026-03-25 02:05:20.685234 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-03-25 02:05:20.685245 | orchestrator | Wednesday 25 March 2026 02:04:47 +0000 (0:00:00.430) 0:00:03.896 ******* 2026-03-25 02:05:20.685256 | orchestrator | skipping: [testbed-node-3] 2026-03-25 02:05:20.685267 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:05:20.685278 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:05:20.685289 | orchestrator | 2026-03-25 02:05:20.685300 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-03-25 02:05:20.685311 | orchestrator | Wednesday 25 March 2026 02:04:47 +0000 (0:00:00.134) 0:00:04.030 ******* 2026-03-25 02:05:20.685322 | orchestrator | changed: [testbed-node-4] 2026-03-25 02:05:20.685333 | orchestrator | changed: [testbed-node-3] 2026-03-25 02:05:20.685344 | orchestrator | changed: [testbed-node-5] 2026-03-25 02:05:20.685355 | orchestrator | 2026-03-25 02:05:20.685367 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-03-25 02:05:20.685400 | orchestrator | Wednesday 25 March 2026 02:04:48 +0000 (0:00:01.061) 0:00:05.091 ******* 2026-03-25 02:05:20.685419 | orchestrator | ok: [testbed-node-3] 2026-03-25 02:05:20.685436 | orchestrator | ok: [testbed-node-4] 2026-03-25 02:05:20.685453 | orchestrator | ok: [testbed-node-5] 2026-03-25 02:05:20.685471 | orchestrator | 2026-03-25 02:05:20.685487 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-03-25 
02:05:20.685504 | orchestrator | Wednesday 25 March 2026 02:04:48 +0000 (0:00:00.462) 0:00:05.554 ******* 2026-03-25 02:05:20.685522 | orchestrator | changed: [testbed-node-5] 2026-03-25 02:05:20.685540 | orchestrator | changed: [testbed-node-3] 2026-03-25 02:05:20.685557 | orchestrator | changed: [testbed-node-4] 2026-03-25 02:05:20.685575 | orchestrator | 2026-03-25 02:05:20.685592 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-03-25 02:05:20.685671 | orchestrator | Wednesday 25 March 2026 02:04:49 +0000 (0:00:01.080) 0:00:06.634 ******* 2026-03-25 02:05:20.685693 | orchestrator | changed: [testbed-node-3] 2026-03-25 02:05:20.685713 | orchestrator | changed: [testbed-node-5] 2026-03-25 02:05:20.685730 | orchestrator | changed: [testbed-node-4] 2026-03-25 02:05:20.685748 | orchestrator | 2026-03-25 02:05:20.685763 | orchestrator | TASK [Install required packages (RedHat)] ************************************** 2026-03-25 02:05:20.685774 | orchestrator | Wednesday 25 March 2026 02:05:04 +0000 (0:00:15.079) 0:00:21.714 ******* 2026-03-25 02:05:20.685785 | orchestrator | skipping: [testbed-node-3] 2026-03-25 02:05:20.685796 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:05:20.685807 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:05:20.685832 | orchestrator | 2026-03-25 02:05:20.685843 | orchestrator | TASK [Install required packages (Debian)] ************************************** 2026-03-25 02:05:20.685878 | orchestrator | Wednesday 25 March 2026 02:05:04 +0000 (0:00:00.094) 0:00:21.808 ******* 2026-03-25 02:05:20.685891 | orchestrator | changed: [testbed-node-3] 2026-03-25 02:05:20.685902 | orchestrator | changed: [testbed-node-5] 2026-03-25 02:05:20.685913 | orchestrator | changed: [testbed-node-4] 2026-03-25 02:05:20.685924 | orchestrator | 2026-03-25 02:05:20.685935 | orchestrator | TASK [Create custom facts directory] ******************************************* 2026-03-25 
02:05:20.685952 | orchestrator | Wednesday 25 March 2026 02:05:11 +0000 (0:00:06.933) 0:00:28.742 ******* 2026-03-25 02:05:20.685964 | orchestrator | ok: [testbed-node-3] 2026-03-25 02:05:20.686002 | orchestrator | ok: [testbed-node-4] 2026-03-25 02:05:20.686075 | orchestrator | ok: [testbed-node-5] 2026-03-25 02:05:20.686087 | orchestrator | 2026-03-25 02:05:20.686098 | orchestrator | TASK [Copy fact files] ********************************************************* 2026-03-25 02:05:20.686190 | orchestrator | Wednesday 25 March 2026 02:05:12 +0000 (0:00:00.483) 0:00:29.225 ******* 2026-03-25 02:05:20.686204 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices) 2026-03-25 02:05:20.686215 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices) 2026-03-25 02:05:20.686226 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices) 2026-03-25 02:05:20.686237 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all) 2026-03-25 02:05:20.686247 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all) 2026-03-25 02:05:20.686258 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all) 2026-03-25 02:05:20.686269 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices) 2026-03-25 02:05:20.686280 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices) 2026-03-25 02:05:20.686290 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices) 2026-03-25 02:05:20.686301 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all) 2026-03-25 02:05:20.686312 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all) 2026-03-25 02:05:20.686323 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all) 2026-03-25 02:05:20.686334 | orchestrator | 2026-03-25 02:05:20.686351 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of 
package cache] *****
2026-03-25 02:05:20.686384 | orchestrator | Wednesday 25 March 2026 02:05:15 +0000 (0:00:03.372) 0:00:32.598 *******
2026-03-25 02:05:20.686402 | orchestrator | ok: [testbed-node-3]
2026-03-25 02:05:20.686422 | orchestrator | ok: [testbed-node-5]
2026-03-25 02:05:20.686443 | orchestrator | ok: [testbed-node-4]
2026-03-25 02:05:20.686463 | orchestrator |
2026-03-25 02:05:20.686482 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-03-25 02:05:20.686500 | orchestrator |
2026-03-25 02:05:20.686520 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-03-25 02:05:20.686541 | orchestrator | Wednesday 25 March 2026 02:05:17 +0000 (0:00:01.403) 0:00:34.001 *******
2026-03-25 02:05:20.686561 | orchestrator | ok: [testbed-node-2]
2026-03-25 02:05:20.686580 | orchestrator | ok: [testbed-node-0]
2026-03-25 02:05:20.686602 | orchestrator | ok: [testbed-node-1]
2026-03-25 02:05:20.686622 | orchestrator | ok: [testbed-manager]
2026-03-25 02:05:20.686643 | orchestrator | ok: [testbed-node-3]
2026-03-25 02:05:20.686664 | orchestrator | ok: [testbed-node-5]
2026-03-25 02:05:20.686683 | orchestrator | ok: [testbed-node-4]
2026-03-25 02:05:20.686695 | orchestrator |
2026-03-25 02:05:20.686706 | orchestrator | PLAY RECAP *********************************************************************
2026-03-25 02:05:20.686718 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-25 02:05:20.686730 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-25 02:05:20.686743 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-25 02:05:20.686754 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-25 02:05:20.686765 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-25 02:05:20.686776 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-25 02:05:20.686787 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-25 02:05:20.686797 | orchestrator |
2026-03-25 02:05:20.686808 | orchestrator |
2026-03-25 02:05:20.686819 | orchestrator | TASKS RECAP ********************************************************************
2026-03-25 02:05:20.686831 | orchestrator | Wednesday 25 March 2026 02:05:20 +0000 (0:00:03.528) 0:00:37.530 *******
2026-03-25 02:05:20.686842 | orchestrator | ===============================================================================
2026-03-25 02:05:20.686853 | orchestrator | osism.commons.repository : Update package cache ------------------------ 15.08s
2026-03-25 02:05:20.686863 | orchestrator | Install required packages (Debian) -------------------------------------- 6.93s
2026-03-25 02:05:20.686874 | orchestrator | Gathers facts about hosts ----------------------------------------------- 3.53s
2026-03-25 02:05:20.686885 | orchestrator | Copy fact files --------------------------------------------------------- 3.37s
2026-03-25 02:05:20.686896 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.40s
2026-03-25 02:05:20.686907 | orchestrator | Create custom facts directory ------------------------------------------- 1.36s
2026-03-25 02:05:20.686932 | orchestrator | Copy fact file ---------------------------------------------------------- 1.21s
2026-03-25 02:05:20.974464 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.08s
2026-03-25 02:05:20.974552 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.06s
2026-03-25 02:05:20.974579 | orchestrator | Create custom facts directory ------------------------------------------- 0.48s
2026-03-25 02:05:20.974609 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.46s
2026-03-25 02:05:20.974616 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.43s
2026-03-25 02:05:20.974624 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.26s
2026-03-25 02:05:20.974632 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.23s
2026-03-25 02:05:20.974639 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.19s
2026-03-25 02:05:20.974648 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.13s
2026-03-25 02:05:20.974656 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.12s
2026-03-25 02:05:20.974663 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.09s
2026-03-25 02:05:21.380210 | orchestrator | + osism apply bootstrap
2026-03-25 02:05:33.639878 | orchestrator | 2026-03-25 02:05:33 | INFO  | Task 16a5c2a0-55cd-4ea1-a34d-41f83d0cd706 (bootstrap) was prepared for execution.
2026-03-25 02:05:33.640061 | orchestrator | 2026-03-25 02:05:33 | INFO  | It takes a moment until task 16a5c2a0-55cd-4ea1-a34d-41f83d0cd706 (bootstrap) has been started and output is visible here.
2026-03-25 02:05:50.631158 | orchestrator |
2026-03-25 02:05:50.631246 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************
2026-03-25 02:05:50.631256 | orchestrator |
2026-03-25 02:05:50.631263 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************
2026-03-25 02:05:50.631269 | orchestrator | Wednesday 25 March 2026 02:05:38 +0000 (0:00:00.176) 0:00:00.176 *******
2026-03-25 02:05:50.631274 | orchestrator | ok: [testbed-manager]
2026-03-25 02:05:50.631281 | orchestrator | ok: [testbed-node-3]
2026-03-25 02:05:50.631287 | orchestrator | ok: [testbed-node-4]
2026-03-25 02:05:50.631292 | orchestrator | ok: [testbed-node-5]
2026-03-25 02:05:50.631298 | orchestrator | ok: [testbed-node-0]
2026-03-25 02:05:50.631303 | orchestrator | ok: [testbed-node-1]
2026-03-25 02:05:50.631308 | orchestrator | ok: [testbed-node-2]
2026-03-25 02:05:50.631314 | orchestrator |
2026-03-25 02:05:50.631320 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-03-25 02:05:50.631325 | orchestrator |
2026-03-25 02:05:50.631331 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-03-25 02:05:50.631340 | orchestrator | Wednesday 25 March 2026 02:05:38 +0000 (0:00:00.308) 0:00:00.485 *******
2026-03-25 02:05:50.631350 | orchestrator | ok: [testbed-node-1]
2026-03-25 02:05:50.631358 | orchestrator | ok: [testbed-node-2]
2026-03-25 02:05:50.631364 | orchestrator | ok: [testbed-node-0]
2026-03-25 02:05:50.631369 | orchestrator | ok: [testbed-manager]
2026-03-25 02:05:50.631374 | orchestrator | ok: [testbed-node-3]
2026-03-25 02:05:50.631380 | orchestrator | ok: [testbed-node-4]
2026-03-25 02:05:50.631385 | orchestrator | ok: [testbed-node-5]
2026-03-25 02:05:50.631391 | orchestrator |
2026-03-25 02:05:50.631396 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] ***************************
2026-03-25 02:05:50.631402 | orchestrator |
2026-03-25 02:05:50.631407 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-03-25 02:05:50.631412 | orchestrator | Wednesday 25 March 2026 02:05:42 +0000 (0:00:03.550) 0:00:04.035 *******
2026-03-25 02:05:50.631419 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2026-03-25 02:05:50.631425 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2026-03-25 02:05:50.631430 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2026-03-25 02:05:50.631436 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)
2026-03-25 02:05:50.631441 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2026-03-25 02:05:50.631447 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2026-03-25 02:05:50.631452 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-25 02:05:50.631458 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2026-03-25 02:05:50.631463 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)
2026-03-25 02:05:50.631487 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-25 02:05:50.631492 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2026-03-25 02:05:50.631498 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)
2026-03-25 02:05:50.631503 | orchestrator | skipping: [testbed-manager]
2026-03-25 02:05:50.631509 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-25 02:05:50.631515 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-03-25 02:05:50.631520 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-03-25 02:05:50.631526 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-03-25 02:05:50.631531 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)
2026-03-25 02:05:50.631536 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-03-25 02:05:50.631542 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-03-25 02:05:50.631547 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-03-25 02:05:50.631552 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-03-25 02:05:50.631557 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-03-25 02:05:50.631563 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-03-25 02:05:50.631568 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-03-25 02:05:50.631573 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-03-25 02:05:50.631579 | orchestrator | skipping: [testbed-node-3]
2026-03-25 02:05:50.631584 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-03-25 02:05:50.631589 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-03-25 02:05:50.631595 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)
2026-03-25 02:05:50.631601 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-03-25 02:05:50.631606 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-03-25 02:05:50.631612 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-03-25 02:05:50.631617 | orchestrator | skipping: [testbed-node-4]
2026-03-25 02:05:50.631622 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-03-25 02:05:50.631628 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)
2026-03-25 02:05:50.631633 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-03-25 02:05:50.631638 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-03-25 02:05:50.631644 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-03-25 02:05:50.631649 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-03-25 02:05:50.631654 | orchestrator | skipping: [testbed-node-5]
2026-03-25 02:05:50.631660 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-03-25 02:05:50.631665 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-03-25 02:05:50.631670 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-03-25 02:05:50.631675 | orchestrator | skipping: [testbed-node-0]
2026-03-25 02:05:50.631681 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-03-25 02:05:50.631686 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-03-25 02:05:50.631703 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-03-25 02:05:50.631710 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-03-25 02:05:50.631716 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-03-25 02:05:50.631723 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-03-25 02:05:50.631729 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-03-25 02:05:50.631736 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-03-25 02:05:50.631742 | orchestrator | skipping: [testbed-node-1]
2026-03-25 02:05:50.631754 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-03-25 02:05:50.631774 | orchestrator | skipping: [testbed-node-2]
2026-03-25 02:05:50.631780 | orchestrator |
2026-03-25 02:05:50.631787 | orchestrator | PLAY [Apply bootstrap roles part 1] ********************************************
2026-03-25 02:05:50.631793 | orchestrator |
2026-03-25 02:05:50.631800 | orchestrator | TASK [osism.commons.hostname : Set hostname] ***********************************
2026-03-25 02:05:50.631806 | orchestrator | Wednesday 25 March 2026 02:05:42 +0000 (0:00:00.540) 0:00:04.576 *******
2026-03-25 02:05:50.631812 | orchestrator | ok: [testbed-node-0]
2026-03-25 02:05:50.631818 | orchestrator | ok: [testbed-node-3]
2026-03-25 02:05:50.631825 | orchestrator | ok: [testbed-node-2]
2026-03-25 02:05:50.631831 | orchestrator | ok: [testbed-node-4]
2026-03-25 02:05:50.631837 | orchestrator | ok: [testbed-node-1]
2026-03-25 02:05:50.631843 | orchestrator | ok: [testbed-node-5]
2026-03-25 02:05:50.631849 | orchestrator | ok: [testbed-manager]
2026-03-25 02:05:50.631855 | orchestrator |
2026-03-25 02:05:50.631862 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] *****************************
2026-03-25 02:05:50.631868 | orchestrator | Wednesday 25 March 2026 02:05:44 +0000 (0:00:01.227) 0:00:05.803 *******
2026-03-25 02:05:50.631874 | orchestrator | ok: [testbed-node-4]
2026-03-25 02:05:50.631881 | orchestrator | ok: [testbed-manager]
2026-03-25 02:05:50.631887 | orchestrator | ok: [testbed-node-5]
2026-03-25 02:05:50.631893 | orchestrator | ok: [testbed-node-1]
2026-03-25 02:05:50.631899 | orchestrator | ok: [testbed-node-2]
2026-03-25 02:05:50.631906 | orchestrator | ok: [testbed-node-3]
2026-03-25 02:05:50.631912 | orchestrator | ok: [testbed-node-0]
2026-03-25 02:05:50.631918 | orchestrator |
2026-03-25 02:05:50.631924 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] ***********************
2026-03-25 02:05:50.631930 | orchestrator | Wednesday 25 March 2026 02:05:45 +0000 (0:00:01.449) 0:00:07.252 *******
2026-03-25 02:05:50.631937 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-25 02:05:50.631947 | orchestrator |
2026-03-25 02:05:50.631958 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ******************************
2026-03-25 02:05:50.631964 | orchestrator | Wednesday 25 March 2026 02:05:45 +0000 (0:00:00.306) 0:00:07.559 *******
2026-03-25 02:05:50.631986 | orchestrator | changed: [testbed-node-0]
2026-03-25 02:05:50.631993 | orchestrator | changed: [testbed-node-4]
2026-03-25 02:05:50.632000 | orchestrator | changed: [testbed-node-1]
2026-03-25 02:05:50.632006 | orchestrator | changed: [testbed-node-5]
2026-03-25 02:05:50.632012 | orchestrator | changed: [testbed-node-3]
2026-03-25 02:05:50.632018 | orchestrator | changed: [testbed-manager]
2026-03-25 02:05:50.632024 | orchestrator | changed: [testbed-node-2]
2026-03-25 02:05:50.632029 | orchestrator |
2026-03-25 02:05:50.632035 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] ***************
2026-03-25 02:05:50.632040 | orchestrator | Wednesday 25 March 2026 02:05:48 +0000 (0:00:02.229) 0:00:09.788 *******
2026-03-25 02:05:50.632046 | orchestrator | skipping: [testbed-manager]
2026-03-25 02:05:50.632053 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-25 02:05:50.632060 | orchestrator |
2026-03-25 02:05:50.632066 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] ****************
2026-03-25 02:05:50.632071 | orchestrator | Wednesday 25 March 2026 02:05:48 +0000 (0:00:00.313) 0:00:10.101 *******
2026-03-25 02:05:50.632077 | orchestrator | changed: [testbed-node-3]
2026-03-25 02:05:50.632082 | orchestrator | changed: [testbed-node-4]
2026-03-25 02:05:50.632087 | orchestrator | changed: [testbed-node-5]
2026-03-25 02:05:50.632093 | orchestrator | changed: [testbed-node-0]
2026-03-25 02:05:50.632098 | orchestrator | changed: [testbed-node-1]
2026-03-25 02:05:50.632104 | orchestrator | changed: [testbed-node-2]
2026-03-25 02:05:50.632115 | orchestrator |
2026-03-25 02:05:50.632124 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment file] ******
2026-03-25 02:05:50.632130 | orchestrator | Wednesday 25 March 2026 02:05:49 +0000 (0:00:01.060) 0:00:11.162 *******
2026-03-25 02:05:50.632135 | orchestrator | skipping: [testbed-manager]
2026-03-25 02:05:50.632141 | orchestrator | changed: [testbed-node-2]
2026-03-25 02:05:50.632146 | orchestrator | changed: [testbed-node-3]
2026-03-25 02:05:50.632151 | orchestrator | changed: [testbed-node-0]
2026-03-25 02:05:50.632156 | orchestrator | changed: [testbed-node-5]
2026-03-25 02:05:50.632162 | orchestrator | changed: [testbed-node-1]
2026-03-25 02:05:50.632167 | orchestrator | changed: [testbed-node-4]
2026-03-25 02:05:50.632173 | orchestrator |
2026-03-25 02:05:50.632178 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] ***
2026-03-25 02:05:50.632184 | orchestrator | Wednesday 25 March 2026 02:05:50 +0000 (0:00:00.578) 0:00:11.741 *******
2026-03-25 02:05:50.632189 | orchestrator | skipping: [testbed-node-3]
2026-03-25 02:05:50.632194 | orchestrator | skipping: [testbed-node-4]
2026-03-25 02:05:50.632200 | orchestrator | skipping: [testbed-node-5]
2026-03-25 02:05:50.632205 | orchestrator | skipping: [testbed-node-0]
2026-03-25 02:05:50.632210 | orchestrator | skipping: [testbed-node-1]
2026-03-25 02:05:50.632216 | orchestrator | skipping: [testbed-node-2]
2026-03-25 02:05:50.632221 | orchestrator | ok: [testbed-manager]
2026-03-25 02:05:50.632226 | orchestrator |
2026-03-25 02:05:50.632232 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] ***
2026-03-25 02:05:50.632238 | orchestrator | Wednesday 25 March 2026 02:05:50 +0000 (0:00:00.244) 0:00:12.226 *******
2026-03-25 02:05:50.632244 | orchestrator | skipping: [testbed-manager]
2026-03-25 02:05:50.632249 | orchestrator | skipping: [testbed-node-3]
2026-03-25 02:05:50.632258 | orchestrator | skipping: [testbed-node-4]
2026-03-25 02:06:03.404273 | orchestrator | skipping: [testbed-node-5]
2026-03-25 02:06:03.404398 | orchestrator | skipping: [testbed-node-0]
2026-03-25 02:06:03.404413 | orchestrator | skipping: [testbed-node-1]
2026-03-25 02:06:03.404424 | orchestrator | skipping: [testbed-node-2]
2026-03-25 02:06:03.404434 | orchestrator |
2026-03-25 02:06:03.404445 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] *********************
2026-03-25 02:06:03.404456 | orchestrator | Wednesday 25 March 2026 02:05:50 +0000 (0:00:00.244) 0:00:12.470 *******
2026-03-25 02:06:03.404468 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-25 02:06:03.404495 | orchestrator |
2026-03-25 02:06:03.404506 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] ***
2026-03-25 02:06:03.404517 | orchestrator | Wednesday 25 March 2026 02:05:51 +0000 (0:00:00.354) 0:00:12.824 *******
2026-03-25 02:06:03.404527 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-25 02:06:03.404538 | orchestrator |
2026-03-25 02:06:03.404548 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] ***
2026-03-25 02:06:03.404558 | orchestrator | Wednesday 25 March 2026 02:05:51 +0000 (0:00:00.323) 0:00:13.148 *******
2026-03-25 02:06:03.404568 | orchestrator | ok: [testbed-node-3]
2026-03-25 02:06:03.404579 | orchestrator | ok: [testbed-node-2]
2026-03-25 02:06:03.404589 | orchestrator | ok: [testbed-node-0]
2026-03-25 02:06:03.404599 | orchestrator | ok: [testbed-node-1]
2026-03-25 02:06:03.404609 | orchestrator | ok: [testbed-node-5]
2026-03-25 02:06:03.404619 | orchestrator | ok: [testbed-node-4]
2026-03-25 02:06:03.404629 | orchestrator | ok: [testbed-manager]
2026-03-25 02:06:03.404639 | orchestrator |
2026-03-25 02:06:03.404649 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] *************
2026-03-25 02:06:03.404659 | orchestrator | Wednesday 25 March 2026 02:05:52 +0000 (0:00:01.490) 0:00:14.639 *******
2026-03-25 02:06:03.404693 | orchestrator | skipping: [testbed-manager]
2026-03-25 02:06:03.404704 | orchestrator | skipping: [testbed-node-3]
2026-03-25 02:06:03.404714 | orchestrator | skipping: [testbed-node-4]
2026-03-25 02:06:03.404723 | orchestrator | skipping: [testbed-node-5]
2026-03-25 02:06:03.404733 | orchestrator | skipping: [testbed-node-0]
2026-03-25 02:06:03.404743 | orchestrator | skipping: [testbed-node-1]
2026-03-25 02:06:03.404752 | orchestrator | skipping: [testbed-node-2]
2026-03-25 02:06:03.404762 | orchestrator |
2026-03-25 02:06:03.404774 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] *****
2026-03-25 02:06:03.404785 | orchestrator | Wednesday 25 March 2026 02:05:53 +0000 (0:00:00.370) 0:00:15.009 *******
2026-03-25 02:06:03.404797 | orchestrator | ok: [testbed-node-3]
2026-03-25 02:06:03.404809 | orchestrator | ok: [testbed-manager]
2026-03-25 02:06:03.404820 | orchestrator | ok: [testbed-node-4]
2026-03-25 02:06:03.404831 | orchestrator | ok: [testbed-node-5]
2026-03-25 02:06:03.404843 | orchestrator | ok: [testbed-node-0]
2026-03-25 02:06:03.404853 | orchestrator | ok: [testbed-node-1]
2026-03-25 02:06:03.404865 | orchestrator | ok: [testbed-node-2]
2026-03-25 02:06:03.404877 | orchestrator |
2026-03-25 02:06:03.404888 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] *******
2026-03-25 02:06:03.404900 | orchestrator | Wednesday 25 March 2026 02:05:53 +0000 (0:00:00.546) 0:00:15.555 *******
2026-03-25 02:06:03.404912 | orchestrator | skipping: [testbed-manager]
2026-03-25 02:06:03.404923 | orchestrator | skipping: [testbed-node-3]
2026-03-25 02:06:03.404935 | orchestrator | skipping: [testbed-node-4]
2026-03-25 02:06:03.404946 | orchestrator | skipping: [testbed-node-5]
2026-03-25 02:06:03.404957 | orchestrator | skipping: [testbed-node-0]
2026-03-25 02:06:03.405030 | orchestrator | skipping: [testbed-node-1]
2026-03-25 02:06:03.405047 | orchestrator | skipping: [testbed-node-2]
2026-03-25 02:06:03.405059 | orchestrator |
2026-03-25 02:06:03.405069 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] ***
2026-03-25 02:06:03.405081 | orchestrator | Wednesday 25 March 2026 02:05:54 +0000 (0:00:00.285) 0:00:15.841 *******
2026-03-25 02:06:03.405091 | orchestrator | changed: [testbed-node-3]
2026-03-25 02:06:03.405101 | orchestrator | ok: [testbed-manager]
2026-03-25 02:06:03.405111 | orchestrator | changed: [testbed-node-4]
2026-03-25 02:06:03.405121 | orchestrator | changed: [testbed-node-5]
2026-03-25 02:06:03.405130 | orchestrator | changed: [testbed-node-0]
2026-03-25 02:06:03.405140 | orchestrator | changed: [testbed-node-1]
2026-03-25 02:06:03.405160 | orchestrator | changed: [testbed-node-2]
2026-03-25 02:06:03.405170 | orchestrator |
2026-03-25 02:06:03.405180 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] *********************
2026-03-25 02:06:03.405190 | orchestrator | Wednesday 25 March 2026 02:05:54 +0000 (0:00:00.563) 0:00:16.405 *******
2026-03-25 02:06:03.405201 | orchestrator | ok: [testbed-manager]
2026-03-25 02:06:03.405211 | orchestrator | changed: [testbed-node-3]
2026-03-25 02:06:03.405220 | orchestrator | changed: [testbed-node-4]
2026-03-25 02:06:03.405230 | orchestrator | changed: [testbed-node-5]
2026-03-25 02:06:03.405240 | orchestrator | changed: [testbed-node-0]
2026-03-25 02:06:03.405250 | orchestrator | changed: [testbed-node-1]
2026-03-25 02:06:03.405260 | orchestrator | changed: [testbed-node-2]
2026-03-25 02:06:03.405270 | orchestrator |
2026-03-25 02:06:03.405280 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ********
2026-03-25 02:06:03.405290 | orchestrator | Wednesday 25 March 2026 02:05:55 +0000 (0:00:01.106) 0:00:17.512 *******
2026-03-25 02:06:03.405305 | orchestrator | ok: [testbed-node-5]
2026-03-25 02:06:03.405322 | orchestrator | ok: [testbed-node-3]
2026-03-25 02:06:03.405342 | orchestrator | ok: [testbed-node-2]
2026-03-25 02:06:03.405367 | orchestrator | ok: [testbed-node-4]
2026-03-25 02:06:03.405382 | orchestrator | ok: [testbed-node-0]
2026-03-25 02:06:03.405397 | orchestrator | ok: [testbed-node-1]
2026-03-25 02:06:03.405411 | orchestrator | ok: [testbed-manager]
2026-03-25 02:06:03.405427 | orchestrator |
2026-03-25 02:06:03.405442 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] ***
2026-03-25 02:06:03.405468 | orchestrator | Wednesday 25 March 2026 02:05:56 +0000 (0:00:01.124) 0:00:18.636 *******
2026-03-25 02:06:03.405508 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-25 02:06:03.405526 | orchestrator |
2026-03-25 02:06:03.405542 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] *************
2026-03-25 02:06:03.405558 | orchestrator | Wednesday 25 March 2026 02:05:57 +0000 (0:00:00.361) 0:00:18.998 *******
2026-03-25 02:06:03.405574 | orchestrator | skipping: [testbed-manager]
2026-03-25 02:06:03.405590 | orchestrator | changed: [testbed-node-0]
2026-03-25 02:06:03.405600 | orchestrator | changed: [testbed-node-2]
2026-03-25 02:06:03.405610 | orchestrator | changed: [testbed-node-1]
2026-03-25 02:06:03.405619 | orchestrator | changed: [testbed-node-4]
2026-03-25 02:06:03.405629 | orchestrator | changed: [testbed-node-5]
2026-03-25 02:06:03.405639 | orchestrator | changed: [testbed-node-3]
2026-03-25 02:06:03.405648 | orchestrator |
2026-03-25 02:06:03.405658 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2026-03-25 02:06:03.405668 | orchestrator | Wednesday 25 March 2026 02:05:58 +0000 (0:00:01.325) 0:00:20.324 *******
2026-03-25 02:06:03.405677 | orchestrator | ok: [testbed-manager]
2026-03-25 02:06:03.405687 | orchestrator | ok: [testbed-node-3]
2026-03-25 02:06:03.405697 | orchestrator | ok: [testbed-node-4]
2026-03-25 02:06:03.405706 | orchestrator | ok: [testbed-node-5]
2026-03-25 02:06:03.405716 | orchestrator | ok: [testbed-node-0]
2026-03-25 02:06:03.405726 | orchestrator | ok: [testbed-node-1]
2026-03-25 02:06:03.405735 | orchestrator | ok: [testbed-node-2]
2026-03-25 02:06:03.405745 | orchestrator |
2026-03-25 02:06:03.405755 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2026-03-25 02:06:03.405765 | orchestrator | Wednesday 25 March 2026 02:05:58 +0000 (0:00:00.259) 0:00:20.583 *******
2026-03-25 02:06:03.405774 | orchestrator | ok: [testbed-manager]
2026-03-25 02:06:03.405784 | orchestrator | ok: [testbed-node-3]
2026-03-25 02:06:03.405794 | orchestrator | ok: [testbed-node-4]
2026-03-25 02:06:03.405803 | orchestrator | ok: [testbed-node-5]
2026-03-25 02:06:03.405812 | orchestrator | ok: [testbed-node-0]
2026-03-25 02:06:03.405822 | orchestrator | ok: [testbed-node-1]
2026-03-25 02:06:03.405831 | orchestrator | ok: [testbed-node-2]
2026-03-25 02:06:03.405841 | orchestrator |
2026-03-25 02:06:03.405851 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2026-03-25 02:06:03.405861 | orchestrator | Wednesday 25 March 2026 02:05:59 +0000 (0:00:00.272) 0:00:20.856 *******
2026-03-25 02:06:03.405870 | orchestrator | ok: [testbed-manager]
2026-03-25 02:06:03.405880 | orchestrator | ok: [testbed-node-3]
2026-03-25 02:06:03.405889 | orchestrator | ok: [testbed-node-4]
2026-03-25 02:06:03.405899 | orchestrator | ok: [testbed-node-5]
2026-03-25 02:06:03.405908 | orchestrator | ok: [testbed-node-0]
2026-03-25 02:06:03.405918 | orchestrator | ok: [testbed-node-1]
2026-03-25 02:06:03.405927 | orchestrator | ok: [testbed-node-2]
2026-03-25 02:06:03.405936 | orchestrator |
2026-03-25 02:06:03.405946 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2026-03-25 02:06:03.405956 | orchestrator | Wednesday 25 March 2026 02:05:59 +0000 (0:00:00.274) 0:00:21.130 *******
2026-03-25 02:06:03.405995 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-25 02:06:03.406086 | orchestrator |
2026-03-25 02:06:03.406110 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2026-03-25 02:06:03.406128 | orchestrator | Wednesday 25 March 2026 02:05:59 +0000 (0:00:00.343) 0:00:21.474 *******
2026-03-25 02:06:03.406144 | orchestrator | ok: [testbed-manager]
2026-03-25 02:06:03.406160 | orchestrator | ok: [testbed-node-3]
2026-03-25 02:06:03.406190 | orchestrator | ok: [testbed-node-4]
2026-03-25 02:06:03.406207 | orchestrator | ok: [testbed-node-5]
2026-03-25 02:06:03.406223 | orchestrator | ok: [testbed-node-0]
2026-03-25 02:06:03.406239 | orchestrator | ok: [testbed-node-1]
2026-03-25 02:06:03.406256 | orchestrator | ok: [testbed-node-2]
2026-03-25 02:06:03.406273 | orchestrator |
2026-03-25 02:06:03.406290 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2026-03-25 02:06:03.406306 | orchestrator | Wednesday 25 March 2026 02:06:00 +0000 (0:00:00.556) 0:00:22.031 *******
2026-03-25 02:06:03.406322 | orchestrator | skipping: [testbed-manager]
2026-03-25 02:06:03.406339 | orchestrator | skipping: [testbed-node-3]
2026-03-25 02:06:03.406356 | orchestrator | skipping: [testbed-node-4]
2026-03-25 02:06:03.406371 | orchestrator | skipping: [testbed-node-5]
2026-03-25 02:06:03.406388 | orchestrator | skipping: [testbed-node-0]
2026-03-25 02:06:03.406404 | orchestrator | skipping: [testbed-node-1]
2026-03-25 02:06:03.406421 | orchestrator | skipping: [testbed-node-2]
2026-03-25 02:06:03.406468 | orchestrator |
2026-03-25 02:06:03.406486 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2026-03-25 02:06:03.406504 | orchestrator | Wednesday 25 March 2026 02:06:00 +0000 (0:00:00.293) 0:00:22.324 *******
2026-03-25 02:06:03.406519 | orchestrator | ok: [testbed-node-3]
2026-03-25 02:06:03.406536 | orchestrator | ok: [testbed-manager]
2026-03-25 02:06:03.406552 | orchestrator | ok: [testbed-node-4]
2026-03-25 02:06:03.406569 | orchestrator | ok: [testbed-node-5]
2026-03-25 02:06:03.406586 | orchestrator | changed: [testbed-node-0]
2026-03-25 02:06:03.406602 | orchestrator | changed: [testbed-node-1]
2026-03-25 02:06:03.406618 | orchestrator | changed: [testbed-node-2]
2026-03-25 02:06:03.406633 | orchestrator |
2026-03-25 02:06:03.406650 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2026-03-25 02:06:03.406665 | orchestrator | Wednesday 25 March 2026 02:06:01 +0000 (0:00:01.081) 0:00:23.406 *******
2026-03-25 02:06:03.406680 | orchestrator | ok: [testbed-node-4]
2026-03-25 02:06:03.406695 | orchestrator | ok: [testbed-node-3]
2026-03-25 02:06:03.406709 | orchestrator | ok: [testbed-manager]
2026-03-25 02:06:03.406725 | orchestrator | ok: [testbed-node-5]
2026-03-25 02:06:03.406741 | orchestrator | ok: [testbed-node-0]
2026-03-25 02:06:03.406758 | orchestrator | ok: [testbed-node-1]
2026-03-25 02:06:03.406774 | orchestrator | ok: [testbed-node-2]
2026-03-25 02:06:03.406790 | orchestrator |
2026-03-25 02:06:03.406807 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2026-03-25 02:06:03.406823 | orchestrator | Wednesday 25 March 2026 02:06:02 +0000 (0:00:00.590) 0:00:23.996 *******
2026-03-25 02:06:03.406839 | orchestrator | ok: [testbed-node-3]
2026-03-25 02:06:03.406849 | orchestrator | ok: [testbed-node-4]
2026-03-25 02:06:03.406858 | orchestrator | ok: [testbed-manager]
2026-03-25 02:06:03.406878 | orchestrator | ok: [testbed-node-5]
2026-03-25 02:06:03.406900 | orchestrator | changed: [testbed-node-0]
2026-03-25 02:06:44.552939 | orchestrator | changed: [testbed-node-1]
2026-03-25 02:06:44.553076 | orchestrator | changed: [testbed-node-2]
2026-03-25 02:06:44.553086 | orchestrator |
2026-03-25 02:06:44.553093 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2026-03-25 02:06:44.553101 | orchestrator | Wednesday 25 March 2026 02:06:03 +0000 (0:00:01.132) 0:00:25.129 *******
2026-03-25 02:06:44.553106 | orchestrator | ok: [testbed-node-3]
2026-03-25 02:06:44.553112 | orchestrator | ok: [testbed-node-4]
2026-03-25 02:06:44.553118 | orchestrator | ok: [testbed-node-5]
2026-03-25 02:06:44.553124 | orchestrator | changed: [testbed-manager]
2026-03-25 02:06:44.553130 | orchestrator | changed: [testbed-node-0]
2026-03-25 02:06:44.553136 | orchestrator | changed: [testbed-node-1]
2026-03-25 02:06:44.553141 | orchestrator | changed: [testbed-node-2]
2026-03-25 02:06:44.553147 | orchestrator |
2026-03-25 02:06:44.553153 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] *****
2026-03-25 02:06:44.553158 | orchestrator | Wednesday 25 March 2026 02:06:18 +0000 (0:00:14.961) 0:00:40.090 *******
2026-03-25 02:06:44.553164 | orchestrator | ok: [testbed-manager]
2026-03-25 02:06:44.553187 | orchestrator | ok: [testbed-node-3]
2026-03-25 02:06:44.553193 | orchestrator | ok: [testbed-node-4]
2026-03-25 02:06:44.553198 | orchestrator | ok: [testbed-node-5]
2026-03-25 02:06:44.553204 | orchestrator | ok: [testbed-node-0]
2026-03-25 02:06:44.553209 | orchestrator | ok: [testbed-node-1]
2026-03-25 02:06:44.553214 | orchestrator | ok: [testbed-node-2]
2026-03-25 02:06:44.553220 | orchestrator |
2026-03-25 02:06:44.553225 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] *****
2026-03-25 02:06:44.553231 | orchestrator | Wednesday 25 March 2026 02:06:18 +0000 (0:00:00.299) 0:00:40.390 *******
2026-03-25 02:06:44.553236 | orchestrator | ok: [testbed-manager]
2026-03-25 02:06:44.553242 | orchestrator | ok: [testbed-node-3]
2026-03-25 02:06:44.553247 | orchestrator | ok: [testbed-node-4]
2026-03-25 02:06:44.553252 | orchestrator | ok: [testbed-node-5]
2026-03-25 02:06:44.553257 | orchestrator | ok: [testbed-node-0]
2026-03-25 02:06:44.553263 | orchestrator | ok: [testbed-node-1]
2026-03-25 02:06:44.553268 | orchestrator | ok: [testbed-node-2]
2026-03-25 02:06:44.553273 | orchestrator |
2026-03-25 02:06:44.553279 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] ***
2026-03-25 02:06:44.553284 | orchestrator | Wednesday 25 March 2026 02:06:18 +0000 (0:00:00.262) 0:00:40.652 *******
2026-03-25 02:06:44.553290 | orchestrator | ok: [testbed-manager]
2026-03-25 02:06:44.553295 | orchestrator | ok: [testbed-node-3]
2026-03-25 02:06:44.553300 | orchestrator | ok: [testbed-node-4]
2026-03-25 02:06:44.553306 | orchestrator | ok: [testbed-node-5]
2026-03-25 02:06:44.553311 | orchestrator | ok: [testbed-node-0]
2026-03-25 02:06:44.553317 | orchestrator | ok: [testbed-node-1]
2026-03-25 02:06:44.553323 | orchestrator | ok: [testbed-node-2]
2026-03-25 02:06:44.553328 | orchestrator |
2026-03-25 02:06:44.553333 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] ****
2026-03-25 02:06:44.553339 | orchestrator | Wednesday 25 March 2026 02:06:19 +0000 (0:00:00.269) 0:00:40.922 *******
2026-03-25 02:06:44.553346 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-25 02:06:44.553354 | orchestrator |
2026-03-25 02:06:44.553359 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************
2026-03-25 02:06:44.553364 | orchestrator | Wednesday 25 March 2026 02:06:19 +0000 (0:00:00.357) 0:00:41.280 *******
2026-03-25 02:06:44.553370 | orchestrator | ok: [testbed-node-3]
2026-03-25 02:06:44.553375 | orchestrator | ok: [testbed-node-4]
2026-03-25 02:06:44.553381 | orchestrator | ok: [testbed-node-5]
2026-03-25 02:06:44.553386 | orchestrator | ok: [testbed-node-0]
2026-03-25 02:06:44.553391 | orchestrator | ok: [testbed-node-1]
2026-03-25 02:06:44.553397 | orchestrator | ok: [testbed-node-2]
2026-03-25 02:06:44.553402 | orchestrator | ok: [testbed-manager]
2026-03-25 02:06:44.553407 | orchestrator |
2026-03-25 02:06:44.553413 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] ***********
2026-03-25 02:06:44.553418 | orchestrator | Wednesday 25 March 2026 02:06:21 +0000 (0:00:01.588) 0:00:42.868 *******
2026-03-25 02:06:44.553424 | orchestrator | changed: [testbed-node-3]
2026-03-25 02:06:44.553429 | orchestrator | changed: [testbed-manager]
2026-03-25 02:06:44.553435 | orchestrator | changed: [testbed-node-4]
2026-03-25 02:06:44.553440 | orchestrator | changed: [testbed-node-5]
2026-03-25 02:06:44.553445 | orchestrator | changed: [testbed-node-0]
2026-03-25 02:06:44.553451 | orchestrator | changed: [testbed-node-1]
2026-03-25 02:06:44.553456 | orchestrator | changed: [testbed-node-2]
2026-03-25 02:06:44.553461 | orchestrator |
2026-03-25 02:06:44.553467 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] *************************
2026-03-25 02:06:44.553482 | orchestrator | Wednesday 25 March 2026 02:06:22 +0000 (0:00:01.038) 0:00:43.906 *******
2026-03-25 02:06:44.553488 | orchestrator | ok: [testbed-node-3]
2026-03-25 02:06:44.553494 | orchestrator | ok: [testbed-manager]
2026-03-25 02:06:44.553501 | orchestrator | ok: [testbed-node-4]
2026-03-25 02:06:44.553513 | orchestrator | ok: [testbed-node-5]
2026-03-25 02:06:44.553519 | orchestrator | ok: [testbed-node-0]
2026-03-25 02:06:44.553525 | orchestrator | ok: [testbed-node-1]
2026-03-25 02:06:44.553531 | orchestrator | ok: [testbed-node-2]
2026-03-25 02:06:44.553537 | orchestrator |
2026-03-25 02:06:44.553543 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] **************************
2026-03-25 02:06:44.553550 | orchestrator | Wednesday 25 March 2026 02:06:22 +0000 (0:00:00.788) 0:00:44.694 *******
2026-03-25 02:06:44.553556 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-25 02:06:44.553564 | orchestrator |
2026-03-25 02:06:44.553571 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] ***
2026-03-25 02:06:44.553578 | orchestrator | Wednesday 25 March 2026 02:06:23 +0000 (0:00:00.354) 0:00:45.049 *******
2026-03-25 02:06:44.553584 | orchestrator | changed: [testbed-node-3]
2026-03-25 02:06:44.553591 | orchestrator | changed: [testbed-node-4]
2026-03-25 02:06:44.553597 | orchestrator | changed: [testbed-node-5]
2026-03-25 02:06:44.553603 | orchestrator | changed: [testbed-node-0]
2026-03-25 02:06:44.553609 | orchestrator | changed: [testbed-manager]
2026-03-25 02:06:44.553616 | orchestrator | changed: [testbed-node-1]
2026-03-25 02:06:44.553622 | orchestrator | changed: [testbed-node-2]
2026-03-25 02:06:44.553628 | orchestrator |
2026-03-25 02:06:44.553646 | orchestrator | TASK [osism.services.rsyslog :
Include additional log server tasks] ************ 2026-03-25 02:06:44.553653 | orchestrator | Wednesday 25 March 2026 02:06:24 +0000 (0:00:01.014) 0:00:46.064 ******* 2026-03-25 02:06:44.553659 | orchestrator | skipping: [testbed-manager] 2026-03-25 02:06:44.553666 | orchestrator | skipping: [testbed-node-3] 2026-03-25 02:06:44.553672 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:06:44.553678 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:06:44.553684 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:06:44.553690 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:06:44.553700 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:06:44.553709 | orchestrator | 2026-03-25 02:06:44.553723 | orchestrator | TASK [osism.services.rsyslog : Include logrotate tasks] ************************ 2026-03-25 02:06:44.553735 | orchestrator | Wednesday 25 March 2026 02:06:24 +0000 (0:00:00.321) 0:00:46.386 ******* 2026-03-25 02:06:44.553744 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/logrotate.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-25 02:06:44.553754 | orchestrator | 2026-03-25 02:06:44.553763 | orchestrator | TASK [osism.services.rsyslog : Ensure logrotate package is installed] ********** 2026-03-25 02:06:44.553771 | orchestrator | Wednesday 25 March 2026 02:06:24 +0000 (0:00:00.349) 0:00:46.735 ******* 2026-03-25 02:06:44.553780 | orchestrator | ok: [testbed-node-4] 2026-03-25 02:06:44.553787 | orchestrator | ok: [testbed-node-5] 2026-03-25 02:06:44.553795 | orchestrator | ok: [testbed-node-3] 2026-03-25 02:06:44.553803 | orchestrator | ok: [testbed-node-0] 2026-03-25 02:06:44.553811 | orchestrator | ok: [testbed-node-1] 2026-03-25 02:06:44.553819 | orchestrator | ok: [testbed-node-2] 2026-03-25 02:06:44.553828 | orchestrator | ok: [testbed-manager] 2026-03-25 02:06:44.553836 | 
orchestrator | 2026-03-25 02:06:44.553844 | orchestrator | TASK [osism.services.rsyslog : Configure logrotate for rsyslog] **************** 2026-03-25 02:06:44.553852 | orchestrator | Wednesday 25 March 2026 02:06:26 +0000 (0:00:01.522) 0:00:48.257 ******* 2026-03-25 02:06:44.553861 | orchestrator | changed: [testbed-node-3] 2026-03-25 02:06:44.553869 | orchestrator | changed: [testbed-manager] 2026-03-25 02:06:44.553878 | orchestrator | changed: [testbed-node-4] 2026-03-25 02:06:44.553887 | orchestrator | changed: [testbed-node-5] 2026-03-25 02:06:44.553895 | orchestrator | changed: [testbed-node-0] 2026-03-25 02:06:44.553904 | orchestrator | changed: [testbed-node-2] 2026-03-25 02:06:44.553912 | orchestrator | changed: [testbed-node-1] 2026-03-25 02:06:44.553927 | orchestrator | 2026-03-25 02:06:44.553936 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] **************** 2026-03-25 02:06:44.553944 | orchestrator | Wednesday 25 March 2026 02:06:27 +0000 (0:00:01.113) 0:00:49.371 ******* 2026-03-25 02:06:44.553953 | orchestrator | changed: [testbed-node-4] 2026-03-25 02:06:44.553961 | orchestrator | changed: [testbed-node-5] 2026-03-25 02:06:44.553987 | orchestrator | changed: [testbed-node-0] 2026-03-25 02:06:44.553996 | orchestrator | changed: [testbed-node-2] 2026-03-25 02:06:44.554004 | orchestrator | changed: [testbed-node-3] 2026-03-25 02:06:44.554073 | orchestrator | changed: [testbed-node-1] 2026-03-25 02:06:44.554085 | orchestrator | changed: [testbed-manager] 2026-03-25 02:06:44.554094 | orchestrator | 2026-03-25 02:06:44.554103 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] ***************************** 2026-03-25 02:06:44.554112 | orchestrator | Wednesday 25 March 2026 02:06:41 +0000 (0:00:13.835) 0:01:03.206 ******* 2026-03-25 02:06:44.554121 | orchestrator | ok: [testbed-node-5] 2026-03-25 02:06:44.554130 | orchestrator | ok: [testbed-node-0] 2026-03-25 02:06:44.554139 | orchestrator | ok: 
[testbed-node-4] 2026-03-25 02:06:44.554148 | orchestrator | ok: [testbed-node-2] 2026-03-25 02:06:44.554157 | orchestrator | ok: [testbed-node-3] 2026-03-25 02:06:44.554165 | orchestrator | ok: [testbed-node-1] 2026-03-25 02:06:44.554175 | orchestrator | ok: [testbed-manager] 2026-03-25 02:06:44.554181 | orchestrator | 2026-03-25 02:06:44.554187 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ****************** 2026-03-25 02:06:44.554192 | orchestrator | Wednesday 25 March 2026 02:06:42 +0000 (0:00:01.219) 0:01:04.425 ******* 2026-03-25 02:06:44.554197 | orchestrator | ok: [testbed-node-3] 2026-03-25 02:06:44.554203 | orchestrator | ok: [testbed-node-4] 2026-03-25 02:06:44.554208 | orchestrator | ok: [testbed-manager] 2026-03-25 02:06:44.554213 | orchestrator | ok: [testbed-node-5] 2026-03-25 02:06:44.554219 | orchestrator | ok: [testbed-node-0] 2026-03-25 02:06:44.554224 | orchestrator | ok: [testbed-node-1] 2026-03-25 02:06:44.554229 | orchestrator | ok: [testbed-node-2] 2026-03-25 02:06:44.554234 | orchestrator | 2026-03-25 02:06:44.554240 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] ***** 2026-03-25 02:06:44.554245 | orchestrator | Wednesday 25 March 2026 02:06:43 +0000 (0:00:00.879) 0:01:05.305 ******* 2026-03-25 02:06:44.554256 | orchestrator | ok: [testbed-manager] 2026-03-25 02:06:44.554262 | orchestrator | ok: [testbed-node-3] 2026-03-25 02:06:44.554267 | orchestrator | ok: [testbed-node-4] 2026-03-25 02:06:44.554273 | orchestrator | ok: [testbed-node-5] 2026-03-25 02:06:44.554278 | orchestrator | ok: [testbed-node-0] 2026-03-25 02:06:44.554283 | orchestrator | ok: [testbed-node-1] 2026-03-25 02:06:44.554288 | orchestrator | ok: [testbed-node-2] 2026-03-25 02:06:44.554293 | orchestrator | 2026-03-25 02:06:44.554299 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] *** 2026-03-25 02:06:44.554305 | orchestrator | 
Wednesday 25 March 2026 02:06:43 +0000 (0:00:00.289) 0:01:05.594 ******* 2026-03-25 02:06:44.554310 | orchestrator | ok: [testbed-manager] 2026-03-25 02:06:44.554316 | orchestrator | ok: [testbed-node-3] 2026-03-25 02:06:44.554321 | orchestrator | ok: [testbed-node-4] 2026-03-25 02:06:44.554326 | orchestrator | ok: [testbed-node-5] 2026-03-25 02:06:44.554331 | orchestrator | ok: [testbed-node-0] 2026-03-25 02:06:44.554336 | orchestrator | ok: [testbed-node-1] 2026-03-25 02:06:44.554342 | orchestrator | ok: [testbed-node-2] 2026-03-25 02:06:44.554347 | orchestrator | 2026-03-25 02:06:44.554352 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] **** 2026-03-25 02:06:44.554358 | orchestrator | Wednesday 25 March 2026 02:06:44 +0000 (0:00:00.310) 0:01:05.905 ******* 2026-03-25 02:06:44.554364 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-25 02:06:44.554371 | orchestrator | 2026-03-25 02:06:44.554385 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ******************** 2026-03-25 02:09:02.649373 | orchestrator | Wednesday 25 March 2026 02:06:44 +0000 (0:00:00.374) 0:01:06.280 ******* 2026-03-25 02:09:02.649479 | orchestrator | ok: [testbed-node-3] 2026-03-25 02:09:02.649493 | orchestrator | ok: [testbed-node-4] 2026-03-25 02:09:02.649503 | orchestrator | ok: [testbed-node-5] 2026-03-25 02:09:02.649512 | orchestrator | ok: [testbed-node-0] 2026-03-25 02:09:02.649521 | orchestrator | ok: [testbed-node-1] 2026-03-25 02:09:02.649529 | orchestrator | ok: [testbed-node-2] 2026-03-25 02:09:02.649538 | orchestrator | ok: [testbed-manager] 2026-03-25 02:09:02.649548 | orchestrator | 2026-03-25 02:09:02.649557 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] 
*************************** 2026-03-25 02:09:02.649567 | orchestrator | Wednesday 25 March 2026 02:06:46 +0000 (0:00:01.540) 0:01:07.820 ******* 2026-03-25 02:09:02.649576 | orchestrator | changed: [testbed-node-3] 2026-03-25 02:09:02.649587 | orchestrator | changed: [testbed-node-4] 2026-03-25 02:09:02.649596 | orchestrator | changed: [testbed-node-1] 2026-03-25 02:09:02.649605 | orchestrator | changed: [testbed-node-5] 2026-03-25 02:09:02.649614 | orchestrator | changed: [testbed-node-0] 2026-03-25 02:09:02.649623 | orchestrator | changed: [testbed-node-2] 2026-03-25 02:09:02.649632 | orchestrator | changed: [testbed-manager] 2026-03-25 02:09:02.649640 | orchestrator | 2026-03-25 02:09:02.649650 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] *** 2026-03-25 02:09:02.649660 | orchestrator | Wednesday 25 March 2026 02:06:46 +0000 (0:00:00.569) 0:01:08.390 ******* 2026-03-25 02:09:02.649668 | orchestrator | ok: [testbed-manager] 2026-03-25 02:09:02.649677 | orchestrator | ok: [testbed-node-3] 2026-03-25 02:09:02.649686 | orchestrator | ok: [testbed-node-4] 2026-03-25 02:09:02.649696 | orchestrator | ok: [testbed-node-5] 2026-03-25 02:09:02.649704 | orchestrator | ok: [testbed-node-0] 2026-03-25 02:09:02.649713 | orchestrator | ok: [testbed-node-1] 2026-03-25 02:09:02.649722 | orchestrator | ok: [testbed-node-2] 2026-03-25 02:09:02.649731 | orchestrator | 2026-03-25 02:09:02.649741 | orchestrator | TASK [osism.commons.packages : Update package cache] *************************** 2026-03-25 02:09:02.649750 | orchestrator | Wednesday 25 March 2026 02:06:46 +0000 (0:00:00.244) 0:01:08.635 ******* 2026-03-25 02:09:02.649759 | orchestrator | ok: [testbed-node-3] 2026-03-25 02:09:02.649768 | orchestrator | ok: [testbed-node-4] 2026-03-25 02:09:02.649777 | orchestrator | ok: [testbed-manager] 2026-03-25 02:09:02.649786 | orchestrator | ok: [testbed-node-5] 2026-03-25 02:09:02.649795 | orchestrator | ok: [testbed-node-0] 
2026-03-25 02:09:02.649804 | orchestrator | ok: [testbed-node-1] 2026-03-25 02:09:02.649812 | orchestrator | ok: [testbed-node-2] 2026-03-25 02:09:02.649821 | orchestrator | 2026-03-25 02:09:02.649830 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] ********************** 2026-03-25 02:09:02.649839 | orchestrator | Wednesday 25 March 2026 02:06:48 +0000 (0:00:01.112) 0:01:09.747 ******* 2026-03-25 02:09:02.649848 | orchestrator | changed: [testbed-node-3] 2026-03-25 02:09:02.649857 | orchestrator | changed: [testbed-node-4] 2026-03-25 02:09:02.649866 | orchestrator | changed: [testbed-node-5] 2026-03-25 02:09:02.649875 | orchestrator | changed: [testbed-manager] 2026-03-25 02:09:02.649884 | orchestrator | changed: [testbed-node-0] 2026-03-25 02:09:02.649893 | orchestrator | changed: [testbed-node-1] 2026-03-25 02:09:02.649902 | orchestrator | changed: [testbed-node-2] 2026-03-25 02:09:02.649911 | orchestrator | 2026-03-25 02:09:02.649927 | orchestrator | TASK [osism.commons.packages : Upgrade packages] ******************************* 2026-03-25 02:09:02.649937 | orchestrator | Wednesday 25 March 2026 02:06:49 +0000 (0:00:01.542) 0:01:11.290 ******* 2026-03-25 02:09:02.649948 | orchestrator | ok: [testbed-node-3] 2026-03-25 02:09:02.649958 | orchestrator | ok: [testbed-node-5] 2026-03-25 02:09:02.649996 | orchestrator | ok: [testbed-node-4] 2026-03-25 02:09:02.650008 | orchestrator | ok: [testbed-node-0] 2026-03-25 02:09:02.650065 | orchestrator | ok: [testbed-node-1] 2026-03-25 02:09:02.650075 | orchestrator | ok: [testbed-node-2] 2026-03-25 02:09:02.650084 | orchestrator | ok: [testbed-manager] 2026-03-25 02:09:02.650093 | orchestrator | 2026-03-25 02:09:02.650102 | orchestrator | TASK [osism.commons.packages : Download required packages] ********************* 2026-03-25 02:09:02.650133 | orchestrator | Wednesday 25 March 2026 02:06:51 +0000 (0:00:02.294) 0:01:13.585 ******* 2026-03-25 02:09:02.650142 | orchestrator | ok: 
[testbed-manager] 2026-03-25 02:09:02.650151 | orchestrator | ok: [testbed-node-4] 2026-03-25 02:09:02.650160 | orchestrator | ok: [testbed-node-0] 2026-03-25 02:09:02.650168 | orchestrator | ok: [testbed-node-5] 2026-03-25 02:09:02.650177 | orchestrator | ok: [testbed-node-2] 2026-03-25 02:09:02.650185 | orchestrator | ok: [testbed-node-1] 2026-03-25 02:09:02.650194 | orchestrator | ok: [testbed-node-3] 2026-03-25 02:09:02.650202 | orchestrator | 2026-03-25 02:09:02.650211 | orchestrator | TASK [osism.commons.packages : Install required packages] ********************** 2026-03-25 02:09:02.650220 | orchestrator | Wednesday 25 March 2026 02:07:30 +0000 (0:00:38.967) 0:01:52.552 ******* 2026-03-25 02:09:02.650228 | orchestrator | changed: [testbed-manager] 2026-03-25 02:09:02.650237 | orchestrator | changed: [testbed-node-3] 2026-03-25 02:09:02.650246 | orchestrator | changed: [testbed-node-2] 2026-03-25 02:09:02.650255 | orchestrator | changed: [testbed-node-0] 2026-03-25 02:09:02.650264 | orchestrator | changed: [testbed-node-4] 2026-03-25 02:09:02.650272 | orchestrator | changed: [testbed-node-5] 2026-03-25 02:09:02.650281 | orchestrator | changed: [testbed-node-1] 2026-03-25 02:09:02.650290 | orchestrator | 2026-03-25 02:09:02.650299 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] ********* 2026-03-25 02:09:02.650308 | orchestrator | Wednesday 25 March 2026 02:08:45 +0000 (0:01:14.424) 0:03:06.976 ******* 2026-03-25 02:09:02.650317 | orchestrator | ok: [testbed-node-3] 2026-03-25 02:09:02.650326 | orchestrator | ok: [testbed-node-0] 2026-03-25 02:09:02.650334 | orchestrator | ok: [testbed-manager] 2026-03-25 02:09:02.650343 | orchestrator | ok: [testbed-node-2] 2026-03-25 02:09:02.650352 | orchestrator | ok: [testbed-node-4] 2026-03-25 02:09:02.650361 | orchestrator | ok: [testbed-node-5] 2026-03-25 02:09:02.650369 | orchestrator | ok: [testbed-node-1] 2026-03-25 02:09:02.650378 | orchestrator | 2026-03-25 02:09:02.650387 | 
orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] *** 2026-03-25 02:09:02.650396 | orchestrator | Wednesday 25 March 2026 02:08:46 +0000 (0:00:01.601) 0:03:08.578 ******* 2026-03-25 02:09:02.650405 | orchestrator | ok: [testbed-node-3] 2026-03-25 02:09:02.650413 | orchestrator | ok: [testbed-node-4] 2026-03-25 02:09:02.650422 | orchestrator | ok: [testbed-node-5] 2026-03-25 02:09:02.650431 | orchestrator | ok: [testbed-node-0] 2026-03-25 02:09:02.650439 | orchestrator | ok: [testbed-node-2] 2026-03-25 02:09:02.650448 | orchestrator | ok: [testbed-node-1] 2026-03-25 02:09:02.650456 | orchestrator | changed: [testbed-manager] 2026-03-25 02:09:02.650465 | orchestrator | 2026-03-25 02:09:02.650474 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] ***************************** 2026-03-25 02:09:02.650483 | orchestrator | Wednesday 25 March 2026 02:09:00 +0000 (0:00:13.473) 0:03:22.052 ******* 2026-03-25 02:09:02.650521 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]}) 2026-03-25 02:09:02.650549 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 
'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]}) 2026-03-25 02:09:02.650569 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]}) 2026-03-25 02:09:02.650580 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2026-03-25 02:09:02.650589 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'network', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2026-03-25 02:09:02.650598 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]}) 2026-03-25 02:09:02.650607 | orchestrator | 2026-03-25 02:09:02.650616 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] *********** 2026-03-25 02:09:02.650625 | orchestrator | Wednesday 25 March 2026 02:09:00 +0000 (0:00:00.463) 0:03:22.515 ******* 2026-03-25 02:09:02.650634 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 
262144})  2026-03-25 02:09:02.650643 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-03-25 02:09:02.650652 | orchestrator | skipping: [testbed-manager] 2026-03-25 02:09:02.650661 | orchestrator | skipping: [testbed-node-3] 2026-03-25 02:09:02.650670 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-03-25 02:09:02.650683 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-03-25 02:09:02.650698 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:09:02.650713 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:09:02.650729 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-25 02:09:02.650744 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-25 02:09:02.650757 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-25 02:09:02.650772 | orchestrator | 2026-03-25 02:09:02.650786 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] **************** 2026-03-25 02:09:02.650799 | orchestrator | Wednesday 25 March 2026 02:09:02 +0000 (0:00:01.774) 0:03:24.290 ******* 2026-03-25 02:09:02.650813 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-03-25 02:09:02.650827 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-03-25 02:09:02.650839 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-03-25 02:09:02.650855 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-03-25 02:09:02.650869 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 
'value': 16777216})  2026-03-25 02:09:02.650893 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-03-25 02:09:06.960817 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-03-25 02:09:06.960909 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-03-25 02:09:06.960941 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-03-25 02:09:06.960951 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-03-25 02:09:06.960960 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-03-25 02:09:06.960998 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-03-25 02:09:06.961007 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-03-25 02:09:06.961015 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-03-25 02:09:06.961024 | orchestrator | skipping: [testbed-manager] 2026-03-25 02:09:06.961034 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-03-25 02:09:06.961042 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-03-25 02:09:06.961050 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-03-25 02:09:06.961059 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-03-25 02:09:06.961067 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-03-25 02:09:06.961075 | orchestrator | skipping: 
[testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-03-25 02:09:06.961083 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-03-25 02:09:06.961091 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-03-25 02:09:06.961099 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-03-25 02:09:06.961107 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-03-25 02:09:06.961115 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-03-25 02:09:06.961123 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-03-25 02:09:06.961131 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-03-25 02:09:06.961138 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-03-25 02:09:06.961146 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-03-25 02:09:06.961154 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-03-25 02:09:06.961162 | orchestrator | skipping: [testbed-node-3] 2026-03-25 02:09:06.961171 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:09:06.961179 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-03-25 02:09:06.961187 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-03-25 02:09:06.961195 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-03-25 02:09:06.961202 | orchestrator | 
skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-03-25 02:09:06.961221 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-03-25 02:09:06.961230 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-03-25 02:09:06.961238 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-03-25 02:09:06.961245 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-03-25 02:09:06.961254 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-03-25 02:09:06.961268 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-03-25 02:09:06.961276 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:09:06.961285 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2026-03-25 02:09:06.961293 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2026-03-25 02:09:06.961300 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2026-03-25 02:09:06.961308 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2026-03-25 02:09:06.961316 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2026-03-25 02:09:06.961339 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2026-03-25 02:09:06.961347 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2026-03-25 02:09:06.961355 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 
'value': 3})
2026-03-25 02:09:06.961363 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-03-25 02:09:06.961373 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-03-25 02:09:06.961383 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-03-25 02:09:06.961393 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-03-25 02:09:06.961403 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-03-25 02:09:06.961412 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-03-25 02:09:06.961422 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-03-25 02:09:06.961431 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-03-25 02:09:06.961440 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-03-25 02:09:06.961449 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-03-25 02:09:06.961459 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-03-25 02:09:06.961468 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-03-25 02:09:06.961478 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-03-25 02:09:06.961487 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-03-25 02:09:06.961496 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-03-25 02:09:06.961505 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-03-25 02:09:06.961515 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-03-25 02:09:06.961524 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-03-25 02:09:06.961533 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-03-25 02:09:06.961542 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-03-25 02:09:06.961552 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-03-25 02:09:06.961562 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-03-25 02:09:06.961577 | orchestrator |
2026-03-25 02:09:06.961587 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] *****************
2026-03-25 02:09:06.961596 | orchestrator | Wednesday 25 March 2026 02:09:05 +0000 (0:00:03.314) 0:03:27.604 *******
2026-03-25 02:09:06.961604 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1})
2026-03-25 02:09:06.961611 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1})
2026-03-25 02:09:06.961619 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1})
2026-03-25 02:09:06.961627 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1})
2026-03-25 02:09:06.961639 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1})
2026-03-25 02:09:06.961647 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1})
2026-03-25 02:09:06.961655 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1})
2026-03-25 02:09:06.961662 | orchestrator |
2026-03-25 02:09:06.961670 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] *****************
2026-03-25 02:09:06.961678 | orchestrator | Wednesday 25 March 2026 02:09:06 +0000 (0:00:00.609) 0:03:28.213 *******
2026-03-25 02:09:06.961686 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-25 02:09:06.961694 | orchestrator | skipping: [testbed-manager]
2026-03-25 02:09:06.961703 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-25 02:09:06.961710 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-25 02:09:06.961718 | orchestrator | skipping: [testbed-node-0]
2026-03-25 02:09:06.961727 | orchestrator | skipping: [testbed-node-1]
2026-03-25 02:09:06.961735 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-25 02:09:06.961743 | orchestrator | skipping: [testbed-node-2]
2026-03-25 02:09:06.961751 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-25 02:09:06.961759 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-25 02:09:06.961772 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-25 02:09:20.251628 | orchestrator |
2026-03-25 02:09:20.251728 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on network] *****************
2026-03-25 02:09:20.251737 | orchestrator | Wednesday 25 March 2026 02:09:06 +0000 (0:00:00.473) 0:03:28.687 *******
2026-03-25 02:09:20.251744 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-25 02:09:20.251752 | orchestrator | skipping: [testbed-manager]
2026-03-25 02:09:20.251759 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-25 02:09:20.251765 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-25 02:09:20.251772 | orchestrator | skipping: [testbed-node-3]
2026-03-25 02:09:20.251778 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-25 02:09:20.251783 | orchestrator | skipping: [testbed-node-4]
2026-03-25 02:09:20.251789 | orchestrator | skipping: [testbed-node-5]
2026-03-25 02:09:20.251796 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-25 02:09:20.251802 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-25 02:09:20.251808 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-25 02:09:20.251813 | orchestrator |
2026-03-25 02:09:20.251819 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] ****************
2026-03-25 02:09:20.251846 | orchestrator | Wednesday 25 March 2026 02:09:07 +0000 (0:00:00.591) 0:03:29.279 *******
2026-03-25 02:09:20.251852 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-03-25 02:09:20.251858 | orchestrator | skipping: [testbed-manager]
2026-03-25 02:09:20.251864 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-03-25 02:09:20.251870 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-03-25 02:09:20.251876 | orchestrator | skipping: [testbed-node-0]
2026-03-25 02:09:20.251881 | orchestrator | skipping: [testbed-node-1]
2026-03-25 02:09:20.251887 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-03-25 02:09:20.251892 | orchestrator | skipping: [testbed-node-2]
2026-03-25 02:09:20.251898 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-03-25 02:09:20.251904 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-03-25 02:09:20.251909 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-03-25 02:09:20.251915 | orchestrator |
2026-03-25 02:09:20.251921 | orchestrator | TASK [osism.commons.limits : Include limits tasks] *****************************
2026-03-25 02:09:20.251926 | orchestrator | Wednesday 25 March 2026 02:09:08 +0000 (0:00:00.592) 0:03:29.871 *******
2026-03-25 02:09:20.251932 | orchestrator | skipping: [testbed-manager]
2026-03-25 02:09:20.251938 | orchestrator | skipping: [testbed-node-3]
2026-03-25 02:09:20.251943 | orchestrator | skipping: [testbed-node-4]
2026-03-25 02:09:20.251949 | orchestrator | skipping: [testbed-node-5]
2026-03-25 02:09:20.251955 | orchestrator | skipping: [testbed-node-0]
2026-03-25 02:09:20.251962 | orchestrator | skipping: [testbed-node-1]
2026-03-25 02:09:20.252006 | orchestrator | skipping: [testbed-node-2]
2026-03-25 02:09:20.252012 | orchestrator |
2026-03-25 02:09:20.252018 | orchestrator | TASK [osism.commons.services : Populate service facts] *************************
2026-03-25 02:09:20.252025 | orchestrator | Wednesday 25 March 2026 02:09:08 +0000 (0:00:00.318) 0:03:30.189 *******
2026-03-25 02:09:20.252030 | orchestrator | ok: [testbed-node-0]
2026-03-25 02:09:20.252037 | orchestrator | ok: [testbed-node-3]
2026-03-25 02:09:20.252042 | orchestrator | ok: [testbed-node-2]
2026-03-25 02:09:20.252048 | orchestrator | ok: [testbed-node-4]
2026-03-25 02:09:20.252054 | orchestrator | ok: [testbed-node-5]
2026-03-25 02:09:20.252059 | orchestrator | ok: [testbed-node-1]
2026-03-25 02:09:20.252065 | orchestrator | ok: [testbed-manager]
2026-03-25 02:09:20.252070 | orchestrator |
2026-03-25 02:09:20.252076 | orchestrator | TASK [osism.commons.services : Check services] *********************************
2026-03-25 02:09:20.252082 | orchestrator | Wednesday 25 March 2026 02:09:14 +0000 (0:00:05.594) 0:03:35.784 *******
2026-03-25 02:09:20.252088 | orchestrator | skipping: [testbed-manager] => (item=nscd)
2026-03-25 02:09:20.252094 | orchestrator | skipping: [testbed-node-3] => (item=nscd)
2026-03-25 02:09:20.252099 | orchestrator | skipping: [testbed-manager]
2026-03-25 02:09:20.252105 | orchestrator | skipping: [testbed-node-4] => (item=nscd)
2026-03-25 02:09:20.252111 | orchestrator | skipping: [testbed-node-3]
2026-03-25 02:09:20.252117 | orchestrator | skipping: [testbed-node-5] => (item=nscd)
2026-03-25 02:09:20.252122 | orchestrator | skipping: [testbed-node-4]
2026-03-25 02:09:20.252128 | orchestrator | skipping: [testbed-node-0] => (item=nscd)
2026-03-25 02:09:20.252134 | orchestrator | skipping: [testbed-node-5]
2026-03-25 02:09:20.252140 | orchestrator | skipping: [testbed-node-0]
2026-03-25 02:09:20.252160 | orchestrator | skipping: [testbed-node-1] => (item=nscd)
2026-03-25 02:09:20.252165 | orchestrator | skipping: [testbed-node-1]
2026-03-25 02:09:20.252171 | orchestrator | skipping: [testbed-node-2] => (item=nscd)
2026-03-25 02:09:20.252177 | orchestrator | skipping: [testbed-node-2]
2026-03-25 02:09:20.252182 | orchestrator |
2026-03-25 02:09:20.252194 | orchestrator | TASK [osism.commons.services : Start/enable required services] *****************
2026-03-25 02:09:20.252200 | orchestrator | Wednesday 25 March 2026 02:09:14 +0000 (0:00:00.358) 0:03:36.143 *******
2026-03-25 02:09:20.252206 | orchestrator | ok: [testbed-node-3] => (item=cron)
2026-03-25 02:09:20.252212 | orchestrator | ok: [testbed-node-4] => (item=cron)
2026-03-25 02:09:20.252218 | orchestrator | ok: [testbed-manager] => (item=cron)
2026-03-25 02:09:20.252239 | orchestrator | ok: [testbed-node-0] => (item=cron)
2026-03-25 02:09:20.252246 | orchestrator | ok: [testbed-node-5] => (item=cron)
2026-03-25 02:09:20.252252 | orchestrator | ok: [testbed-node-1] => (item=cron)
2026-03-25 02:09:20.252257 | orchestrator | ok: [testbed-node-2] => (item=cron)
2026-03-25 02:09:20.252263 | orchestrator |
2026-03-25 02:09:20.252269 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ******
2026-03-25 02:09:20.252275 | orchestrator | Wednesday 25 March 2026 02:09:15 +0000 (0:00:01.200) 0:03:37.343 *******
2026-03-25 02:09:20.252283 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-25 02:09:20.252290 | orchestrator |
2026-03-25 02:09:20.252296 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] *************************
2026-03-25 02:09:20.252303 | orchestrator | Wednesday 25 March 2026 02:09:16 +0000 (0:00:00.447) 0:03:37.791 *******
2026-03-25 02:09:20.252309 | orchestrator | ok: [testbed-node-3]
2026-03-25 02:09:20.252314 | orchestrator | ok: [testbed-node-4]
2026-03-25 02:09:20.252320 | orchestrator | ok: [testbed-node-5]
2026-03-25 02:09:20.252325 | orchestrator | ok: [testbed-manager]
2026-03-25 02:09:20.252331 | orchestrator | ok: [testbed-node-0]
2026-03-25 02:09:20.252337 | orchestrator | ok: [testbed-node-2]
2026-03-25 02:09:20.252342 | orchestrator | ok: [testbed-node-1]
2026-03-25 02:09:20.252347 | orchestrator |
2026-03-25 02:09:20.252353 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] *************
2026-03-25 02:09:20.252359 | orchestrator | Wednesday 25 March 2026 02:09:17 +0000 (0:00:01.281) 0:03:39.072 *******
2026-03-25 02:09:20.252364 | orchestrator | ok: [testbed-manager]
2026-03-25 02:09:20.252370 | orchestrator | ok: [testbed-node-3]
2026-03-25 02:09:20.252376 | orchestrator | ok: [testbed-node-4]
2026-03-25 02:09:20.252381 | orchestrator | ok: [testbed-node-5]
2026-03-25 02:09:20.252386 | orchestrator | ok: [testbed-node-0]
2026-03-25 02:09:20.252392 | orchestrator | ok: [testbed-node-1]
2026-03-25 02:09:20.252398 | orchestrator | ok: [testbed-node-2]
2026-03-25 02:09:20.252403 | orchestrator |
2026-03-25 02:09:20.252409 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] **************
2026-03-25 02:09:20.252415 | orchestrator | Wednesday 25 March 2026 02:09:17 +0000 (0:00:00.640) 0:03:39.712 *******
2026-03-25 02:09:20.252421 | orchestrator | changed: [testbed-manager]
2026-03-25 02:09:20.252427 | orchestrator | changed: [testbed-node-3]
2026-03-25 02:09:20.252432 | orchestrator | changed: [testbed-node-4]
2026-03-25 02:09:20.252438 | orchestrator | changed: [testbed-node-5]
2026-03-25 02:09:20.252444 | orchestrator | changed: [testbed-node-0]
2026-03-25 02:09:20.252450 | orchestrator | changed: [testbed-node-1]
2026-03-25 02:09:20.252455 | orchestrator | changed: [testbed-node-2]
2026-03-25 02:09:20.252460 | orchestrator |
2026-03-25 02:09:20.252466 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] **********
2026-03-25 02:09:20.252472 | orchestrator | Wednesday 25 March 2026 02:09:18 +0000 (0:00:00.662) 0:03:40.375 *******
2026-03-25 02:09:20.252477 | orchestrator | ok: [testbed-node-5]
2026-03-25 02:09:20.252483 | orchestrator | ok: [testbed-node-4]
2026-03-25 02:09:20.252489 | orchestrator | ok: [testbed-node-3]
2026-03-25 02:09:20.252495 | orchestrator | ok: [testbed-node-0]
2026-03-25 02:09:20.252501 | orchestrator | ok: [testbed-manager]
2026-03-25 02:09:20.252506 | orchestrator | ok: [testbed-node-1]
2026-03-25 02:09:20.252512 | orchestrator | ok: [testbed-node-2]
2026-03-25 02:09:20.252518 | orchestrator |
2026-03-25 02:09:20.252524 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] ****************************
2026-03-25 02:09:20.252534 | orchestrator | Wednesday 25 March 2026 02:09:19 +0000 (0:00:00.627) 0:03:41.002 *******
2026-03-25 02:09:20.252546 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1774403153.43367, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-25 02:09:20.252554 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1774403124.5199928, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-25 02:09:20.252560 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1774403146.1427307, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-25 02:09:20.252580 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1774403119.65384, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-25 02:09:25.034734 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1774403092.0106478, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-25 02:09:25.034817 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1774403124.9509323, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-25 02:09:25.034826 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1774403154.0800924, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-25 02:09:25.034850 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-25 02:09:25.034868 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-25 02:09:25.034875 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-25 02:09:25.034881 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-25 02:09:25.034905 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-25 02:09:25.034912 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-25 02:09:25.034918 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-25 02:09:25.034931 | orchestrator |
2026-03-25 02:09:25.034939 | orchestrator | TASK [osism.commons.motd : Copy motd file] *************************************
2026-03-25 02:09:25.034946 | orchestrator | Wednesday 25 March 2026 02:09:20 +0000 (0:00:00.973) 0:03:41.975 *******
2026-03-25 02:09:25.034952 | orchestrator | changed: [testbed-node-3]
2026-03-25 02:09:25.034959 | orchestrator | changed: [testbed-node-4]
2026-03-25 02:09:25.034965 | orchestrator | changed: [testbed-manager]
2026-03-25 02:09:25.035121 | orchestrator | changed: [testbed-node-0]
2026-03-25 02:09:25.035132 | orchestrator | changed: [testbed-node-5]
2026-03-25 02:09:25.035138 | orchestrator | changed: [testbed-node-1]
2026-03-25 02:09:25.035144 | orchestrator | changed: [testbed-node-2]
2026-03-25 02:09:25.035150 | orchestrator |
2026-03-25 02:09:25.035156 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************
2026-03-25 02:09:25.035176 | orchestrator | Wednesday 25 March 2026 02:09:21 +0000 (0:00:01.044) 0:03:43.019 *******
2026-03-25 02:09:25.035182 | orchestrator | changed: [testbed-manager]
2026-03-25 02:09:25.035187 | orchestrator | changed: [testbed-node-3]
2026-03-25 02:09:25.035193 | orchestrator | changed: [testbed-node-4]
2026-03-25 02:09:25.035206 | orchestrator | changed: [testbed-node-5]
2026-03-25 02:09:25.035212 | orchestrator | changed: [testbed-node-0]
2026-03-25 02:09:25.035218 | orchestrator | changed: [testbed-node-1]
2026-03-25 02:09:25.035224 | orchestrator | changed: [testbed-node-2]
2026-03-25 02:09:25.035230 | orchestrator |
2026-03-25 02:09:25.035242 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ********************************
2026-03-25 02:09:25.035248 | orchestrator | Wednesday 25 March 2026 02:09:22 +0000 (0:00:01.111) 0:03:44.131 *******
2026-03-25 02:09:25.035254 | orchestrator | changed: [testbed-manager]
2026-03-25 02:09:25.035259 | orchestrator | changed: [testbed-node-3]
2026-03-25 02:09:25.035265 | orchestrator | changed: [testbed-node-5]
2026-03-25 02:09:25.035271 | orchestrator | changed: [testbed-node-4]
2026-03-25 02:09:25.035278 | orchestrator | changed: [testbed-node-0]
2026-03-25 02:09:25.035284 | orchestrator | changed: [testbed-node-1]
2026-03-25 02:09:25.035290 | orchestrator | changed: [testbed-node-2]
2026-03-25 02:09:25.035296 | orchestrator |
2026-03-25 02:09:25.035302 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ********************
2026-03-25 02:09:25.035309 | orchestrator | Wednesday 25 March 2026 02:09:23 +0000 (0:00:01.159) 0:03:45.290 *******
2026-03-25 02:09:25.035315 | orchestrator | skipping: [testbed-manager]
2026-03-25 02:09:25.035321 | orchestrator | skipping: [testbed-node-3]
2026-03-25 02:09:25.035328 | orchestrator | skipping: [testbed-node-4]
2026-03-25 02:09:25.035334 | orchestrator | skipping: [testbed-node-5]
2026-03-25 02:09:25.035340 | orchestrator | skipping: [testbed-node-0]
2026-03-25 02:09:25.035346 | orchestrator | skipping: [testbed-node-1]
2026-03-25 02:09:25.035352 | orchestrator | skipping: [testbed-node-2]
2026-03-25 02:09:25.035359 | orchestrator |
2026-03-25 02:09:25.035365 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] ****************
2026-03-25 02:09:25.035371 | orchestrator | Wednesday 25 March 2026 02:09:23 +0000 (0:00:00.312) 0:03:45.603 *******
2026-03-25 02:09:25.035378 | orchestrator | ok: [testbed-manager]
2026-03-25 02:09:25.035385 | orchestrator | ok: [testbed-node-3]
2026-03-25 02:09:25.035392 | orchestrator | ok: [testbed-node-4]
2026-03-25 02:09:25.035398 | orchestrator | ok: [testbed-node-5]
2026-03-25 02:09:25.035404 | orchestrator | ok: [testbed-node-0]
2026-03-25 02:09:25.035410 | orchestrator | ok: [testbed-node-1]
2026-03-25 02:09:25.035417 | orchestrator | ok: [testbed-node-2]
2026-03-25 02:09:25.035423 | orchestrator |
2026-03-25 02:09:25.035429 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ********
2026-03-25 02:09:25.035436 | orchestrator | Wednesday 25 March 2026 02:09:24 +0000 (0:00:00.728) 0:03:46.332 *******
2026-03-25 02:09:25.035444 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-25 02:09:25.035460 | orchestrator |
2026-03-25 02:09:25.035466 | orchestrator | TASK [osism.services.rng : Install rng package] ********************************
2026-03-25 02:09:25.035482 | orchestrator | Wednesday 25 March 2026 02:09:25 +0000 (0:00:00.430) 0:03:46.762 *******
2026-03-25 02:10:41.622210 | orchestrator | ok: [testbed-manager]
2026-03-25 02:10:41.622302 | orchestrator | changed: [testbed-node-3]
2026-03-25 02:10:41.622312 | orchestrator | changed: [testbed-node-0]
2026-03-25 02:10:41.622319 | orchestrator | changed: [testbed-node-2]
2026-03-25 02:10:41.622325 | orchestrator | changed: [testbed-node-5]
2026-03-25 02:10:41.622331 | orchestrator | changed: [testbed-node-4]
2026-03-25 02:10:41.622337 | orchestrator | changed: [testbed-node-1]
2026-03-25 02:10:41.622343 | orchestrator |
2026-03-25 02:10:41.622351 | orchestrator | TASK [osism.services.rng : Remove haveged package] *****************************
2026-03-25 02:10:41.622358 | orchestrator | Wednesday 25 March 2026 02:09:32 +0000 (0:00:07.460) 0:03:54.223 *******
2026-03-25 02:10:41.622364 | orchestrator | ok: [testbed-node-3]
2026-03-25 02:10:41.622371 | orchestrator | ok: [testbed-manager]
2026-03-25 02:10:41.622377 | orchestrator | ok: [testbed-node-4]
2026-03-25 02:10:41.622383 | orchestrator | ok: [testbed-node-5]
2026-03-25 02:10:41.622389 | orchestrator | ok: [testbed-node-0]
2026-03-25 02:10:41.622394 | orchestrator | ok: [testbed-node-1]
2026-03-25 02:10:41.622400 | orchestrator | ok: [testbed-node-2]
2026-03-25 02:10:41.622406 | orchestrator |
2026-03-25 02:10:41.622412 | orchestrator | TASK [osism.services.rng : Manage rng service] *********************************
2026-03-25 02:10:41.622418 | orchestrator | Wednesday 25 March 2026 02:09:33 +0000 (0:00:01.199) 0:03:55.423 *******
2026-03-25 02:10:41.622424 | orchestrator | ok: [testbed-node-3]
2026-03-25 02:10:41.622430 | orchestrator | ok: [testbed-manager]
2026-03-25 02:10:41.622436 | orchestrator | ok: [testbed-node-4]
2026-03-25 02:10:41.622442 | orchestrator | ok: [testbed-node-5]
2026-03-25 02:10:41.622447 | orchestrator | ok: [testbed-node-0]
2026-03-25 02:10:41.622453 | orchestrator | ok: [testbed-node-1]
2026-03-25 02:10:41.622459 | orchestrator | ok: [testbed-node-2]
2026-03-25 02:10:41.622465 | orchestrator |
2026-03-25 02:10:41.622471 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ******
2026-03-25 02:10:41.622477 | orchestrator | Wednesday 25 March 2026 02:09:34 +0000 (0:00:01.163) 0:03:56.586 *******
2026-03-25 02:10:41.622483 | orchestrator | ok: [testbed-manager]
2026-03-25 02:10:41.622497 | orchestrator | ok: [testbed-node-3]
2026-03-25 02:10:41.622503 | orchestrator | ok: [testbed-node-4]
2026-03-25 02:10:41.622509 | orchestrator | ok: [testbed-node-5]
2026-03-25 02:10:41.622515 | orchestrator | ok: [testbed-node-0]
2026-03-25 02:10:41.622521 | orchestrator | ok: [testbed-node-1]
2026-03-25 02:10:41.622527 | orchestrator | ok: [testbed-node-2]
2026-03-25 02:10:41.622534 | orchestrator |
2026-03-25 02:10:41.622540 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] ***
2026-03-25 02:10:41.622547 | orchestrator | Wednesday 25 March 2026 02:09:35 +0000 (0:00:00.349) 0:03:56.936 *******
2026-03-25 02:10:41.622554 | orchestrator | ok: [testbed-manager]
2026-03-25 02:10:41.622560 | orchestrator | ok: [testbed-node-3]
2026-03-25 02:10:41.622566 | orchestrator | ok: [testbed-node-4]
2026-03-25 02:10:41.622572 | orchestrator | ok: [testbed-node-5]
2026-03-25 02:10:41.622579 | orchestrator | ok: [testbed-node-0]
2026-03-25 02:10:41.622585 | orchestrator | ok: [testbed-node-1]
2026-03-25 02:10:41.622591 | orchestrator | ok: [testbed-node-2]
2026-03-25 02:10:41.622597 | orchestrator |
2026-03-25 02:10:41.622603 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] ***
2026-03-25 02:10:41.622610 | orchestrator | Wednesday 25 March 2026 02:09:35 +0000 (0:00:00.381) 0:03:57.317 *******
2026-03-25 02:10:41.622616 | orchestrator | ok: [testbed-manager]
2026-03-25 02:10:41.622622 | orchestrator | ok: [testbed-node-3]
2026-03-25 02:10:41.622628 | orchestrator | ok: [testbed-node-4]
2026-03-25 02:10:41.622653 | orchestrator | ok: [testbed-node-5]
2026-03-25 02:10:41.622660 | orchestrator | ok: [testbed-node-0]
2026-03-25 02:10:41.622666 | orchestrator | ok: [testbed-node-1]
2026-03-25 02:10:41.622673 | orchestrator | ok: [testbed-node-2]
2026-03-25 02:10:41.622679 | orchestrator |
2026-03-25 02:10:41.622685 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] **************************
2026-03-25 02:10:41.622691 | orchestrator | Wednesday 25 March 2026 02:09:35 +0000 (0:00:00.360) 0:03:57.678 *******
2026-03-25 02:10:41.622697 | orchestrator | ok: [testbed-manager]
2026-03-25 02:10:41.622704 | orchestrator | ok: [testbed-node-3]
2026-03-25 02:10:41.622710 | orchestrator | ok: [testbed-node-4]
2026-03-25 02:10:41.622716 | orchestrator | ok: [testbed-node-0]
2026-03-25 02:10:41.622722 | orchestrator | ok: [testbed-node-5]
2026-03-25 02:10:41.622728 | orchestrator | ok: [testbed-node-2]
2026-03-25 02:10:41.622734 | orchestrator | ok: [testbed-node-1]
2026-03-25 02:10:41.622741 | orchestrator |
2026-03-25 02:10:41.622747 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] *******
2026-03-25 02:10:41.622753 | orchestrator | Wednesday 25 March 2026 02:09:41 +0000 (0:00:05.419) 0:04:03.097 *******
2026-03-25 02:10:41.622761 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-25 02:10:41.622770 | orchestrator |
2026-03-25 02:10:41.622776 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************
2026-03-25 02:10:41.622782 | orchestrator | Wednesday 25 March 2026 02:09:41 +0000 (0:00:00.528) 0:04:03.625 *******
2026-03-25 02:10:41.622789 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)
2026-03-25 02:10:41.622795 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)
2026-03-25 02:10:41.622801 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)
2026-03-25 02:10:41.622807 | orchestrator | skipping: [testbed-manager]
2026-03-25 02:10:41.622814 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)
2026-03-25 02:10:41.622834 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)
2026-03-25 02:10:41.622840 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)
2026-03-25 02:10:41.622847 | orchestrator | skipping: [testbed-node-3]
2026-03-25 02:10:41.622853 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)
2026-03-25 02:10:41.622859 | orchestrator | skipping: [testbed-node-4]
2026-03-25 02:10:41.622865 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)
2026-03-25 02:10:41.622871 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)
2026-03-25 02:10:41.622877 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)
2026-03-25 02:10:41.622884 | orchestrator | skipping: [testbed-node-5]
2026-03-25 02:10:41.622890 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)
2026-03-25 02:10:41.622896 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)
2026-03-25 02:10:41.622916 | orchestrator | skipping: [testbed-node-0]
2026-03-25 02:10:41.622923 | orchestrator | skipping: [testbed-node-1]
2026-03-25 02:10:41.622929 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)
2026-03-25 02:10:41.622935 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)
2026-03-25 02:10:41.622941 | orchestrator | skipping: [testbed-node-2]
2026-03-25 02:10:41.622948 | orchestrator |
2026-03-25 02:10:41.622954 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] ***************************
2026-03-25 02:10:41.622960 | orchestrator | Wednesday 25 March 2026 02:09:42 +0000 (0:00:00.397) 0:04:04.023 *******
2026-03-25 02:10:41.622983 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-25 02:10:41.622990 | orchestrator |
2026-03-25 02:10:41.622996 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ********************************
2026-03-25 02:10:41.623008 | orchestrator | Wednesday 25 March 2026 02:09:42 +0000 (0:00:00.490) 0:04:04.513 *******
2026-03-25 02:10:41.623014 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)
2026-03-25 02:10:41.623021 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)
2026-03-25 02:10:41.623027 | orchestrator | skipping: [testbed-manager]
2026-03-25 02:10:41.623033 | orchestrator | skipping: [testbed-node-3]
2026-03-25 02:10:41.623040 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)
2026-03-25 02:10:41.623046 | orchestrator | skipping: [testbed-node-4]
2026-03-25 02:10:41.623052 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)
2026-03-25 02:10:41.623058 | orchestrator | skipping: [testbed-node-5]
2026-03-25 02:10:41.623064 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)
2026-03-25 02:10:41.623071 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)
2026-03-25 02:10:41.623077 | orchestrator | skipping: [testbed-node-0]
2026-03-25 02:10:41.623083 | orchestrator | skipping: [testbed-node-1]
2026-03-25 02:10:41.623089 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)
2026-03-25 02:10:41.623096 | orchestrator | skipping: [testbed-node-2]
2026-03-25 02:10:41.623102 | orchestrator |
2026-03-25 02:10:41.623108 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] **************************
2026-03-25 02:10:41.623114 | orchestrator | Wednesday 25 March 2026 02:09:43 +0000 (0:00:00.394) 0:04:04.907 *******
2026-03-25 02:10:41.623121 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-25 02:10:41.623127 | orchestrator |
2026-03-25 02:10:41.623133 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] **********************
2026-03-25 02:10:41.623139 | orchestrator | Wednesday 25 March 2026 02:09:43 +0000 (0:00:00.453) 0:04:05.361 *******
2026-03-25 02:10:41.623146 | orchestrator | changed: [testbed-node-3]
2026-03-25 02:10:41.623152 | orchestrator | changed: [testbed-node-0]
2026-03-25 02:10:41.623158 | orchestrator | changed: [testbed-node-2]
2026-03-25 02:10:41.623164 | orchestrator | changed: [testbed-node-4]
2026-03-25 02:10:41.623178 | orchestrator | changed: [testbed-node-5]
2026-03-25 02:10:41.623189 | orchestrator | changed: [testbed-node-1]
2026-03-25 02:10:41.623198 | orchestrator | changed: [testbed-manager]
2026-03-25 02:10:41.623208 | orchestrator |
2026-03-25 02:10:41.623218 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************
2026-03-25 02:10:41.623228 | orchestrator | Wednesday 25 March 2026 02:10:18 +0000 (0:00:34.634) 0:04:39.995 *******
2026-03-25 02:10:41.623238 | orchestrator | changed: [testbed-manager]
2026-03-25 02:10:41.623247 | orchestrator | changed: [testbed-node-3]
2026-03-25 02:10:41.623253 | orchestrator | changed: [testbed-node-4]
2026-03-25 02:10:41.623259 | orchestrator | changed: [testbed-node-0]
2026-03-25 02:10:41.623265 | orchestrator | changed: [testbed-node-5]
2026-03-25 02:10:41.623272 | orchestrator | changed: [testbed-node-2]
2026-03-25 02:10:41.623278 | orchestrator | changed: [testbed-node-1]
2026-03-25 02:10:41.623284 | orchestrator |
2026-03-25 02:10:41.623290 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] ***********
2026-03-25 02:10:41.623297 | orchestrator | Wednesday 25 March 2026 02:10:26 +0000 (0:00:07.956) 0:04:47.952 *******
2026-03-25 02:10:41.623303 | orchestrator | changed: [testbed-node-3]
2026-03-25 02:10:41.623309 | orchestrator | changed: [testbed-node-0]
2026-03-25 02:10:41.623315 | orchestrator | changed: [testbed-node-4]
2026-03-25 02:10:41.623321 | orchestrator | changed: [testbed-node-2]
2026-03-25 02:10:41.623327 | orchestrator | changed: [testbed-node-5]
2026-03-25 02:10:41.623334 | orchestrator | changed: [testbed-node-1]
2026-03-25 02:10:41.623340 | orchestrator | changed: [testbed-manager]
2026-03-25 02:10:41.623346 | orchestrator |
2026-03-25 02:10:41.623352 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] **********
2026-03-25 02:10:41.623364 | orchestrator | Wednesday 25 March 2026 02:10:34 +0000 (0:00:07.801) 0:04:55.754 *******
2026-03-25 02:10:41.623370 | orchestrator | ok: [testbed-node-3]
2026-03-25 02:10:41.623376 | orchestrator | ok: [testbed-node-4]
2026-03-25 02:10:41.623383 | orchestrator | ok: [testbed-node-5]
2026-03-25 02:10:41.623389 | orchestrator | ok: [testbed-node-0]
2026-03-25 02:10:41.623395 | orchestrator | ok: [testbed-manager]
2026-03-25 02:10:41.623401 | orchestrator | ok: [testbed-node-2]
2026-03-25 02:10:41.623408 | orchestrator | ok: [testbed-node-1]
2026-03-25 02:10:41.623414 | orchestrator |
2026-03-25 02:10:41.623420 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] ***
2026-03-25 02:10:41.623427 | orchestrator | Wednesday 25 March 2026 02:10:35 +0000 (0:00:01.772) 0:04:57.526 *******
2026-03-25 02:10:41.623433 | orchestrator | changed: [testbed-node-3]
2026-03-25 02:10:41.623439 | orchestrator | changed: [testbed-node-5]
2026-03-25 02:10:41.623445 | orchestrator | changed: [testbed-node-4]
2026-03-25 02:10:41.623451 | orchestrator | changed: [testbed-node-0]
2026-03-25 02:10:41.623458 | orchestrator | changed: [testbed-node-2]
2026-03-25 02:10:41.623464 | orchestrator | changed: [testbed-manager]
2026-03-25 02:10:41.623470 | orchestrator | changed: [testbed-node-1]
2026-03-25 02:10:41.623477 | orchestrator |
2026-03-25 02:10:41.623489 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] *************************
2026-03-25 02:10:53.415644 | orchestrator | Wednesday 25 March 2026 02:10:41 +0000 (0:00:05.820) 0:05:03.346 *******
2026-03-25 02:10:53.415722 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-25 02:10:53.415730 | orchestrator |
2026-03-25 02:10:53.415736 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] *******
2026-03-25 02:10:53.415740 | orchestrator | Wednesday 25 March 2026 02:10:42 +0000 (0:00:00.449) 0:05:03.796 *******
2026-03-25 02:10:53.415745 | orchestrator | changed: [testbed-node-3]
2026-03-25 02:10:53.415751 | orchestrator | changed: [testbed-manager]
2026-03-25 02:10:53.415755 | orchestrator | changed: [testbed-node-4]
2026-03-25 02:10:53.415759 | orchestrator | changed: [testbed-node-5]
2026-03-25 02:10:53.415763 | orchestrator | changed: [testbed-node-0]
2026-03-25 02:10:53.415767 | orchestrator | changed: [testbed-node-1]
2026-03-25 02:10:53.415771 | orchestrator | changed: [testbed-node-2]
2026-03-25 02:10:53.415775 | orchestrator |
2026-03-25 02:10:53.415780 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] *************************
2026-03-25 02:10:53.415784 | orchestrator | Wednesday 25 March 2026 02:10:42 +0000 (0:00:00.721) 0:05:04.518 *******
2026-03-25 02:10:53.415788 | orchestrator | ok: [testbed-node-3]
2026-03-25 02:10:53.415792 | orchestrator | ok: [testbed-node-4]
2026-03-25 02:10:53.415796 | orchestrator | ok: [testbed-node-5]
2026-03-25 02:10:53.415800 | orchestrator | ok: [testbed-manager]
2026-03-25 02:10:53.415804 | orchestrator | ok: [testbed-node-0]
2026-03-25 02:10:53.415808 | orchestrator | ok: [testbed-node-2]
2026-03-25 02:10:53.415812 | orchestrator | ok: [testbed-node-1]
2026-03-25 02:10:53.415816 | orchestrator |
2026-03-25 02:10:53.415820 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] ****************************
2026-03-25 02:10:53.415824 | orchestrator | Wednesday 25 March 2026 02:10:44 +0000 (0:00:01.823) 0:05:06.341 *******
2026-03-25 02:10:53.415828 | orchestrator | changed: [testbed-node-4]
2026-03-25 02:10:53.415832 | orchestrator | changed: [testbed-node-3]
2026-03-25 02:10:53.415836 | orchestrator | changed: [testbed-node-5]
2026-03-25 02:10:53.415840 | orchestrator | changed: [testbed-node-0]
2026-03-25 02:10:53.415844 | orchestrator | changed: [testbed-manager]
2026-03-25 02:10:53.415848 | orchestrator | changed: [testbed-node-2]
2026-03-25 02:10:53.415852 | orchestrator | changed: [testbed-node-1]
2026-03-25 02:10:53.415856 | orchestrator |
2026-03-25 02:10:53.415860 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] ***********************
2026-03-25 02:10:53.415864 | orchestrator | Wednesday 25 March 2026 02:10:45 +0000 (0:00:00.815) 0:05:07.157 *******
2026-03-25 02:10:53.415884 | orchestrator | skipping: [testbed-manager]
2026-03-25 02:10:53.415888 | orchestrator | skipping: [testbed-node-3]
2026-03-25 02:10:53.415892 | orchestrator | skipping: [testbed-node-4]
2026-03-25 02:10:53.415896 | orchestrator | skipping: [testbed-node-5]
2026-03-25 02:10:53.415900 | orchestrator | skipping: [testbed-node-0]
2026-03-25 02:10:53.415904 | orchestrator | skipping: [testbed-node-1]
2026-03-25 02:10:53.415907 | orchestrator | skipping: [testbed-node-2]
2026-03-25 02:10:53.415911 | orchestrator |
2026-03-25 02:10:53.415915 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] *********************
2026-03-25 02:10:53.415919 | orchestrator | Wednesday 25 March 2026 02:10:45 +0000 (0:00:00.309) 0:05:07.467 *******
2026-03-25 02:10:53.415923 | orchestrator | skipping: [testbed-manager]
2026-03-25 02:10:53.415927 | orchestrator | skipping: [testbed-node-3]
2026-03-25 02:10:53.415931 | orchestrator | skipping: [testbed-node-4]
2026-03-25 02:10:53.415944 | orchestrator | skipping: [testbed-node-5]
2026-03-25 02:10:53.415948 | orchestrator | skipping: [testbed-node-0]
2026-03-25 02:10:53.415952 | orchestrator | skipping: [testbed-node-1]
2026-03-25 02:10:53.415956 | orchestrator | skipping: [testbed-node-2]
2026-03-25 02:10:53.415960 | orchestrator |
2026-03-25 02:10:53.416000 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ******
2026-03-25 02:10:53.416004 | orchestrator | Wednesday 25 March 2026 02:10:46 +0000 (0:00:00.455) 0:05:07.922 *******
2026-03-25 02:10:53.416008 | orchestrator | ok: [testbed-manager]
2026-03-25 02:10:53.416011 | orchestrator | ok: [testbed-node-3]
2026-03-25 02:10:53.416015 | orchestrator | ok: [testbed-node-4]
2026-03-25 02:10:53.416019 | orchestrator | ok: [testbed-node-5]
2026-03-25 02:10:53.416023 | orchestrator | ok: [testbed-node-0]
2026-03-25 02:10:53.416026 | orchestrator | ok: [testbed-node-1]
2026-03-25 02:10:53.416030 | orchestrator | ok: [testbed-node-2]
2026-03-25 02:10:53.416034 | orchestrator |
2026-03-25 02:10:53.416038 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] ****
2026-03-25 02:10:53.416042 | orchestrator | Wednesday 25 March 2026 02:10:46 +0000 (0:00:00.352) 0:05:08.275 *******
2026-03-25 02:10:53.416045 | orchestrator | skipping: [testbed-manager]
2026-03-25 02:10:53.416049 | orchestrator | skipping: [testbed-node-3]
2026-03-25 02:10:53.416053 | orchestrator | skipping: [testbed-node-4]
2026-03-25 02:10:53.416057 | orchestrator | skipping: [testbed-node-5]
2026-03-25 02:10:53.416061 | orchestrator | skipping: [testbed-node-0]
2026-03-25 02:10:53.416064 | orchestrator | skipping: [testbed-node-1]
2026-03-25 02:10:53.416068 | orchestrator | skipping: [testbed-node-2]
2026-03-25 02:10:53.416072 | orchestrator |
2026-03-25 02:10:53.416076 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] ***
2026-03-25 02:10:53.416081 | orchestrator | Wednesday 25 March 2026 02:10:46 +0000 (0:00:00.343) 0:05:08.619 *******
2026-03-25 02:10:53.416085 | orchestrator | ok: [testbed-manager]
2026-03-25 02:10:53.416089 | orchestrator | ok: [testbed-node-3]
2026-03-25 02:10:53.416092 | orchestrator | ok: [testbed-node-4]
2026-03-25 02:10:53.416096 | orchestrator | ok: [testbed-node-5]
2026-03-25 02:10:53.416100 | orchestrator | ok: [testbed-node-0]
2026-03-25 02:10:53.416104 | orchestrator | ok: [testbed-node-1]
2026-03-25 02:10:53.416108 | orchestrator | ok: [testbed-node-2]
2026-03-25 02:10:53.416112 | orchestrator |
2026-03-25 02:10:53.416116 | orchestrator | TASK [osism.services.docker : Print used docker version] ***********************
2026-03-25 02:10:53.416119 | orchestrator | Wednesday 25 March 2026 02:10:47 +0000 (0:00:00.350) 0:05:08.998 *******
2026-03-25 02:10:53.416123 | orchestrator | ok: [testbed-manager] =>
2026-03-25 02:10:53.416127 | orchestrator |   docker_version: 5:27.5.1
2026-03-25 02:10:53.416131 | orchestrator | ok: [testbed-node-3] =>
2026-03-25 02:10:53.416134 | orchestrator |   docker_version: 5:27.5.1
2026-03-25 02:10:53.416138 | orchestrator | ok: [testbed-node-4] =>
2026-03-25 02:10:53.416142 | orchestrator |   docker_version: 5:27.5.1
2026-03-25 02:10:53.416146 | orchestrator | ok: [testbed-node-5] =>
2026-03-25 02:10:53.416149 | orchestrator |   docker_version: 5:27.5.1
2026-03-25 02:10:53.416165 | orchestrator | ok: [testbed-node-0] =>
2026-03-25 02:10:53.416176 | orchestrator |   docker_version: 5:27.5.1
2026-03-25 02:10:53.416180 | orchestrator | ok: [testbed-node-1] =>
2026-03-25 02:10:53.416184 | orchestrator |   docker_version: 5:27.5.1
2026-03-25 02:10:53.416188 | orchestrator | ok: [testbed-node-2] =>
2026-03-25 02:10:53.416192 | orchestrator |   docker_version: 5:27.5.1
2026-03-25 02:10:53.416195 | orchestrator |
2026-03-25 02:10:53.416199 | orchestrator | TASK [osism.services.docker : Print used docker cli version] *******************
2026-03-25 02:10:53.416203 | orchestrator | Wednesday 25 March 2026 02:10:47 +0000 (0:00:00.350) 0:05:09.348 *******
2026-03-25 02:10:53.416207 | orchestrator | ok: [testbed-manager] =>
2026-03-25 02:10:53.416211 | orchestrator |   docker_cli_version: 5:27.5.1
2026-03-25 02:10:53.416214 | orchestrator | ok: [testbed-node-3] =>
2026-03-25 02:10:53.416218 | orchestrator |   docker_cli_version: 5:27.5.1
2026-03-25 02:10:53.416222 | orchestrator | ok: [testbed-node-4] =>
2026-03-25 02:10:53.416226 | orchestrator |   docker_cli_version: 5:27.5.1
2026-03-25 02:10:53.416231 | orchestrator | ok: [testbed-node-5] =>
2026-03-25 02:10:53.416235 | orchestrator |   docker_cli_version: 5:27.5.1
2026-03-25 02:10:53.416239 | orchestrator | ok: [testbed-node-0] =>
2026-03-25 02:10:53.416243 | orchestrator |   docker_cli_version: 5:27.5.1
2026-03-25 02:10:53.416248 | orchestrator | ok: [testbed-node-1] =>
2026-03-25 02:10:53.416252 | orchestrator |   docker_cli_version: 5:27.5.1
2026-03-25 02:10:53.416256 | orchestrator | ok: [testbed-node-2] =>
2026-03-25 02:10:53.416260 | orchestrator |   docker_cli_version: 5:27.5.1
2026-03-25 02:10:53.416265 | orchestrator |
2026-03-25 02:10:53.416269 | orchestrator | TASK [osism.services.docker : Include block storage tasks] *********************
2026-03-25 02:10:53.416274 | orchestrator | Wednesday 25 March 2026 02:10:47 +0000 (0:00:00.370) 0:05:09.718 *******
2026-03-25 02:10:53.416278 | orchestrator | skipping: [testbed-manager]
2026-03-25 02:10:53.416282 | orchestrator | skipping: [testbed-node-3]
2026-03-25 02:10:53.416287 | orchestrator | skipping: [testbed-node-4]
2026-03-25 02:10:53.416291 | orchestrator | skipping: [testbed-node-5]
2026-03-25 02:10:53.416295 | orchestrator | skipping: [testbed-node-0]
2026-03-25 02:10:53.416300 | orchestrator | skipping: [testbed-node-1]
2026-03-25 02:10:53.416304 | orchestrator | skipping: [testbed-node-2]
2026-03-25 02:10:53.416308 | orchestrator |
2026-03-25 02:10:53.416312 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] **********************
2026-03-25 02:10:53.416317 | orchestrator | Wednesday 25 March 2026 02:10:48 +0000 (0:00:00.309) 0:05:10.028 *******
2026-03-25 02:10:53.416321 | orchestrator | skipping: [testbed-manager]
2026-03-25 02:10:53.416325 | orchestrator | skipping: [testbed-node-3]
2026-03-25 02:10:53.416330 | orchestrator | skipping: [testbed-node-4]
2026-03-25 02:10:53.416334 | orchestrator | skipping: [testbed-node-5]
2026-03-25 02:10:53.416338 | orchestrator | skipping: [testbed-node-0]
2026-03-25 02:10:53.416343 | orchestrator | skipping: [testbed-node-1]
2026-03-25 02:10:53.416347 | orchestrator | skipping: [testbed-node-2]
2026-03-25 02:10:53.416351 | orchestrator |
2026-03-25 02:10:53.416356 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ********************
2026-03-25 02:10:53.416360 | orchestrator | Wednesday 25 March 2026 02:10:48 +0000 (0:00:00.306) 0:05:10.335 *******
2026-03-25 02:10:53.416366 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-25 02:10:53.416372 | orchestrator |
2026-03-25 02:10:53.416379 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] ****
2026-03-25 02:10:53.416384 | orchestrator | Wednesday 25 March 2026 02:10:49 +0000 (0:00:00.490) 0:05:10.825 *******
2026-03-25 02:10:53.416388 | orchestrator | ok: [testbed-node-3]
2026-03-25 02:10:53.416393 | orchestrator | ok: [testbed-node-0]
2026-03-25 02:10:53.416397 | orchestrator | ok: [testbed-node-4]
2026-03-25 02:10:53.416401 | orchestrator | ok: [testbed-node-5]
2026-03-25 02:10:53.416406 | orchestrator | ok: [testbed-manager]
2026-03-25 02:10:53.416414 | orchestrator | ok: [testbed-node-1]
2026-03-25 02:10:53.416418 | orchestrator | ok: [testbed-node-2]
2026-03-25 02:10:53.416422 | orchestrator |
2026-03-25 02:10:53.416427 | orchestrator | TASK [osism.services.docker : Gather package facts] ****************************
2026-03-25 02:10:53.416431 | orchestrator | Wednesday 25 March 2026 02:10:50 +0000 (0:00:00.964) 0:05:11.789 *******
2026-03-25 02:10:53.416436 | orchestrator | ok: [testbed-node-3]
2026-03-25 02:10:53.416440 | orchestrator | ok: [testbed-node-5]
2026-03-25 02:10:53.416444 | orchestrator | ok: [testbed-node-4]
2026-03-25 02:10:53.416449 | orchestrator | ok: [testbed-node-0]
2026-03-25 02:10:53.416453 | orchestrator | ok: [testbed-node-2]
2026-03-25 02:10:53.416457 | orchestrator | ok: [testbed-manager]
2026-03-25 02:10:53.416461 | orchestrator | ok: [testbed-node-1]
2026-03-25 02:10:53.416466 | orchestrator |
2026-03-25 02:10:53.416470 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] ***
2026-03-25 02:10:53.416475 | orchestrator | Wednesday 25 March 2026 02:10:52 +0000 (0:00:02.944) 0:05:14.734 *******
2026-03-25 02:10:53.416479 | orchestrator | skipping: [testbed-manager] => (item=containerd)
2026-03-25 02:10:53.416484 | orchestrator | skipping: [testbed-manager] => (item=docker.io)
2026-03-25 02:10:53.416488 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)
2026-03-25 02:10:53.416493 | orchestrator | skipping: [testbed-node-3] => (item=containerd)
2026-03-25 02:10:53.416497 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)
2026-03-25 02:10:53.416501 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)
2026-03-25 02:10:53.416505 | orchestrator | skipping: [testbed-manager]
2026-03-25 02:10:53.416510 | orchestrator | skipping: [testbed-node-4] => (item=containerd)
2026-03-25 02:10:53.416514 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)
2026-03-25 02:10:53.416518 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)
2026-03-25 02:10:53.416522 | orchestrator | skipping: [testbed-node-3]
2026-03-25 02:10:53.416527 | orchestrator | skipping: [testbed-node-5] => (item=containerd)
2026-03-25 02:10:53.416531 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)
2026-03-25 02:10:53.416535 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)
2026-03-25 02:10:53.416540 | orchestrator | skipping: [testbed-node-4]
2026-03-25 02:10:53.416544 | orchestrator | skipping: [testbed-node-0] => (item=containerd)
2026-03-25 02:10:53.416550 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)
2026-03-25 02:11:51.099028 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)
2026-03-25 02:11:51.099124 | orchestrator | skipping: [testbed-node-5]
2026-03-25 02:11:51.099136 | orchestrator | skipping: [testbed-node-1] => (item=containerd)
2026-03-25 02:11:51.099148 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)
2026-03-25 02:11:51.099160 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)
2026-03-25 02:11:51.099172 | orchestrator | skipping: [testbed-node-0]
2026-03-25 02:11:51.099183 | orchestrator | skipping: [testbed-node-1]
2026-03-25 02:11:51.099195 | orchestrator | skipping: [testbed-node-2] => (item=containerd)
2026-03-25 02:11:51.099206 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)
2026-03-25 02:11:51.099217 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)
2026-03-25 02:11:51.099227 | orchestrator | skipping: [testbed-node-2]
2026-03-25 02:11:51.099239 | orchestrator |
2026-03-25 02:11:51.099252 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] *************
2026-03-25 02:11:51.099266 | orchestrator | Wednesday 25 March 2026 02:10:53 +0000 (0:00:00.668) 0:05:15.403 *******
2026-03-25 02:11:51.099279 | orchestrator | ok: [testbed-manager]
2026-03-25 02:11:51.099291 | orchestrator | changed: [testbed-node-3]
2026-03-25 02:11:51.099302 | orchestrator | changed: [testbed-node-4]
2026-03-25 02:11:51.099314 | orchestrator | changed: [testbed-node-0]
2026-03-25 02:11:51.099326 | orchestrator | changed: [testbed-node-5]
2026-03-25 02:11:51.099337 | orchestrator | changed: [testbed-node-2]
2026-03-25 02:11:51.099373 | orchestrator | changed: [testbed-node-1]
2026-03-25 02:11:51.099381 | orchestrator |
2026-03-25 02:11:51.099388 | orchestrator | TASK [osism.services.docker : Add repository gpg key] **************************
2026-03-25 02:11:51.099395 | orchestrator | Wednesday 25 March 2026 02:11:00 +0000 (0:00:06.450) 0:05:21.854 *******
2026-03-25 02:11:51.099402 | orchestrator | changed: [testbed-node-3]
2026-03-25 02:11:51.099409 | orchestrator | changed: [testbed-node-4]
2026-03-25 02:11:51.099415 | orchestrator | changed: [testbed-node-5]
2026-03-25 02:11:51.099422 | orchestrator | changed: [testbed-node-0]
2026-03-25 02:11:51.099428 | orchestrator | ok: [testbed-manager]
2026-03-25 02:11:51.099435 | orchestrator | changed: [testbed-node-1]
2026-03-25 02:11:51.099442 | orchestrator | changed: [testbed-node-2]
2026-03-25 02:11:51.099448 | orchestrator |
2026-03-25 02:11:51.099455 | orchestrator | TASK [osism.services.docker : Add repository] **********************************
2026-03-25 02:11:51.099463 | orchestrator | Wednesday 25 March 2026 02:11:01 +0000 (0:00:01.074) 0:05:22.929 *******
2026-03-25 02:11:51.099475 | orchestrator | ok: [testbed-manager]
2026-03-25 02:11:51.099486 | orchestrator | changed: [testbed-node-3]
2026-03-25 02:11:51.099497 | orchestrator | changed: [testbed-node-4]
2026-03-25 02:11:51.099508 | orchestrator | changed: [testbed-node-5]
2026-03-25 02:11:51.099519 | orchestrator | changed: [testbed-node-0]
2026-03-25 02:11:51.099531 | orchestrator | changed: [testbed-node-2]
2026-03-25 02:11:51.099542 | orchestrator | changed: [testbed-node-1]
2026-03-25 02:11:51.099554 | orchestrator |
2026-03-25 02:11:51.099566 | orchestrator | TASK [osism.services.docker : Update package cache] ****************************
2026-03-25 02:11:51.099577 | orchestrator | Wednesday 25 March 2026 02:11:09 +0000 (0:00:08.289) 0:05:31.219 *******
2026-03-25 02:11:51.099589 | orchestrator | changed: [testbed-node-3]
2026-03-25 02:11:51.099599 | orchestrator | changed: [testbed-manager]
2026-03-25 02:11:51.099607 | orchestrator | changed: [testbed-node-4]
2026-03-25 02:11:51.099615 | orchestrator | changed: [testbed-node-5]
2026-03-25 02:11:51.099623 | orchestrator | changed: [testbed-node-2]
2026-03-25 02:11:51.099630 | orchestrator | changed: [testbed-node-0]
2026-03-25 02:11:51.099639 | orchestrator | changed: [testbed-node-1]
2026-03-25 02:11:51.099646 | orchestrator |
2026-03-25 02:11:51.099654 | orchestrator | TASK [osism.services.docker : Pin docker package version] **********************
2026-03-25 02:11:51.099662 | orchestrator | Wednesday 25 March 2026 02:11:12 +0000 (0:00:03.303) 0:05:34.522 *******
2026-03-25 02:11:51.099670 | orchestrator | ok: [testbed-manager]
2026-03-25 02:11:51.099677 | orchestrator | changed: [testbed-node-3]
2026-03-25 02:11:51.099685 | orchestrator | changed: [testbed-node-4]
2026-03-25 02:11:51.099692 | orchestrator | changed: [testbed-node-5]
2026-03-25 02:11:51.099700 | orchestrator | changed: [testbed-node-0]
2026-03-25 02:11:51.099708 | orchestrator | changed: [testbed-node-1]
2026-03-25 02:11:51.099715 | orchestrator | changed: [testbed-node-2]
2026-03-25 02:11:51.099722 | orchestrator |
2026-03-25 02:11:51.099730 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ******************
2026-03-25 02:11:51.099737 | orchestrator | Wednesday 25 March 2026 02:11:14 +0000 (0:00:01.307) 0:05:35.830 *******
2026-03-25 02:11:51.099745 | orchestrator | ok: [testbed-manager]
2026-03-25 02:11:51.099753 | orchestrator | changed: [testbed-node-3]
2026-03-25 02:11:51.099760 | orchestrator | changed: [testbed-node-4]
2026-03-25 02:11:51.099767 | orchestrator | changed: [testbed-node-5]
2026-03-25 02:11:51.099775 | orchestrator | changed: [testbed-node-0]
2026-03-25 02:11:51.099782 | orchestrator | changed: [testbed-node-1]
2026-03-25 02:11:51.099790 | orchestrator | changed: [testbed-node-2]
2026-03-25 02:11:51.099797 | orchestrator |
2026-03-25 02:11:51.099805 | orchestrator | TASK [osism.services.docker : Unlock containerd package] ***********************
2026-03-25 02:11:51.099813 | orchestrator | Wednesday 25 March 2026 02:11:15 +0000 (0:00:01.601) 0:05:37.431 *******
2026-03-25 02:11:51.099820 | orchestrator | skipping: [testbed-node-3]
2026-03-25 02:11:51.099828 | orchestrator | skipping: [testbed-node-4]
2026-03-25 02:11:51.099835 | orchestrator | skipping: [testbed-node-5]
2026-03-25 02:11:51.099843 | orchestrator | skipping: [testbed-node-0]
2026-03-25 02:11:51.099858 | orchestrator | skipping: [testbed-node-1]
2026-03-25 02:11:51.099865 | orchestrator | skipping: [testbed-node-2]
2026-03-25 02:11:51.099873 | orchestrator | changed: [testbed-manager]
2026-03-25 02:11:51.099881 | orchestrator |
2026-03-25 02:11:51.099889 | orchestrator | TASK [osism.services.docker : Install containerd package] **********************
2026-03-25 02:11:51.099896 | orchestrator | Wednesday 25 March 2026 02:11:16 +0000 (0:00:00.640) 0:05:38.072 *******
2026-03-25 02:11:51.099903 | orchestrator | ok: [testbed-manager]
2026-03-25 02:11:51.099910 | orchestrator | changed: [testbed-node-3]
2026-03-25 02:11:51.099916 | orchestrator | changed: [testbed-node-0]
2026-03-25 02:11:51.099923 | orchestrator | changed: [testbed-node-5]
2026-03-25 02:11:51.099929 | orchestrator | changed: [testbed-node-4]
2026-03-25 02:11:51.099936 | orchestrator | changed: [testbed-node-2]
2026-03-25 02:11:51.099942 | orchestrator | changed: [testbed-node-1]
2026-03-25 02:11:51.099949 | orchestrator |
2026-03-25 02:11:51.099955 | orchestrator | TASK [osism.services.docker : Lock containerd package] *************************
2026-03-25 02:11:51.099996 | orchestrator | Wednesday 25 March 2026 02:11:25 +0000 (0:00:09.225) 0:05:47.297 *******
2026-03-25 02:11:51.100004 | orchestrator | changed: [testbed-manager]
2026-03-25 02:11:51.100012 | orchestrator | changed: [testbed-node-3]
2026-03-25 02:11:51.100018 | orchestrator | changed: [testbed-node-4]
2026-03-25 02:11:51.100025 | orchestrator | changed: [testbed-node-5]
2026-03-25 02:11:51.100031 | orchestrator | changed: [testbed-node-0]
2026-03-25 02:11:51.100038 | orchestrator | changed: [testbed-node-1]
2026-03-25 02:11:51.100044 | orchestrator | changed: [testbed-node-2]
2026-03-25 02:11:51.100051 | orchestrator |
2026-03-25 02:11:51.100058 | orchestrator | TASK [osism.services.docker : Install docker-cli package] **********************
2026-03-25 02:11:51.100065 | orchestrator | Wednesday 25 March 2026 02:11:26 +0000 (0:00:00.990) 0:05:48.288 *******
2026-03-25 02:11:51.100071 | orchestrator | ok: [testbed-manager]
2026-03-25 02:11:51.100078 | orchestrator | changed: [testbed-node-3]
2026-03-25 02:11:51.100085 | orchestrator | changed: [testbed-node-4]
2026-03-25 02:11:51.100091 | orchestrator | changed: [testbed-node-5]
2026-03-25 02:11:51.100098 | orchestrator | changed: [testbed-node-0]
2026-03-25 02:11:51.100104 | orchestrator | changed: [testbed-node-2]
2026-03-25 02:11:51.100111 | orchestrator | changed: [testbed-node-1]
2026-03-25 02:11:51.100117 | orchestrator |
2026-03-25 02:11:51.100124 | orchestrator | TASK [osism.services.docker : Install docker package] **************************
2026-03-25 02:11:51.100131 | orchestrator | Wednesday 25 March 2026 02:11:34 +0000 (0:00:08.003) 0:05:56.292 *******
2026-03-25 02:11:51.100137 | orchestrator | ok: [testbed-manager]
2026-03-25 02:11:51.100144 | orchestrator | changed: [testbed-node-3]
2026-03-25 02:11:51.100150 | orchestrator | changed: [testbed-node-4]
2026-03-25 02:11:51.100157 | orchestrator | changed: [testbed-node-0]
2026-03-25 02:11:51.100164 | orchestrator | changed: [testbed-node-5]
2026-03-25 02:11:51.100170 | orchestrator | changed: [testbed-node-1]
2026-03-25 02:11:51.100177 | orchestrator | changed: [testbed-node-2]
2026-03-25 02:11:51.100183 | orchestrator |
2026-03-25 02:11:51.100190 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] ***
2026-03-25 02:11:51.100197 | orchestrator | Wednesday 25 March 2026 02:11:44 +0000 (0:00:09.632) 0:06:05.924 *******
2026-03-25 02:11:51.100203 | orchestrator | ok: [testbed-manager] => (item=python3-docker)
2026-03-25 02:11:51.100210 | orchestrator | ok: [testbed-node-3] => (item=python3-docker)
2026-03-25 02:11:51.100217 | orchestrator | ok: [testbed-node-4] => (item=python3-docker)
2026-03-25 02:11:51.100223 | orchestrator | ok: [testbed-node-5] => (item=python3-docker)
2026-03-25 02:11:51.100230 | orchestrator | ok: [testbed-node-0] => (item=python3-docker)
2026-03-25 02:11:51.100236 | orchestrator | ok: [testbed-manager] => (item=python-docker)
2026-03-25 02:11:51.100243 | orchestrator | ok: [testbed-node-1] => (item=python3-docker)
2026-03-25 02:11:51.100250 | orchestrator | ok: [testbed-node-3] => (item=python-docker)
2026-03-25 02:11:51.100256 | orchestrator | ok: [testbed-node-2] => (item=python3-docker)
2026-03-25 02:11:51.100268 | orchestrator | ok: [testbed-node-4] => (item=python-docker)
2026-03-25 02:11:51.100275 | orchestrator | ok: [testbed-node-5] => (item=python-docker)
2026-03-25 02:11:51.100320 | orchestrator | ok: [testbed-node-0] => (item=python-docker)
2026-03-25 02:11:51.100328 | orchestrator | ok: [testbed-node-1] => (item=python-docker)
2026-03-25 02:11:51.100335 | orchestrator | ok: [testbed-node-2] => (item=python-docker)
2026-03-25 02:11:51.100342 | orchestrator |
2026-03-25 02:11:51.100348 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ******************
2026-03-25 02:11:51.100355 | orchestrator | Wednesday 25 March 2026 02:11:45 +0000 (0:00:01.179) 0:06:07.103 *******
2026-03-25 02:11:51.100365 | orchestrator | skipping: [testbed-manager]
2026-03-25 02:11:51.100372 | orchestrator | skipping: [testbed-node-3]
2026-03-25 02:11:51.100379 | orchestrator | skipping: [testbed-node-4]
2026-03-25 02:11:51.100385 | orchestrator | skipping: [testbed-node-5]
2026-03-25 02:11:51.100392 | orchestrator | skipping: [testbed-node-0]
2026-03-25 02:11:51.100398 | orchestrator | skipping: [testbed-node-1]
2026-03-25 02:11:51.100405 | orchestrator | skipping: [testbed-node-2]
2026-03-25 02:11:51.100411 | orchestrator |
2026-03-25 02:11:51.100418 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] ***
2026-03-25 02:11:51.100425 | orchestrator | Wednesday 25 March 2026 02:11:45 +0000 (0:00:00.551) 0:06:07.654 *******
2026-03-25 02:11:51.100431 | orchestrator | ok: [testbed-manager]
2026-03-25 02:11:51.100438 | orchestrator | changed: [testbed-node-3]
2026-03-25 02:11:51.100445 | orchestrator | changed: [testbed-node-4]
2026-03-25 02:11:51.100451 | orchestrator | changed: [testbed-node-5]
2026-03-25 02:11:51.100458 | orchestrator | changed: [testbed-node-0]
2026-03-25 02:11:51.100465 | orchestrator | changed: [testbed-node-2]
2026-03-25 02:11:51.100471 | orchestrator | changed: [testbed-node-1]
2026-03-25 02:11:51.100478 | orchestrator |
2026-03-25 02:11:51.100485 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] ***
2026-03-25 02:11:51.100492 | orchestrator | Wednesday 25 March 2026 02:11:49 +0000 (0:00:04.081) 0:06:11.736 *******
2026-03-25 02:11:51.100499 | orchestrator | skipping: [testbed-manager]
2026-03-25 02:11:51.100506 | orchestrator | skipping: [testbed-node-3]
2026-03-25 02:11:51.100512 | orchestrator | skipping: [testbed-node-4]
2026-03-25 02:11:51.100519 | orchestrator | skipping: [testbed-node-5]
2026-03-25 02:11:51.100525 | orchestrator | skipping: [testbed-node-0]
2026-03-25 02:11:51.100532 | orchestrator | skipping: [testbed-node-1]
2026-03-25 02:11:51.100538 | orchestrator | skipping: [testbed-node-2]
2026-03-25 02:11:51.100545 | orchestrator |
2026-03-25 02:11:51.100552 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] ***
2026-03-25 02:11:51.100560 | orchestrator | Wednesday 25 March 2026 02:11:50 +0000 (0:00:00.562) 0:06:12.299 *******
2026-03-25 02:11:51.100566 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)
2026-03-25 02:11:51.100573 | orchestrator | skipping: [testbed-manager] => (item=python-docker)
2026-03-25 02:11:51.100580 | orchestrator | skipping: [testbed-manager]
2026-03-25 02:11:51.100586 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)
2026-03-25 02:11:51.100593 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)
2026-03-25 02:11:51.100599 | orchestrator | skipping: [testbed-node-3]
2026-03-25 02:11:51.100606 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)
2026-03-25 02:11:51.100613 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)
2026-03-25 02:11:51.100619 | orchestrator | skipping: [testbed-node-4]
2026-03-25 02:11:51.100631 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)
2026-03-25 02:12:10.989844 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)
2026-03-25 02:12:10.989958 | orchestrator | skipping: [testbed-node-5]
2026-03-25 02:12:10.990072 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)
2026-03-25 02:12:10.990087 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)
2026-03-25 02:12:10.990099 | orchestrator | skipping: [testbed-node-0]
2026-03-25 02:12:10.990140 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)
2026-03-25 02:12:10.990153 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)
2026-03-25 02:12:10.990164 | orchestrator | skipping: [testbed-node-1]
2026-03-25 02:12:10.990175 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)
2026-03-25 02:12:10.990186 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)
2026-03-25 02:12:10.990198 | orchestrator | skipping: [testbed-node-2]
2026-03-25 02:12:10.990210 | orchestrator |
2026-03-25 02:12:10.990224 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] ***
2026-03-25 02:12:10.990237 | orchestrator | Wednesday 25 March 2026 02:11:51 +0000 (0:00:00.837) 0:06:13.137 *******
2026-03-25 02:12:10.990248 | orchestrator | skipping: [testbed-manager]
2026-03-25 02:12:10.990259 | orchestrator | skipping: [testbed-node-3]
2026-03-25 02:12:10.990270 | orchestrator | skipping: [testbed-node-4]
2026-03-25 02:12:10.990281 | orchestrator | skipping: [testbed-node-5]
2026-03-25 02:12:10.990293 | orchestrator | skipping: [testbed-node-0]
2026-03-25 02:12:10.990305 | orchestrator | skipping: [testbed-node-1]
2026-03-25 02:12:10.990316 | orchestrator | skipping: [testbed-node-2]
2026-03-25 02:12:10.990327 | orchestrator |
2026-03-25 02:12:10.990338 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] ***
2026-03-25 02:12:10.990350 | orchestrator | Wednesday 25 March 2026 02:11:51 +0000 (0:00:00.555) 0:06:13.692 *******
2026-03-25 02:12:10.990362 | orchestrator | skipping: [testbed-manager]
2026-03-25 02:12:10.990373 | orchestrator | skipping: [testbed-node-3]
2026-03-25 02:12:10.990385 | orchestrator | skipping: [testbed-node-4]
2026-03-25 02:12:10.990397 | orchestrator | skipping: [testbed-node-5]
2026-03-25 02:12:10.990408 | orchestrator | skipping: [testbed-node-0]
2026-03-25 02:12:10.990419 | orchestrator | skipping: [testbed-node-1]
2026-03-25 02:12:10.990430 | orchestrator | skipping: [testbed-node-2]
2026-03-25 02:12:10.990442 | orchestrator |
2026-03-25 02:12:10.990453 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] *******
2026-03-25 02:12:10.990465 | orchestrator | Wednesday 25 March 2026 02:11:52 +0000 (0:00:00.571) 0:06:14.263 *******
2026-03-25 02:12:10.990476 | orchestrator | skipping: [testbed-manager]
2026-03-25 02:12:10.990488 | orchestrator | skipping: [testbed-node-3]
2026-03-25 02:12:10.990499 | orchestrator | skipping: [testbed-node-4]
2026-03-25 02:12:10.990511 | orchestrator | skipping:
[testbed-node-5] 2026-03-25 02:12:10.990522 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:12:10.990533 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:12:10.990545 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:12:10.990556 | orchestrator | 2026-03-25 02:12:10.990568 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] ***** 2026-03-25 02:12:10.990579 | orchestrator | Wednesday 25 March 2026 02:11:53 +0000 (0:00:00.560) 0:06:14.823 ******* 2026-03-25 02:12:10.990591 | orchestrator | ok: [testbed-manager] 2026-03-25 02:12:10.990600 | orchestrator | ok: [testbed-node-3] 2026-03-25 02:12:10.990606 | orchestrator | ok: [testbed-node-4] 2026-03-25 02:12:10.990612 | orchestrator | ok: [testbed-node-5] 2026-03-25 02:12:10.990618 | orchestrator | ok: [testbed-node-0] 2026-03-25 02:12:10.990625 | orchestrator | ok: [testbed-node-1] 2026-03-25 02:12:10.990631 | orchestrator | ok: [testbed-node-2] 2026-03-25 02:12:10.990637 | orchestrator | 2026-03-25 02:12:10.990643 | orchestrator | TASK [osism.services.docker : Include config tasks] **************************** 2026-03-25 02:12:10.990650 | orchestrator | Wednesday 25 March 2026 02:11:54 +0000 (0:00:01.882) 0:06:16.707 ******* 2026-03-25 02:12:10.990657 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-25 02:12:10.990666 | orchestrator | 2026-03-25 02:12:10.990672 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************ 2026-03-25 02:12:10.990679 | orchestrator | Wednesday 25 March 2026 02:11:55 +0000 (0:00:00.962) 0:06:17.669 ******* 2026-03-25 02:12:10.990696 | orchestrator | ok: [testbed-manager] 2026-03-25 02:12:10.990703 | orchestrator | changed: [testbed-node-3] 2026-03-25 02:12:10.990711 | orchestrator | changed: 
[testbed-node-4] 2026-03-25 02:12:10.990722 | orchestrator | changed: [testbed-node-5] 2026-03-25 02:12:10.990732 | orchestrator | changed: [testbed-node-0] 2026-03-25 02:12:10.990742 | orchestrator | changed: [testbed-node-1] 2026-03-25 02:12:10.990752 | orchestrator | changed: [testbed-node-2] 2026-03-25 02:12:10.990761 | orchestrator | 2026-03-25 02:12:10.990770 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] **************** 2026-03-25 02:12:10.990780 | orchestrator | Wednesday 25 March 2026 02:11:56 +0000 (0:00:00.832) 0:06:18.502 ******* 2026-03-25 02:12:10.990789 | orchestrator | ok: [testbed-manager] 2026-03-25 02:12:10.990799 | orchestrator | changed: [testbed-node-3] 2026-03-25 02:12:10.990809 | orchestrator | changed: [testbed-node-4] 2026-03-25 02:12:10.990820 | orchestrator | changed: [testbed-node-5] 2026-03-25 02:12:10.990830 | orchestrator | changed: [testbed-node-0] 2026-03-25 02:12:10.990841 | orchestrator | changed: [testbed-node-1] 2026-03-25 02:12:10.990852 | orchestrator | changed: [testbed-node-2] 2026-03-25 02:12:10.990861 | orchestrator | 2026-03-25 02:12:10.990873 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] *********************** 2026-03-25 02:12:10.990879 | orchestrator | Wednesday 25 March 2026 02:11:57 +0000 (0:00:00.856) 0:06:19.359 ******* 2026-03-25 02:12:10.990886 | orchestrator | ok: [testbed-manager] 2026-03-25 02:12:10.990892 | orchestrator | changed: [testbed-node-3] 2026-03-25 02:12:10.990898 | orchestrator | changed: [testbed-node-4] 2026-03-25 02:12:10.990904 | orchestrator | changed: [testbed-node-5] 2026-03-25 02:12:10.990910 | orchestrator | changed: [testbed-node-0] 2026-03-25 02:12:10.990916 | orchestrator | changed: [testbed-node-1] 2026-03-25 02:12:10.990922 | orchestrator | changed: [testbed-node-2] 2026-03-25 02:12:10.990929 | orchestrator | 2026-03-25 02:12:10.990935 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay 
file is changed] *** 2026-03-25 02:12:10.990957 | orchestrator | Wednesday 25 March 2026 02:11:59 +0000 (0:00:01.605) 0:06:20.964 ******* 2026-03-25 02:12:10.990963 | orchestrator | skipping: [testbed-manager] 2026-03-25 02:12:10.991016 | orchestrator | ok: [testbed-node-3] 2026-03-25 02:12:10.991025 | orchestrator | ok: [testbed-node-4] 2026-03-25 02:12:10.991032 | orchestrator | ok: [testbed-node-5] 2026-03-25 02:12:10.991038 | orchestrator | ok: [testbed-node-0] 2026-03-25 02:12:10.991044 | orchestrator | ok: [testbed-node-2] 2026-03-25 02:12:10.991050 | orchestrator | ok: [testbed-node-1] 2026-03-25 02:12:10.991057 | orchestrator | 2026-03-25 02:12:10.991063 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ****************** 2026-03-25 02:12:10.991069 | orchestrator | Wednesday 25 March 2026 02:12:00 +0000 (0:00:01.345) 0:06:22.309 ******* 2026-03-25 02:12:10.991075 | orchestrator | ok: [testbed-manager] 2026-03-25 02:12:10.991082 | orchestrator | changed: [testbed-node-3] 2026-03-25 02:12:10.991088 | orchestrator | changed: [testbed-node-4] 2026-03-25 02:12:10.991094 | orchestrator | changed: [testbed-node-5] 2026-03-25 02:12:10.991100 | orchestrator | changed: [testbed-node-0] 2026-03-25 02:12:10.991106 | orchestrator | changed: [testbed-node-1] 2026-03-25 02:12:10.991112 | orchestrator | changed: [testbed-node-2] 2026-03-25 02:12:10.991119 | orchestrator | 2026-03-25 02:12:10.991125 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] ************* 2026-03-25 02:12:10.991131 | orchestrator | Wednesday 25 March 2026 02:12:01 +0000 (0:00:01.341) 0:06:23.651 ******* 2026-03-25 02:12:10.991137 | orchestrator | changed: [testbed-manager] 2026-03-25 02:12:10.991144 | orchestrator | changed: [testbed-node-3] 2026-03-25 02:12:10.991150 | orchestrator | changed: [testbed-node-4] 2026-03-25 02:12:10.991156 | orchestrator | changed: [testbed-node-5] 2026-03-25 02:12:10.991162 | orchestrator | changed: 
[testbed-node-0] 2026-03-25 02:12:10.991168 | orchestrator | changed: [testbed-node-1] 2026-03-25 02:12:10.991174 | orchestrator | changed: [testbed-node-2] 2026-03-25 02:12:10.991180 | orchestrator | 2026-03-25 02:12:10.991194 | orchestrator | TASK [osism.services.docker : Include service tasks] *************************** 2026-03-25 02:12:10.991200 | orchestrator | Wednesday 25 March 2026 02:12:03 +0000 (0:00:01.446) 0:06:25.098 ******* 2026-03-25 02:12:10.991206 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-25 02:12:10.991213 | orchestrator | 2026-03-25 02:12:10.991220 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] *************************** 2026-03-25 02:12:10.991226 | orchestrator | Wednesday 25 March 2026 02:12:04 +0000 (0:00:01.160) 0:06:26.259 ******* 2026-03-25 02:12:10.991232 | orchestrator | ok: [testbed-node-3] 2026-03-25 02:12:10.991238 | orchestrator | ok: [testbed-node-4] 2026-03-25 02:12:10.991245 | orchestrator | ok: [testbed-manager] 2026-03-25 02:12:10.991251 | orchestrator | ok: [testbed-node-5] 2026-03-25 02:12:10.991257 | orchestrator | ok: [testbed-node-0] 2026-03-25 02:12:10.991263 | orchestrator | ok: [testbed-node-1] 2026-03-25 02:12:10.991269 | orchestrator | ok: [testbed-node-2] 2026-03-25 02:12:10.991276 | orchestrator | 2026-03-25 02:12:10.991282 | orchestrator | TASK [osism.services.docker : Manage service] ********************************** 2026-03-25 02:12:10.991288 | orchestrator | Wednesday 25 March 2026 02:12:05 +0000 (0:00:01.351) 0:06:27.611 ******* 2026-03-25 02:12:10.991295 | orchestrator | ok: [testbed-node-3] 2026-03-25 02:12:10.991301 | orchestrator | ok: [testbed-manager] 2026-03-25 02:12:10.991307 | orchestrator | ok: [testbed-node-4] 2026-03-25 02:12:10.991313 | orchestrator | ok: [testbed-node-5] 
2026-03-25 02:12:10.991320 | orchestrator | ok: [testbed-node-0] 2026-03-25 02:12:10.991339 | orchestrator | ok: [testbed-node-1] 2026-03-25 02:12:10.991345 | orchestrator | ok: [testbed-node-2] 2026-03-25 02:12:10.991352 | orchestrator | 2026-03-25 02:12:10.991358 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ******************** 2026-03-25 02:12:10.991364 | orchestrator | Wednesday 25 March 2026 02:12:07 +0000 (0:00:01.167) 0:06:28.778 ******* 2026-03-25 02:12:10.991370 | orchestrator | ok: [testbed-manager] 2026-03-25 02:12:10.991377 | orchestrator | ok: [testbed-node-3] 2026-03-25 02:12:10.991383 | orchestrator | ok: [testbed-node-4] 2026-03-25 02:12:10.991389 | orchestrator | ok: [testbed-node-5] 2026-03-25 02:12:10.991395 | orchestrator | ok: [testbed-node-0] 2026-03-25 02:12:10.991401 | orchestrator | ok: [testbed-node-1] 2026-03-25 02:12:10.991407 | orchestrator | ok: [testbed-node-2] 2026-03-25 02:12:10.991414 | orchestrator | 2026-03-25 02:12:10.991420 | orchestrator | TASK [osism.services.docker : Manage containerd service] *********************** 2026-03-25 02:12:10.991426 | orchestrator | Wednesday 25 March 2026 02:12:08 +0000 (0:00:01.202) 0:06:29.980 ******* 2026-03-25 02:12:10.991432 | orchestrator | ok: [testbed-manager] 2026-03-25 02:12:10.991439 | orchestrator | ok: [testbed-node-3] 2026-03-25 02:12:10.991445 | orchestrator | ok: [testbed-node-4] 2026-03-25 02:12:10.991451 | orchestrator | ok: [testbed-node-5] 2026-03-25 02:12:10.991457 | orchestrator | ok: [testbed-node-0] 2026-03-25 02:12:10.991463 | orchestrator | ok: [testbed-node-1] 2026-03-25 02:12:10.991469 | orchestrator | ok: [testbed-node-2] 2026-03-25 02:12:10.991475 | orchestrator | 2026-03-25 02:12:10.991482 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] ************************* 2026-03-25 02:12:10.991488 | orchestrator | Wednesday 25 March 2026 02:12:09 +0000 (0:00:01.368) 0:06:31.349 ******* 2026-03-25 02:12:10.991494 | 
orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-25 02:12:10.991500 | orchestrator | 2026-03-25 02:12:10.991507 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-03-25 02:12:10.991513 | orchestrator | Wednesday 25 March 2026 02:12:10 +0000 (0:00:00.999) 0:06:32.349 ******* 2026-03-25 02:12:10.991519 | orchestrator | 2026-03-25 02:12:10.991526 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-03-25 02:12:10.991536 | orchestrator | Wednesday 25 March 2026 02:12:10 +0000 (0:00:00.045) 0:06:32.395 ******* 2026-03-25 02:12:10.991543 | orchestrator | 2026-03-25 02:12:10.991549 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-03-25 02:12:10.991555 | orchestrator | Wednesday 25 March 2026 02:12:10 +0000 (0:00:00.051) 0:06:32.446 ******* 2026-03-25 02:12:10.991562 | orchestrator | 2026-03-25 02:12:10.991568 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-03-25 02:12:10.991579 | orchestrator | Wednesday 25 March 2026 02:12:10 +0000 (0:00:00.044) 0:06:32.491 ******* 2026-03-25 02:12:37.123555 | orchestrator | 2026-03-25 02:12:37.123709 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-03-25 02:12:37.123746 | orchestrator | Wednesday 25 March 2026 02:12:10 +0000 (0:00:00.044) 0:06:32.535 ******* 2026-03-25 02:12:37.123827 | orchestrator | 2026-03-25 02:12:37.123851 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-03-25 02:12:37.123869 | orchestrator | Wednesday 25 March 2026 02:12:10 +0000 (0:00:00.055) 0:06:32.590 ******* 2026-03-25 02:12:37.123886 | orchestrator | 
2026-03-25 02:12:37.123904 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-03-25 02:12:37.123921 | orchestrator | Wednesday 25 March 2026 02:12:10 +0000 (0:00:00.066) 0:06:32.656 ******* 2026-03-25 02:12:37.123938 | orchestrator | 2026-03-25 02:12:37.123958 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-03-25 02:12:37.123977 | orchestrator | Wednesday 25 March 2026 02:12:10 +0000 (0:00:00.046) 0:06:32.703 ******* 2026-03-25 02:12:37.123994 | orchestrator | ok: [testbed-node-0] 2026-03-25 02:12:37.124088 | orchestrator | ok: [testbed-node-1] 2026-03-25 02:12:37.124108 | orchestrator | ok: [testbed-node-2] 2026-03-25 02:12:37.124126 | orchestrator | 2026-03-25 02:12:37.124146 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] ************* 2026-03-25 02:12:37.124164 | orchestrator | Wednesday 25 March 2026 02:12:12 +0000 (0:00:01.100) 0:06:33.804 ******* 2026-03-25 02:12:37.124183 | orchestrator | changed: [testbed-manager] 2026-03-25 02:12:37.124202 | orchestrator | changed: [testbed-node-3] 2026-03-25 02:12:37.124222 | orchestrator | changed: [testbed-node-4] 2026-03-25 02:12:37.124241 | orchestrator | changed: [testbed-node-0] 2026-03-25 02:12:37.124259 | orchestrator | changed: [testbed-node-2] 2026-03-25 02:12:37.124270 | orchestrator | changed: [testbed-node-5] 2026-03-25 02:12:37.124281 | orchestrator | changed: [testbed-node-1] 2026-03-25 02:12:37.124292 | orchestrator | 2026-03-25 02:12:37.124304 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart logrotate service] *********** 2026-03-25 02:12:37.124315 | orchestrator | Wednesday 25 March 2026 02:12:13 +0000 (0:00:01.582) 0:06:35.386 ******* 2026-03-25 02:12:37.124326 | orchestrator | changed: [testbed-node-3] 2026-03-25 02:12:37.124337 | orchestrator | changed: [testbed-manager] 2026-03-25 02:12:37.124348 | orchestrator | changed: [testbed-node-4] 
2026-03-25 02:12:37.124359 | orchestrator | changed: [testbed-node-5] 2026-03-25 02:12:37.124370 | orchestrator | changed: [testbed-node-0] 2026-03-25 02:12:37.124384 | orchestrator | changed: [testbed-node-1] 2026-03-25 02:12:37.124403 | orchestrator | changed: [testbed-node-2] 2026-03-25 02:12:37.124420 | orchestrator | 2026-03-25 02:12:37.124438 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] *************** 2026-03-25 02:12:37.124456 | orchestrator | Wednesday 25 March 2026 02:12:14 +0000 (0:00:01.181) 0:06:36.568 ******* 2026-03-25 02:12:37.124475 | orchestrator | skipping: [testbed-manager] 2026-03-25 02:12:37.124492 | orchestrator | changed: [testbed-node-3] 2026-03-25 02:12:37.124510 | orchestrator | changed: [testbed-node-4] 2026-03-25 02:12:37.124526 | orchestrator | changed: [testbed-node-0] 2026-03-25 02:12:37.124545 | orchestrator | changed: [testbed-node-5] 2026-03-25 02:12:37.124562 | orchestrator | changed: [testbed-node-2] 2026-03-25 02:12:37.124581 | orchestrator | changed: [testbed-node-1] 2026-03-25 02:12:37.124600 | orchestrator | 2026-03-25 02:12:37.124618 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] **** 2026-03-25 02:12:37.124636 | orchestrator | Wednesday 25 March 2026 02:12:17 +0000 (0:00:02.416) 0:06:38.985 ******* 2026-03-25 02:12:37.124705 | orchestrator | skipping: [testbed-node-3] 2026-03-25 02:12:37.124726 | orchestrator | 2026-03-25 02:12:37.124760 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************ 2026-03-25 02:12:37.124779 | orchestrator | Wednesday 25 March 2026 02:12:17 +0000 (0:00:00.114) 0:06:39.099 ******* 2026-03-25 02:12:37.124798 | orchestrator | ok: [testbed-manager] 2026-03-25 02:12:37.124816 | orchestrator | changed: [testbed-node-3] 2026-03-25 02:12:37.124834 | orchestrator | changed: [testbed-node-4] 2026-03-25 02:12:37.124852 | orchestrator | changed: [testbed-node-0] 2026-03-25 
02:12:37.124870 | orchestrator | changed: [testbed-node-5] 2026-03-25 02:12:37.124887 | orchestrator | changed: [testbed-node-1] 2026-03-25 02:12:37.124905 | orchestrator | changed: [testbed-node-2] 2026-03-25 02:12:37.124923 | orchestrator | 2026-03-25 02:12:37.124941 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] *** 2026-03-25 02:12:37.124960 | orchestrator | Wednesday 25 March 2026 02:12:18 +0000 (0:00:00.986) 0:06:40.086 ******* 2026-03-25 02:12:37.124979 | orchestrator | skipping: [testbed-manager] 2026-03-25 02:12:37.124997 | orchestrator | skipping: [testbed-node-3] 2026-03-25 02:12:37.125045 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:12:37.125062 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:12:37.125079 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:12:37.125095 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:12:37.125111 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:12:37.125130 | orchestrator | 2026-03-25 02:12:37.125147 | orchestrator | TASK [osism.services.docker : Include facts tasks] ***************************** 2026-03-25 02:12:37.125163 | orchestrator | Wednesday 25 March 2026 02:12:18 +0000 (0:00:00.578) 0:06:40.664 ******* 2026-03-25 02:12:37.125182 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-25 02:12:37.125201 | orchestrator | 2026-03-25 02:12:37.125217 | orchestrator | TASK [osism.services.docker : Create facts directory] ************************** 2026-03-25 02:12:37.125232 | orchestrator | Wednesday 25 March 2026 02:12:20 +0000 (0:00:01.168) 0:06:41.832 ******* 2026-03-25 02:12:37.125248 | orchestrator | ok: [testbed-manager] 2026-03-25 02:12:37.125263 | orchestrator | ok: [testbed-node-3] 2026-03-25 02:12:37.125279 | orchestrator 
| ok: [testbed-node-4] 2026-03-25 02:12:37.125295 | orchestrator | ok: [testbed-node-5] 2026-03-25 02:12:37.125315 | orchestrator | ok: [testbed-node-0] 2026-03-25 02:12:37.125336 | orchestrator | ok: [testbed-node-1] 2026-03-25 02:12:37.125353 | orchestrator | ok: [testbed-node-2] 2026-03-25 02:12:37.125370 | orchestrator | 2026-03-25 02:12:37.125387 | orchestrator | TASK [osism.services.docker : Copy docker fact files] ************************** 2026-03-25 02:12:37.125404 | orchestrator | Wednesday 25 March 2026 02:12:20 +0000 (0:00:00.863) 0:06:42.696 ******* 2026-03-25 02:12:37.125421 | orchestrator | ok: [testbed-manager] => (item=docker_containers) 2026-03-25 02:12:37.125472 | orchestrator | changed: [testbed-node-3] => (item=docker_containers) 2026-03-25 02:12:37.125491 | orchestrator | changed: [testbed-node-4] => (item=docker_containers) 2026-03-25 02:12:37.125508 | orchestrator | changed: [testbed-node-5] => (item=docker_containers) 2026-03-25 02:12:37.125526 | orchestrator | changed: [testbed-node-0] => (item=docker_containers) 2026-03-25 02:12:37.125543 | orchestrator | changed: [testbed-node-2] => (item=docker_containers) 2026-03-25 02:12:37.125560 | orchestrator | changed: [testbed-node-1] => (item=docker_containers) 2026-03-25 02:12:37.125577 | orchestrator | ok: [testbed-manager] => (item=docker_images) 2026-03-25 02:12:37.125594 | orchestrator | changed: [testbed-node-4] => (item=docker_images) 2026-03-25 02:12:37.125611 | orchestrator | changed: [testbed-node-3] => (item=docker_images) 2026-03-25 02:12:37.125628 | orchestrator | changed: [testbed-node-5] => (item=docker_images) 2026-03-25 02:12:37.125646 | orchestrator | changed: [testbed-node-0] => (item=docker_images) 2026-03-25 02:12:37.125682 | orchestrator | changed: [testbed-node-2] => (item=docker_images) 2026-03-25 02:12:37.125699 | orchestrator | changed: [testbed-node-1] => (item=docker_images) 2026-03-25 02:12:37.125716 | orchestrator | 2026-03-25 02:12:37.125733 | orchestrator | TASK 
[osism.commons.docker_compose : This install type is not supported] ******* 2026-03-25 02:12:37.125751 | orchestrator | Wednesday 25 March 2026 02:12:23 +0000 (0:00:02.506) 0:06:45.202 ******* 2026-03-25 02:12:37.125768 | orchestrator | skipping: [testbed-manager] 2026-03-25 02:12:37.125785 | orchestrator | skipping: [testbed-node-3] 2026-03-25 02:12:37.125801 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:12:37.125819 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:12:37.125835 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:12:37.125852 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:12:37.125869 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:12:37.125885 | orchestrator | 2026-03-25 02:12:37.125902 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] *** 2026-03-25 02:12:37.125918 | orchestrator | Wednesday 25 March 2026 02:12:24 +0000 (0:00:00.768) 0:06:45.971 ******* 2026-03-25 02:12:37.125937 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-25 02:12:37.125957 | orchestrator | 2026-03-25 02:12:37.125976 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] *** 2026-03-25 02:12:37.125995 | orchestrator | Wednesday 25 March 2026 02:12:25 +0000 (0:00:00.940) 0:06:46.912 ******* 2026-03-25 02:12:37.126111 | orchestrator | ok: [testbed-manager] 2026-03-25 02:12:37.126132 | orchestrator | ok: [testbed-node-3] 2026-03-25 02:12:37.126150 | orchestrator | ok: [testbed-node-4] 2026-03-25 02:12:37.126169 | orchestrator | ok: [testbed-node-5] 2026-03-25 02:12:37.126187 | orchestrator | ok: [testbed-node-0] 2026-03-25 02:12:37.126206 | orchestrator | ok: [testbed-node-1] 2026-03-25 02:12:37.126224 | orchestrator | ok: 
[testbed-node-2] 2026-03-25 02:12:37.126242 | orchestrator | 2026-03-25 02:12:37.126260 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ****** 2026-03-25 02:12:37.126279 | orchestrator | Wednesday 25 March 2026 02:12:26 +0000 (0:00:00.862) 0:06:47.775 ******* 2026-03-25 02:12:37.126309 | orchestrator | ok: [testbed-manager] 2026-03-25 02:12:37.126327 | orchestrator | ok: [testbed-node-3] 2026-03-25 02:12:37.126345 | orchestrator | ok: [testbed-node-4] 2026-03-25 02:12:37.126364 | orchestrator | ok: [testbed-node-5] 2026-03-25 02:12:37.126379 | orchestrator | ok: [testbed-node-0] 2026-03-25 02:12:37.126397 | orchestrator | ok: [testbed-node-1] 2026-03-25 02:12:37.126415 | orchestrator | ok: [testbed-node-2] 2026-03-25 02:12:37.126432 | orchestrator | 2026-03-25 02:12:37.126451 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] ************* 2026-03-25 02:12:37.126469 | orchestrator | Wednesday 25 March 2026 02:12:27 +0000 (0:00:01.063) 0:06:48.839 ******* 2026-03-25 02:12:37.126487 | orchestrator | skipping: [testbed-manager] 2026-03-25 02:12:37.126504 | orchestrator | skipping: [testbed-node-3] 2026-03-25 02:12:37.126523 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:12:37.126541 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:12:37.126560 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:12:37.126575 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:12:37.126594 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:12:37.126612 | orchestrator | 2026-03-25 02:12:37.126629 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] ********* 2026-03-25 02:12:37.126646 | orchestrator | Wednesday 25 March 2026 02:12:27 +0000 (0:00:00.578) 0:06:49.418 ******* 2026-03-25 02:12:37.126664 | orchestrator | ok: [testbed-manager] 2026-03-25 02:12:37.126681 | orchestrator | ok: [testbed-node-3] 2026-03-25 02:12:37.126696 | 
orchestrator | ok: [testbed-node-4] 2026-03-25 02:12:37.126711 | orchestrator | ok: [testbed-node-5] 2026-03-25 02:12:37.126730 | orchestrator | ok: [testbed-node-0] 2026-03-25 02:12:37.126764 | orchestrator | ok: [testbed-node-2] 2026-03-25 02:12:37.126784 | orchestrator | ok: [testbed-node-1] 2026-03-25 02:12:37.126803 | orchestrator | 2026-03-25 02:12:37.126821 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] *************** 2026-03-25 02:12:37.126840 | orchestrator | Wednesday 25 March 2026 02:12:29 +0000 (0:00:01.397) 0:06:50.815 ******* 2026-03-25 02:12:37.126858 | orchestrator | skipping: [testbed-manager] 2026-03-25 02:12:37.126878 | orchestrator | skipping: [testbed-node-3] 2026-03-25 02:12:37.126896 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:12:37.126914 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:12:37.126933 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:12:37.126951 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:12:37.126969 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:12:37.126985 | orchestrator | 2026-03-25 02:12:37.126996 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] **** 2026-03-25 02:12:37.127030 | orchestrator | Wednesday 25 March 2026 02:12:29 +0000 (0:00:00.538) 0:06:51.354 ******* 2026-03-25 02:12:37.127041 | orchestrator | ok: [testbed-manager] 2026-03-25 02:12:37.127051 | orchestrator | changed: [testbed-node-3] 2026-03-25 02:12:37.127060 | orchestrator | changed: [testbed-node-4] 2026-03-25 02:12:37.127070 | orchestrator | changed: [testbed-node-0] 2026-03-25 02:12:37.127080 | orchestrator | changed: [testbed-node-2] 2026-03-25 02:12:37.127090 | orchestrator | changed: [testbed-node-5] 2026-03-25 02:12:37.127114 | orchestrator | changed: [testbed-node-1] 2026-03-25 02:13:10.206825 | orchestrator | 2026-03-25 02:13:10.206976 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target 
systemd file] *********** 2026-03-25 02:13:10.207006 | orchestrator | Wednesday 25 March 2026 02:12:37 +0000 (0:00:07.490) 0:06:58.844 ******* 2026-03-25 02:13:10.207025 | orchestrator | ok: [testbed-manager] 2026-03-25 02:13:10.207038 | orchestrator | changed: [testbed-node-3] 2026-03-25 02:13:10.207101 | orchestrator | changed: [testbed-node-4] 2026-03-25 02:13:10.207113 | orchestrator | changed: [testbed-node-5] 2026-03-25 02:13:10.207124 | orchestrator | changed: [testbed-node-0] 2026-03-25 02:13:10.207135 | orchestrator | changed: [testbed-node-1] 2026-03-25 02:13:10.207147 | orchestrator | changed: [testbed-node-2] 2026-03-25 02:13:10.207158 | orchestrator | 2026-03-25 02:13:10.207169 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] ********************** 2026-03-25 02:13:10.207181 | orchestrator | Wednesday 25 March 2026 02:12:38 +0000 (0:00:01.574) 0:07:00.419 ******* 2026-03-25 02:13:10.207192 | orchestrator | ok: [testbed-manager] 2026-03-25 02:13:10.207203 | orchestrator | changed: [testbed-node-3] 2026-03-25 02:13:10.207214 | orchestrator | changed: [testbed-node-4] 2026-03-25 02:13:10.207225 | orchestrator | changed: [testbed-node-5] 2026-03-25 02:13:10.207235 | orchestrator | changed: [testbed-node-0] 2026-03-25 02:13:10.207246 | orchestrator | changed: [testbed-node-1] 2026-03-25 02:13:10.207257 | orchestrator | changed: [testbed-node-2] 2026-03-25 02:13:10.207268 | orchestrator | 2026-03-25 02:13:10.207279 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] **** 2026-03-25 02:13:10.207290 | orchestrator | Wednesday 25 March 2026 02:12:40 +0000 (0:00:01.679) 0:07:02.099 ******* 2026-03-25 02:13:10.207301 | orchestrator | ok: [testbed-manager] 2026-03-25 02:13:10.207312 | orchestrator | changed: [testbed-node-3] 2026-03-25 02:13:10.207323 | orchestrator | changed: [testbed-node-4] 2026-03-25 02:13:10.207334 | orchestrator | changed: [testbed-node-5] 2026-03-25 02:13:10.207346 | 
orchestrator | changed: [testbed-node-0] 2026-03-25 02:13:10.207359 | orchestrator | changed: [testbed-node-1] 2026-03-25 02:13:10.207371 | orchestrator | changed: [testbed-node-2] 2026-03-25 02:13:10.207384 | orchestrator | 2026-03-25 02:13:10.207396 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-03-25 02:13:10.207408 | orchestrator | Wednesday 25 March 2026 02:12:42 +0000 (0:00:01.696) 0:07:03.795 ******* 2026-03-25 02:13:10.207421 | orchestrator | ok: [testbed-manager] 2026-03-25 02:13:10.207433 | orchestrator | ok: [testbed-node-3] 2026-03-25 02:13:10.207446 | orchestrator | ok: [testbed-node-4] 2026-03-25 02:13:10.207485 | orchestrator | ok: [testbed-node-5] 2026-03-25 02:13:10.207498 | orchestrator | ok: [testbed-node-0] 2026-03-25 02:13:10.207510 | orchestrator | ok: [testbed-node-1] 2026-03-25 02:13:10.207522 | orchestrator | ok: [testbed-node-2] 2026-03-25 02:13:10.207534 | orchestrator | 2026-03-25 02:13:10.207547 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-03-25 02:13:10.207559 | orchestrator | Wednesday 25 March 2026 02:12:42 +0000 (0:00:00.893) 0:07:04.689 ******* 2026-03-25 02:13:10.207571 | orchestrator | skipping: [testbed-manager] 2026-03-25 02:13:10.207584 | orchestrator | skipping: [testbed-node-3] 2026-03-25 02:13:10.207597 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:13:10.207610 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:13:10.207622 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:13:10.207634 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:13:10.207645 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:13:10.207656 | orchestrator | 2026-03-25 02:13:10.207667 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] ***** 2026-03-25 02:13:10.207678 | orchestrator | Wednesday 25 March 2026 02:12:44 +0000 (0:00:01.137) 0:07:05.826 ******* 
2026-03-25 02:13:10.207689 | orchestrator | skipping: [testbed-manager] 2026-03-25 02:13:10.207700 | orchestrator | skipping: [testbed-node-3] 2026-03-25 02:13:10.207711 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:13:10.207722 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:13:10.207732 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:13:10.207743 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:13:10.207754 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:13:10.207764 | orchestrator | 2026-03-25 02:13:10.207775 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ****** 2026-03-25 02:13:10.207786 | orchestrator | Wednesday 25 March 2026 02:12:44 +0000 (0:00:00.593) 0:07:06.420 ******* 2026-03-25 02:13:10.207797 | orchestrator | ok: [testbed-manager] 2026-03-25 02:13:10.207824 | orchestrator | ok: [testbed-node-3] 2026-03-25 02:13:10.207836 | orchestrator | ok: [testbed-node-4] 2026-03-25 02:13:10.207847 | orchestrator | ok: [testbed-node-5] 2026-03-25 02:13:10.207858 | orchestrator | ok: [testbed-node-0] 2026-03-25 02:13:10.207868 | orchestrator | ok: [testbed-node-1] 2026-03-25 02:13:10.207879 | orchestrator | ok: [testbed-node-2] 2026-03-25 02:13:10.207890 | orchestrator | 2026-03-25 02:13:10.207901 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] *** 2026-03-25 02:13:10.207912 | orchestrator | Wednesday 25 March 2026 02:12:45 +0000 (0:00:00.572) 0:07:06.993 ******* 2026-03-25 02:13:10.207922 | orchestrator | ok: [testbed-manager] 2026-03-25 02:13:10.207933 | orchestrator | ok: [testbed-node-3] 2026-03-25 02:13:10.207944 | orchestrator | ok: [testbed-node-4] 2026-03-25 02:13:10.207956 | orchestrator | ok: [testbed-node-5] 2026-03-25 02:13:10.207966 | orchestrator | ok: [testbed-node-0] 2026-03-25 02:13:10.207977 | orchestrator | ok: [testbed-node-1] 2026-03-25 02:13:10.207988 | orchestrator | ok: [testbed-node-2] 2026-03-25 
02:13:10.207999 | orchestrator | 2026-03-25 02:13:10.208010 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] *** 2026-03-25 02:13:10.208021 | orchestrator | Wednesday 25 March 2026 02:12:45 +0000 (0:00:00.712) 0:07:07.705 ******* 2026-03-25 02:13:10.208032 | orchestrator | ok: [testbed-manager] 2026-03-25 02:13:10.208043 | orchestrator | ok: [testbed-node-3] 2026-03-25 02:13:10.208075 | orchestrator | ok: [testbed-node-4] 2026-03-25 02:13:10.208086 | orchestrator | ok: [testbed-node-5] 2026-03-25 02:13:10.208096 | orchestrator | ok: [testbed-node-0] 2026-03-25 02:13:10.208107 | orchestrator | ok: [testbed-node-1] 2026-03-25 02:13:10.208118 | orchestrator | ok: [testbed-node-2] 2026-03-25 02:13:10.208128 | orchestrator | 2026-03-25 02:13:10.208139 | orchestrator | TASK [osism.services.chrony : Populate service facts] ************************** 2026-03-25 02:13:10.208150 | orchestrator | Wednesday 25 March 2026 02:12:46 +0000 (0:00:00.795) 0:07:08.501 ******* 2026-03-25 02:13:10.208161 | orchestrator | ok: [testbed-manager] 2026-03-25 02:13:10.208172 | orchestrator | ok: [testbed-node-4] 2026-03-25 02:13:10.208192 | orchestrator | ok: [testbed-node-3] 2026-03-25 02:13:10.208202 | orchestrator | ok: [testbed-node-0] 2026-03-25 02:13:10.208213 | orchestrator | ok: [testbed-node-2] 2026-03-25 02:13:10.208224 | orchestrator | ok: [testbed-node-1] 2026-03-25 02:13:10.208235 | orchestrator | ok: [testbed-node-5] 2026-03-25 02:13:10.208245 | orchestrator | 2026-03-25 02:13:10.208276 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************ 2026-03-25 02:13:10.208288 | orchestrator | Wednesday 25 March 2026 02:12:52 +0000 (0:00:06.018) 0:07:14.519 ******* 2026-03-25 02:13:10.208299 | orchestrator | skipping: [testbed-manager] 2026-03-25 02:13:10.208310 | orchestrator | skipping: [testbed-node-3] 2026-03-25 02:13:10.208321 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:13:10.208332 
| orchestrator | skipping: [testbed-node-5] 2026-03-25 02:13:10.208342 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:13:10.208353 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:13:10.208364 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:13:10.208375 | orchestrator | 2026-03-25 02:13:10.208386 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] ***** 2026-03-25 02:13:10.208397 | orchestrator | Wednesday 25 March 2026 02:12:53 +0000 (0:00:00.564) 0:07:15.083 ******* 2026-03-25 02:13:10.208410 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-25 02:13:10.208423 | orchestrator | 2026-03-25 02:13:10.208434 | orchestrator | TASK [osism.services.chrony : Install package] ********************************* 2026-03-25 02:13:10.208445 | orchestrator | Wednesday 25 March 2026 02:12:54 +0000 (0:00:01.112) 0:07:16.195 ******* 2026-03-25 02:13:10.208456 | orchestrator | ok: [testbed-node-3] 2026-03-25 02:13:10.208467 | orchestrator | ok: [testbed-manager] 2026-03-25 02:13:10.208478 | orchestrator | ok: [testbed-node-4] 2026-03-25 02:13:10.208488 | orchestrator | ok: [testbed-node-5] 2026-03-25 02:13:10.208499 | orchestrator | ok: [testbed-node-0] 2026-03-25 02:13:10.208510 | orchestrator | ok: [testbed-node-2] 2026-03-25 02:13:10.208521 | orchestrator | ok: [testbed-node-1] 2026-03-25 02:13:10.208532 | orchestrator | 2026-03-25 02:13:10.208543 | orchestrator | TASK [osism.services.chrony : Manage chrony service] *************************** 2026-03-25 02:13:10.208554 | orchestrator | Wednesday 25 March 2026 02:12:56 +0000 (0:00:01.828) 0:07:18.024 ******* 2026-03-25 02:13:10.208564 | orchestrator | ok: [testbed-manager] 2026-03-25 02:13:10.208576 | orchestrator | ok: [testbed-node-3] 2026-03-25 
02:13:10.208586 | orchestrator | ok: [testbed-node-4] 2026-03-25 02:13:10.208597 | orchestrator | ok: [testbed-node-5] 2026-03-25 02:13:10.208608 | orchestrator | ok: [testbed-node-0] 2026-03-25 02:13:10.208618 | orchestrator | ok: [testbed-node-1] 2026-03-25 02:13:10.208629 | orchestrator | ok: [testbed-node-2] 2026-03-25 02:13:10.208640 | orchestrator | 2026-03-25 02:13:10.208651 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] ************** 2026-03-25 02:13:10.208662 | orchestrator | Wednesday 25 March 2026 02:12:57 +0000 (0:00:01.150) 0:07:19.175 ******* 2026-03-25 02:13:10.208672 | orchestrator | ok: [testbed-manager] 2026-03-25 02:13:10.208683 | orchestrator | ok: [testbed-node-3] 2026-03-25 02:13:10.208694 | orchestrator | ok: [testbed-node-4] 2026-03-25 02:13:10.208704 | orchestrator | ok: [testbed-node-5] 2026-03-25 02:13:10.208715 | orchestrator | ok: [testbed-node-0] 2026-03-25 02:13:10.208726 | orchestrator | ok: [testbed-node-1] 2026-03-25 02:13:10.208737 | orchestrator | ok: [testbed-node-2] 2026-03-25 02:13:10.208748 | orchestrator | 2026-03-25 02:13:10.208758 | orchestrator | TASK [osism.services.chrony : Copy configuration file] ************************* 2026-03-25 02:13:10.208769 | orchestrator | Wednesday 25 March 2026 02:12:58 +0000 (0:00:00.829) 0:07:20.004 ******* 2026-03-25 02:13:10.208786 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-03-25 02:13:10.208798 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-03-25 02:13:10.208816 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-03-25 02:13:10.208828 | orchestrator | changed: [testbed-node-5] => 
(item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-03-25 02:13:10.208839 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-03-25 02:13:10.208850 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-03-25 02:13:10.208860 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-03-25 02:13:10.208871 | orchestrator | 2026-03-25 02:13:10.208882 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ****** 2026-03-25 02:13:10.208893 | orchestrator | Wednesday 25 March 2026 02:13:00 +0000 (0:00:01.907) 0:07:21.912 ******* 2026-03-25 02:13:10.208904 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-25 02:13:10.208915 | orchestrator | 2026-03-25 02:13:10.208926 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] **************************** 2026-03-25 02:13:10.208937 | orchestrator | Wednesday 25 March 2026 02:13:01 +0000 (0:00:00.873) 0:07:22.785 ******* 2026-03-25 02:13:10.208948 | orchestrator | changed: [testbed-node-3] 2026-03-25 02:13:10.208959 | orchestrator | changed: [testbed-node-0] 2026-03-25 02:13:10.208970 | orchestrator | changed: [testbed-node-4] 2026-03-25 02:13:10.208981 | orchestrator | changed: [testbed-node-2] 2026-03-25 02:13:10.208992 | orchestrator | changed: [testbed-node-5] 2026-03-25 02:13:10.209003 | orchestrator | changed: [testbed-node-1] 2026-03-25 02:13:10.209014 | orchestrator | changed: 
[testbed-manager] 2026-03-25 02:13:10.209025 | orchestrator | 2026-03-25 02:13:10.209042 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] ***************************** 2026-03-25 02:13:42.082795 | orchestrator | Wednesday 25 March 2026 02:13:10 +0000 (0:00:09.143) 0:07:31.929 ******* 2026-03-25 02:13:42.082912 | orchestrator | ok: [testbed-manager] 2026-03-25 02:13:42.082929 | orchestrator | ok: [testbed-node-3] 2026-03-25 02:13:42.082941 | orchestrator | ok: [testbed-node-4] 2026-03-25 02:13:42.082952 | orchestrator | ok: [testbed-node-5] 2026-03-25 02:13:42.082963 | orchestrator | ok: [testbed-node-0] 2026-03-25 02:13:42.082974 | orchestrator | ok: [testbed-node-1] 2026-03-25 02:13:42.082985 | orchestrator | ok: [testbed-node-2] 2026-03-25 02:13:42.082996 | orchestrator | 2026-03-25 02:13:42.083008 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] ********* 2026-03-25 02:13:42.083019 | orchestrator | Wednesday 25 March 2026 02:13:12 +0000 (0:00:02.075) 0:07:34.005 ******* 2026-03-25 02:13:42.083031 | orchestrator | ok: [testbed-node-3] 2026-03-25 02:13:42.083042 | orchestrator | ok: [testbed-node-4] 2026-03-25 02:13:42.083053 | orchestrator | ok: [testbed-node-5] 2026-03-25 02:13:42.083063 | orchestrator | ok: [testbed-node-0] 2026-03-25 02:13:42.083074 | orchestrator | ok: [testbed-node-1] 2026-03-25 02:13:42.083116 | orchestrator | ok: [testbed-node-2] 2026-03-25 02:13:42.083129 | orchestrator | 2026-03-25 02:13:42.083140 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] *************** 2026-03-25 02:13:42.083151 | orchestrator | Wednesday 25 March 2026 02:13:13 +0000 (0:00:01.308) 0:07:35.313 ******* 2026-03-25 02:13:42.083162 | orchestrator | changed: [testbed-node-3] 2026-03-25 02:13:42.083175 | orchestrator | changed: [testbed-manager] 2026-03-25 02:13:42.083186 | orchestrator | changed: [testbed-node-4] 2026-03-25 02:13:42.083197 | orchestrator | changed: 
[testbed-node-5] 2026-03-25 02:13:42.083208 | orchestrator | changed: [testbed-node-0] 2026-03-25 02:13:42.083245 | orchestrator | changed: [testbed-node-1] 2026-03-25 02:13:42.083256 | orchestrator | changed: [testbed-node-2] 2026-03-25 02:13:42.083267 | orchestrator | 2026-03-25 02:13:42.083278 | orchestrator | PLAY [Apply bootstrap role part 2] ********************************************* 2026-03-25 02:13:42.083289 | orchestrator | 2026-03-25 02:13:42.083300 | orchestrator | TASK [Include hardening role] ************************************************** 2026-03-25 02:13:42.083311 | orchestrator | Wednesday 25 March 2026 02:13:14 +0000 (0:00:01.240) 0:07:36.553 ******* 2026-03-25 02:13:42.083324 | orchestrator | skipping: [testbed-manager] 2026-03-25 02:13:42.083337 | orchestrator | skipping: [testbed-node-3] 2026-03-25 02:13:42.083350 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:13:42.083362 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:13:42.083374 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:13:42.083387 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:13:42.083399 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:13:42.083411 | orchestrator | 2026-03-25 02:13:42.083424 | orchestrator | PLAY [Apply bootstrap roles part 3] ******************************************** 2026-03-25 02:13:42.083436 | orchestrator | 2026-03-25 02:13:42.083450 | orchestrator | TASK [osism.services.journald : Copy configuration file] *********************** 2026-03-25 02:13:42.083497 | orchestrator | Wednesday 25 March 2026 02:13:15 +0000 (0:00:00.791) 0:07:37.344 ******* 2026-03-25 02:13:42.083521 | orchestrator | changed: [testbed-manager] 2026-03-25 02:13:42.083533 | orchestrator | changed: [testbed-node-3] 2026-03-25 02:13:42.083543 | orchestrator | changed: [testbed-node-4] 2026-03-25 02:13:42.083555 | orchestrator | changed: [testbed-node-5] 2026-03-25 02:13:42.083566 | orchestrator | changed: [testbed-node-0] 2026-03-25 
02:13:42.083588 | orchestrator | changed: [testbed-node-2] 2026-03-25 02:13:42.083599 | orchestrator | changed: [testbed-node-1] 2026-03-25 02:13:42.083610 | orchestrator | 2026-03-25 02:13:42.083621 | orchestrator | TASK [osism.services.journald : Manage journald service] *********************** 2026-03-25 02:13:42.083648 | orchestrator | Wednesday 25 March 2026 02:13:16 +0000 (0:00:01.365) 0:07:38.710 ******* 2026-03-25 02:13:42.083659 | orchestrator | ok: [testbed-manager] 2026-03-25 02:13:42.083670 | orchestrator | ok: [testbed-node-3] 2026-03-25 02:13:42.083681 | orchestrator | ok: [testbed-node-4] 2026-03-25 02:13:42.083692 | orchestrator | ok: [testbed-node-5] 2026-03-25 02:13:42.083703 | orchestrator | ok: [testbed-node-0] 2026-03-25 02:13:42.083714 | orchestrator | ok: [testbed-node-1] 2026-03-25 02:13:42.083725 | orchestrator | ok: [testbed-node-2] 2026-03-25 02:13:42.083736 | orchestrator | 2026-03-25 02:13:42.083748 | orchestrator | TASK [Include auditd role] ***************************************************** 2026-03-25 02:13:42.083759 | orchestrator | Wednesday 25 March 2026 02:13:18 +0000 (0:00:01.436) 0:07:40.147 ******* 2026-03-25 02:13:42.083770 | orchestrator | skipping: [testbed-manager] 2026-03-25 02:13:42.083781 | orchestrator | skipping: [testbed-node-3] 2026-03-25 02:13:42.083791 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:13:42.083802 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:13:42.083813 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:13:42.083825 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:13:42.083835 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:13:42.083846 | orchestrator | 2026-03-25 02:13:42.083858 | orchestrator | TASK [Include smartd role] ***************************************************** 2026-03-25 02:13:42.083869 | orchestrator | Wednesday 25 March 2026 02:13:18 +0000 (0:00:00.553) 0:07:40.701 ******* 2026-03-25 02:13:42.083881 | orchestrator | included: 
osism.services.smartd for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-25 02:13:42.083894 | orchestrator | 2026-03-25 02:13:42.083905 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] ***** 2026-03-25 02:13:42.083916 | orchestrator | Wednesday 25 March 2026 02:13:20 +0000 (0:00:01.127) 0:07:41.828 ******* 2026-03-25 02:13:42.083930 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-25 02:13:42.083953 | orchestrator | 2026-03-25 02:13:42.083963 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] ******************* 2026-03-25 02:13:42.083975 | orchestrator | Wednesday 25 March 2026 02:13:21 +0000 (0:00:00.944) 0:07:42.773 ******* 2026-03-25 02:13:42.083985 | orchestrator | changed: [testbed-node-3] 2026-03-25 02:13:42.083997 | orchestrator | changed: [testbed-node-4] 2026-03-25 02:13:42.084008 | orchestrator | changed: [testbed-node-5] 2026-03-25 02:13:42.084018 | orchestrator | changed: [testbed-node-0] 2026-03-25 02:13:42.084029 | orchestrator | changed: [testbed-node-2] 2026-03-25 02:13:42.084040 | orchestrator | changed: [testbed-node-1] 2026-03-25 02:13:42.084053 | orchestrator | changed: [testbed-manager] 2026-03-25 02:13:42.084071 | orchestrator | 2026-03-25 02:13:42.084161 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] **************** 2026-03-25 02:13:42.084182 | orchestrator | Wednesday 25 March 2026 02:13:29 +0000 (0:00:08.516) 0:07:51.289 ******* 2026-03-25 02:13:42.084202 | orchestrator | changed: [testbed-manager] 2026-03-25 02:13:42.084221 | orchestrator | changed: [testbed-node-3] 2026-03-25 02:13:42.084241 | orchestrator | changed: [testbed-node-4] 2026-03-25 
02:13:42.084262 | orchestrator | changed: [testbed-node-5] 2026-03-25 02:13:42.084282 | orchestrator | changed: [testbed-node-0] 2026-03-25 02:13:42.084302 | orchestrator | changed: [testbed-node-1] 2026-03-25 02:13:42.084317 | orchestrator | changed: [testbed-node-2] 2026-03-25 02:13:42.084328 | orchestrator | 2026-03-25 02:13:42.084339 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] *********** 2026-03-25 02:13:42.084350 | orchestrator | Wednesday 25 March 2026 02:13:30 +0000 (0:00:01.116) 0:07:52.406 ******* 2026-03-25 02:13:42.084361 | orchestrator | changed: [testbed-node-3] 2026-03-25 02:13:42.084371 | orchestrator | changed: [testbed-manager] 2026-03-25 02:13:42.084382 | orchestrator | changed: [testbed-node-4] 2026-03-25 02:13:42.084393 | orchestrator | changed: [testbed-node-5] 2026-03-25 02:13:42.084403 | orchestrator | changed: [testbed-node-0] 2026-03-25 02:13:42.084414 | orchestrator | changed: [testbed-node-1] 2026-03-25 02:13:42.084425 | orchestrator | changed: [testbed-node-2] 2026-03-25 02:13:42.084435 | orchestrator | 2026-03-25 02:13:42.084446 | orchestrator | TASK [osism.services.smartd : Manage smartd service] *************************** 2026-03-25 02:13:42.084457 | orchestrator | Wednesday 25 March 2026 02:13:32 +0000 (0:00:01.376) 0:07:53.782 ******* 2026-03-25 02:13:42.084468 | orchestrator | changed: [testbed-node-3] 2026-03-25 02:13:42.084479 | orchestrator | changed: [testbed-manager] 2026-03-25 02:13:42.084489 | orchestrator | changed: [testbed-node-4] 2026-03-25 02:13:42.084500 | orchestrator | changed: [testbed-node-5] 2026-03-25 02:13:42.084511 | orchestrator | changed: [testbed-node-0] 2026-03-25 02:13:42.084522 | orchestrator | changed: [testbed-node-1] 2026-03-25 02:13:42.084539 | orchestrator | changed: [testbed-node-2] 2026-03-25 02:13:42.084566 | orchestrator | 2026-03-25 02:13:42.084584 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] *********** 
2026-03-25 02:13:42.084602 | orchestrator | Wednesday 25 March 2026 02:13:34 +0000 (0:00:02.213) 0:07:55.995 ******* 2026-03-25 02:13:42.084619 | orchestrator | changed: [testbed-manager] 2026-03-25 02:13:42.084636 | orchestrator | changed: [testbed-node-3] 2026-03-25 02:13:42.084651 | orchestrator | changed: [testbed-node-4] 2026-03-25 02:13:42.084668 | orchestrator | changed: [testbed-node-5] 2026-03-25 02:13:42.084685 | orchestrator | changed: [testbed-node-0] 2026-03-25 02:13:42.084703 | orchestrator | changed: [testbed-node-1] 2026-03-25 02:13:42.084720 | orchestrator | changed: [testbed-node-2] 2026-03-25 02:13:42.084739 | orchestrator | 2026-03-25 02:13:42.084757 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] *************** 2026-03-25 02:13:42.084776 | orchestrator | Wednesday 25 March 2026 02:13:35 +0000 (0:00:01.320) 0:07:57.316 ******* 2026-03-25 02:13:42.084794 | orchestrator | changed: [testbed-node-3] 2026-03-25 02:13:42.084812 | orchestrator | changed: [testbed-manager] 2026-03-25 02:13:42.084844 | orchestrator | changed: [testbed-node-4] 2026-03-25 02:13:42.084855 | orchestrator | changed: [testbed-node-5] 2026-03-25 02:13:42.084866 | orchestrator | changed: [testbed-node-0] 2026-03-25 02:13:42.084877 | orchestrator | changed: [testbed-node-1] 2026-03-25 02:13:42.084887 | orchestrator | changed: [testbed-node-2] 2026-03-25 02:13:42.084898 | orchestrator | 2026-03-25 02:13:42.084909 | orchestrator | PLAY [Set state bootstrap] ***************************************************** 2026-03-25 02:13:42.084920 | orchestrator | 2026-03-25 02:13:42.084940 | orchestrator | TASK [Set osism.bootstrap.status fact] ***************************************** 2026-03-25 02:13:42.084951 | orchestrator | Wednesday 25 March 2026 02:13:36 +0000 (0:00:01.258) 0:07:58.575 ******* 2026-03-25 02:13:42.084962 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, 
testbed-node-1, testbed-node-2 2026-03-25 02:13:42.084973 | orchestrator | 2026-03-25 02:13:42.084984 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2026-03-25 02:13:42.084995 | orchestrator | Wednesday 25 March 2026 02:13:37 +0000 (0:00:00.919) 0:07:59.495 ******* 2026-03-25 02:13:42.085006 | orchestrator | ok: [testbed-manager] 2026-03-25 02:13:42.085017 | orchestrator | ok: [testbed-node-3] 2026-03-25 02:13:42.085028 | orchestrator | ok: [testbed-node-4] 2026-03-25 02:13:42.085039 | orchestrator | ok: [testbed-node-5] 2026-03-25 02:13:42.085050 | orchestrator | ok: [testbed-node-0] 2026-03-25 02:13:42.085061 | orchestrator | ok: [testbed-node-1] 2026-03-25 02:13:42.085072 | orchestrator | ok: [testbed-node-2] 2026-03-25 02:13:42.085217 | orchestrator | 2026-03-25 02:13:42.085262 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2026-03-25 02:13:42.085273 | orchestrator | Wednesday 25 March 2026 02:13:38 +0000 (0:00:01.112) 0:08:00.607 ******* 2026-03-25 02:13:42.085285 | orchestrator | changed: [testbed-node-3] 2026-03-25 02:13:42.085296 | orchestrator | changed: [testbed-node-4] 2026-03-25 02:13:42.085307 | orchestrator | changed: [testbed-node-5] 2026-03-25 02:13:42.085318 | orchestrator | changed: [testbed-manager] 2026-03-25 02:13:42.085328 | orchestrator | changed: [testbed-node-0] 2026-03-25 02:13:42.085339 | orchestrator | changed: [testbed-node-1] 2026-03-25 02:13:42.085350 | orchestrator | changed: [testbed-node-2] 2026-03-25 02:13:42.085360 | orchestrator | 2026-03-25 02:13:42.085371 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] ************************************** 2026-03-25 02:13:42.085382 | orchestrator | Wednesday 25 March 2026 02:13:40 +0000 (0:00:01.179) 0:08:01.787 ******* 2026-03-25 02:13:42.085393 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, 
testbed-node-1, testbed-node-2 2026-03-25 02:13:42.085404 | orchestrator | 2026-03-25 02:13:42.085415 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2026-03-25 02:13:42.085426 | orchestrator | Wednesday 25 March 2026 02:13:41 +0000 (0:00:01.091) 0:08:02.878 ******* 2026-03-25 02:13:42.085437 | orchestrator | ok: [testbed-manager] 2026-03-25 02:13:42.085448 | orchestrator | ok: [testbed-node-3] 2026-03-25 02:13:42.085459 | orchestrator | ok: [testbed-node-4] 2026-03-25 02:13:42.085469 | orchestrator | ok: [testbed-node-5] 2026-03-25 02:13:42.085480 | orchestrator | ok: [testbed-node-0] 2026-03-25 02:13:42.085491 | orchestrator | ok: [testbed-node-1] 2026-03-25 02:13:42.085501 | orchestrator | ok: [testbed-node-2] 2026-03-25 02:13:42.085512 | orchestrator | 2026-03-25 02:13:42.085540 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2026-03-25 02:13:43.736409 | orchestrator | Wednesday 25 March 2026 02:13:42 +0000 (0:00:00.925) 0:08:03.804 ******* 2026-03-25 02:13:43.736512 | orchestrator | changed: [testbed-manager] 2026-03-25 02:13:43.736527 | orchestrator | changed: [testbed-node-3] 2026-03-25 02:13:43.736539 | orchestrator | changed: [testbed-node-4] 2026-03-25 02:13:43.736550 | orchestrator | changed: [testbed-node-5] 2026-03-25 02:13:43.736561 | orchestrator | changed: [testbed-node-0] 2026-03-25 02:13:43.736572 | orchestrator | changed: [testbed-node-1] 2026-03-25 02:13:43.736583 | orchestrator | changed: [testbed-node-2] 2026-03-25 02:13:43.736621 | orchestrator | 2026-03-25 02:13:43.736634 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-25 02:13:43.736647 | orchestrator | testbed-manager : ok=168  changed=40  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0 2026-03-25 02:13:43.736659 | orchestrator | testbed-node-0 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 
2026-03-25 02:13:43.736671 | orchestrator | testbed-node-1 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2026-03-25 02:13:43.736682 | orchestrator | testbed-node-2 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2026-03-25 02:13:43.736693 | orchestrator | testbed-node-3 : ok=175  changed=65  unreachable=0 failed=0 skipped=38  rescued=0 ignored=0 2026-03-25 02:13:43.736703 | orchestrator | testbed-node-4 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2026-03-25 02:13:43.736714 | orchestrator | testbed-node-5 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2026-03-25 02:13:43.736725 | orchestrator | 2026-03-25 02:13:43.736736 | orchestrator | 2026-03-25 02:13:43.736747 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-25 02:13:43.736758 | orchestrator | Wednesday 25 March 2026 02:13:43 +0000 (0:00:01.123) 0:08:04.928 ******* 2026-03-25 02:13:43.736770 | orchestrator | =============================================================================== 2026-03-25 02:13:43.736780 | orchestrator | osism.commons.packages : Install required packages --------------------- 74.42s 2026-03-25 02:13:43.736791 | orchestrator | osism.commons.packages : Download required packages -------------------- 38.97s 2026-03-25 02:13:43.736802 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 34.63s 2026-03-25 02:13:43.736813 | orchestrator | osism.commons.repository : Update package cache ------------------------ 14.96s 2026-03-25 02:13:43.736824 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 13.84s 2026-03-25 02:13:43.736849 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 13.47s 2026-03-25 02:13:43.736861 | orchestrator | osism.services.docker : Install docker package -------------------------- 
9.63s 2026-03-25 02:13:43.736873 | orchestrator | osism.services.docker : Install containerd package ---------------------- 9.23s 2026-03-25 02:13:43.736884 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 9.14s 2026-03-25 02:13:43.736895 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 8.52s 2026-03-25 02:13:43.736906 | orchestrator | osism.services.docker : Add repository ---------------------------------- 8.29s 2026-03-25 02:13:43.736917 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 8.00s 2026-03-25 02:13:43.736928 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 7.96s 2026-03-25 02:13:43.736939 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 7.80s 2026-03-25 02:13:43.736953 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 7.49s 2026-03-25 02:13:43.736965 | orchestrator | osism.services.rng : Install rng package -------------------------------- 7.46s 2026-03-25 02:13:43.736978 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 6.45s 2026-03-25 02:13:43.736991 | orchestrator | osism.services.chrony : Populate service facts -------------------------- 6.02s 2026-03-25 02:13:43.737004 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 5.82s 2026-03-25 02:13:43.737016 | orchestrator | osism.commons.services : Populate service facts ------------------------- 5.60s 2026-03-25 02:13:44.098146 | orchestrator | + osism apply fail2ban 2026-03-25 02:13:57.278199 | orchestrator | 2026-03-25 02:13:57 | INFO  | Task a9fe518c-9ead-40f3-8f55-cdabbecd7fa0 (fail2ban) was prepared for execution. 
2026-03-25 02:13:57.278289 | orchestrator | 2026-03-25 02:13:57 | INFO  | It takes a moment until task a9fe518c-9ead-40f3-8f55-cdabbecd7fa0 (fail2ban) has been started and output is visible here. 2026-03-25 02:14:20.571959 | orchestrator | 2026-03-25 02:14:20.572089 | orchestrator | PLAY [Apply role fail2ban] ***************************************************** 2026-03-25 02:14:20.572114 | orchestrator | 2026-03-25 02:14:20.572241 | orchestrator | TASK [osism.services.fail2ban : Include distribution specific install tasks] *** 2026-03-25 02:14:20.572260 | orchestrator | Wednesday 25 March 2026 02:14:02 +0000 (0:00:00.298) 0:00:00.298 ******* 2026-03-25 02:14:20.572280 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/fail2ban/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-25 02:14:20.572298 | orchestrator | 2026-03-25 02:14:20.572315 | orchestrator | TASK [osism.services.fail2ban : Install fail2ban package] ********************** 2026-03-25 02:14:20.572331 | orchestrator | Wednesday 25 March 2026 02:14:03 +0000 (0:00:01.307) 0:00:01.606 ******* 2026-03-25 02:14:20.572347 | orchestrator | changed: [testbed-node-0] 2026-03-25 02:14:20.572365 | orchestrator | changed: [testbed-node-2] 2026-03-25 02:14:20.572381 | orchestrator | changed: [testbed-node-1] 2026-03-25 02:14:20.572396 | orchestrator | changed: [testbed-node-3] 2026-03-25 02:14:20.572412 | orchestrator | changed: [testbed-node-4] 2026-03-25 02:14:20.572428 | orchestrator | changed: [testbed-node-5] 2026-03-25 02:14:20.572443 | orchestrator | changed: [testbed-manager] 2026-03-25 02:14:20.572460 | orchestrator | 2026-03-25 02:14:20.572476 | orchestrator | TASK [osism.services.fail2ban : Copy configuration files] ********************** 2026-03-25 02:14:20.572494 | orchestrator | Wednesday 25 March 2026 02:14:14 +0000 (0:00:11.384) 0:00:12.990 ******* 
2026-03-25 02:14:20.572511 | orchestrator | changed: [testbed-node-0]
2026-03-25 02:14:20.572528 | orchestrator | changed: [testbed-node-1]
2026-03-25 02:14:20.572545 | orchestrator | changed: [testbed-node-2]
2026-03-25 02:14:20.572562 | orchestrator | changed: [testbed-manager]
2026-03-25 02:14:20.572579 | orchestrator | changed: [testbed-node-3]
2026-03-25 02:14:20.572596 | orchestrator | changed: [testbed-node-4]
2026-03-25 02:14:20.572613 | orchestrator | changed: [testbed-node-5]
2026-03-25 02:14:20.572630 | orchestrator |
2026-03-25 02:14:20.572647 | orchestrator | TASK [osism.services.fail2ban : Manage fail2ban service] ***********************
2026-03-25 02:14:20.572665 | orchestrator | Wednesday 25 March 2026 02:14:16 +0000 (0:00:01.659) 0:00:14.649 *******
2026-03-25 02:14:20.572682 | orchestrator | ok: [testbed-node-2]
2026-03-25 02:14:20.572700 | orchestrator | ok: [testbed-node-1]
2026-03-25 02:14:20.572717 | orchestrator | ok: [testbed-node-0]
2026-03-25 02:14:20.572734 | orchestrator | ok: [testbed-node-3]
2026-03-25 02:14:20.572751 | orchestrator | ok: [testbed-node-4]
2026-03-25 02:14:20.572768 | orchestrator | ok: [testbed-manager]
2026-03-25 02:14:20.572785 | orchestrator | ok: [testbed-node-5]
2026-03-25 02:14:20.572802 | orchestrator |
2026-03-25 02:14:20.572820 | orchestrator | TASK [osism.services.fail2ban : Reload fail2ban configuration] *****************
2026-03-25 02:14:20.572837 | orchestrator | Wednesday 25 March 2026 02:14:18 +0000 (0:00:01.579) 0:00:16.229 *******
2026-03-25 02:14:20.572853 | orchestrator | changed: [testbed-node-1]
2026-03-25 02:14:20.572869 | orchestrator | changed: [testbed-node-0]
2026-03-25 02:14:20.572886 | orchestrator | changed: [testbed-manager]
2026-03-25 02:14:20.572903 | orchestrator | changed: [testbed-node-2]
2026-03-25 02:14:20.572921 | orchestrator | changed: [testbed-node-3]
2026-03-25 02:14:20.572937 | orchestrator | changed: [testbed-node-4]
2026-03-25 02:14:20.572952 | orchestrator | changed: [testbed-node-5]
2026-03-25 02:14:20.572968 | orchestrator |
2026-03-25 02:14:20.572985 | orchestrator | PLAY RECAP *********************************************************************
2026-03-25 02:14:20.573002 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-25 02:14:20.573059 | orchestrator | testbed-node-0 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-25 02:14:20.573071 | orchestrator | testbed-node-1 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-25 02:14:20.573081 | orchestrator | testbed-node-2 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-25 02:14:20.573091 | orchestrator | testbed-node-3 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-25 02:14:20.573101 | orchestrator | testbed-node-4 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-25 02:14:20.573110 | orchestrator | testbed-node-5 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-25 02:14:20.573120 | orchestrator |
2026-03-25 02:14:20.573161 | orchestrator |
2026-03-25 02:14:20.573178 | orchestrator | TASKS RECAP ********************************************************************
2026-03-25 02:14:20.573196 | orchestrator | Wednesday 25 March 2026 02:14:20 +0000 (0:00:01.815) 0:00:18.045 *******
2026-03-25 02:14:20.573212 | orchestrator | ===============================================================================
2026-03-25 02:14:20.573228 | orchestrator | osism.services.fail2ban : Install fail2ban package --------------------- 11.38s
2026-03-25 02:14:20.573238 | orchestrator | osism.services.fail2ban : Reload fail2ban configuration ----------------- 1.82s
2026-03-25 02:14:20.573248 | orchestrator | osism.services.fail2ban : Copy configuration files ---------------------- 1.66s
2026-03-25 02:14:20.573257 | orchestrator | osism.services.fail2ban : Manage fail2ban service ----------------------- 1.58s
2026-03-25 02:14:20.573267 | orchestrator | osism.services.fail2ban : Include distribution specific install tasks --- 1.31s
2026-03-25 02:14:20.971923 | orchestrator | + [[ -e /etc/redhat-release ]]
2026-03-25 02:14:20.971994 | orchestrator | + osism apply network
2026-03-25 02:14:33.402320 | orchestrator | 2026-03-25 02:14:33 | INFO  | Task 3081d6cf-f901-48fe-a41c-769722c962f6 (network) was prepared for execution.
2026-03-25 02:14:33.402435 | orchestrator | 2026-03-25 02:14:33 | INFO  | It takes a moment until task 3081d6cf-f901-48fe-a41c-769722c962f6 (network) has been started and output is visible here.
2026-03-25 02:15:04.069526 | orchestrator |
2026-03-25 02:15:04.069634 | orchestrator | PLAY [Apply role network] ******************************************************
2026-03-25 02:15:04.069645 | orchestrator |
2026-03-25 02:15:04.069653 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ******
2026-03-25 02:15:04.069660 | orchestrator | Wednesday 25 March 2026 02:14:38 +0000 (0:00:00.280) 0:00:00.280 *******
2026-03-25 02:15:04.069667 | orchestrator | ok: [testbed-manager]
2026-03-25 02:15:04.069675 | orchestrator | ok: [testbed-node-0]
2026-03-25 02:15:04.069682 | orchestrator | ok: [testbed-node-1]
2026-03-25 02:15:04.069689 | orchestrator | ok: [testbed-node-2]
2026-03-25 02:15:04.069695 | orchestrator | ok: [testbed-node-3]
2026-03-25 02:15:04.069701 | orchestrator | ok: [testbed-node-4]
2026-03-25 02:15:04.069708 | orchestrator | ok: [testbed-node-5]
2026-03-25 02:15:04.069714 | orchestrator |
2026-03-25 02:15:04.069721 | orchestrator | TASK [osism.commons.network : Include type specific tasks] *********************
2026-03-25 02:15:04.069728 | orchestrator | Wednesday 25 March 2026 02:14:38 +0000 (0:00:00.801) 0:00:01.081 *******
2026-03-25 02:15:04.069736 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-25 02:15:04.069745 | orchestrator |
2026-03-25 02:15:04.069751 | orchestrator | TASK [osism.commons.network : Install required packages] ***********************
2026-03-25 02:15:04.069779 | orchestrator | Wednesday 25 March 2026 02:14:40 +0000 (0:00:01.307) 0:00:02.388 *******
2026-03-25 02:15:04.069786 | orchestrator | ok: [testbed-node-0]
2026-03-25 02:15:04.069793 | orchestrator | ok: [testbed-manager]
2026-03-25 02:15:04.069798 | orchestrator | ok: [testbed-node-1]
2026-03-25 02:15:04.069805 | orchestrator | ok: [testbed-node-2]
2026-03-25 02:15:04.069810 | orchestrator | ok: [testbed-node-3]
2026-03-25 02:15:04.069817 | orchestrator | ok: [testbed-node-4]
2026-03-25 02:15:04.069823 | orchestrator | ok: [testbed-node-5]
2026-03-25 02:15:04.069831 | orchestrator |
2026-03-25 02:15:04.069837 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] *************************
2026-03-25 02:15:04.069843 | orchestrator | Wednesday 25 March 2026 02:14:42 +0000 (0:00:02.032) 0:00:04.421 *******
2026-03-25 02:15:04.069850 | orchestrator | ok: [testbed-node-0]
2026-03-25 02:15:04.069856 | orchestrator | ok: [testbed-manager]
2026-03-25 02:15:04.069863 | orchestrator | ok: [testbed-node-1]
2026-03-25 02:15:04.069870 | orchestrator | ok: [testbed-node-2]
2026-03-25 02:15:04.069876 | orchestrator | ok: [testbed-node-3]
2026-03-25 02:15:04.069883 | orchestrator | ok: [testbed-node-4]
2026-03-25 02:15:04.069889 | orchestrator | ok: [testbed-node-5]
2026-03-25 02:15:04.069896 | orchestrator |
2026-03-25 02:15:04.069903 | orchestrator | TASK [osism.commons.network : Create required directories] *********************
2026-03-25 02:15:04.069910 | orchestrator | Wednesday 25 March 2026 02:14:43 +0000 (0:00:01.799) 0:00:06.221 *******
2026-03-25 02:15:04.069917 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan)
2026-03-25 02:15:04.069924 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan)
2026-03-25 02:15:04.069930 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan)
2026-03-25 02:15:04.069937 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan)
2026-03-25 02:15:04.069943 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan)
2026-03-25 02:15:04.069949 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan)
2026-03-25 02:15:04.069956 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan)
2026-03-25 02:15:04.069963 | orchestrator |
2026-03-25 02:15:04.069986 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] **********
2026-03-25 02:15:04.069997 | orchestrator | Wednesday 25 March 2026 02:14:44 +0000 (0:00:01.019) 0:00:07.240 *******
2026-03-25 02:15:04.070004 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-03-25 02:15:04.070012 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-03-25 02:15:04.070078 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-03-25 02:15:04.070085 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-25 02:15:04.070093 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-25 02:15:04.070101 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-03-25 02:15:04.070108 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-03-25 02:15:04.070116 | orchestrator |
2026-03-25 02:15:04.070125 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] **********************
2026-03-25 02:15:04.070133 | orchestrator | Wednesday 25 March 2026 02:14:48 +0000 (0:00:03.697) 0:00:10.938 *******
2026-03-25 02:15:04.070142 | orchestrator | changed: [testbed-manager]
2026-03-25 02:15:04.070150 | orchestrator | changed: [testbed-node-0]
2026-03-25 02:15:04.070159 | orchestrator | changed: [testbed-node-1]
2026-03-25 02:15:04.070211 | orchestrator | changed: [testbed-node-2]
2026-03-25 02:15:04.070218 | orchestrator | changed: [testbed-node-3]
2026-03-25 02:15:04.070226 | orchestrator | changed: [testbed-node-4]
2026-03-25 02:15:04.070232 | orchestrator | changed: [testbed-node-5]
2026-03-25 02:15:04.070239 | orchestrator |
2026-03-25 02:15:04.070246 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] ***********
2026-03-25 02:15:04.070254 | orchestrator | Wednesday 25 March 2026 02:14:50 +0000 (0:00:01.669) 0:00:12.608 *******
2026-03-25 02:15:04.070261 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-25 02:15:04.070268 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-25 02:15:04.070275 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-03-25 02:15:04.070283 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-03-25 02:15:04.070302 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-03-25 02:15:04.070309 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-03-25 02:15:04.070316 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-03-25 02:15:04.070322 | orchestrator |
2026-03-25 02:15:04.070328 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] *********
2026-03-25 02:15:04.070334 | orchestrator | Wednesday 25 March 2026 02:14:52 +0000 (0:00:01.220) 0:00:14.418 *******
2026-03-25 02:15:04.070341 | orchestrator | ok: [testbed-manager]
2026-03-25 02:15:04.070348 | orchestrator | ok: [testbed-node-0]
2026-03-25 02:15:04.070355 | orchestrator | ok: [testbed-node-1]
2026-03-25 02:15:04.070361 | orchestrator | ok: [testbed-node-2]
2026-03-25 02:15:04.070367 | orchestrator | ok: [testbed-node-3]
2026-03-25 02:15:04.070374 | orchestrator | ok: [testbed-node-4]
2026-03-25 02:15:04.070380 | orchestrator | ok: [testbed-node-5]
2026-03-25 02:15:04.070386 | orchestrator |
2026-03-25 02:15:04.070393 | orchestrator | TASK [osism.commons.network : Copy interfaces file] ****************************
2026-03-25 02:15:04.070419 | orchestrator | Wednesday 25 March 2026 02:14:53 +0000 (0:00:01.220) 0:00:15.639 *******
2026-03-25 02:15:04.070425 | orchestrator | skipping: [testbed-manager]
2026-03-25 02:15:04.070432 | orchestrator | skipping: [testbed-node-0]
2026-03-25 02:15:04.070438 | orchestrator | skipping: [testbed-node-1]
2026-03-25 02:15:04.070445 | orchestrator | skipping: [testbed-node-2]
2026-03-25 02:15:04.070451 | orchestrator | skipping: [testbed-node-3]
2026-03-25 02:15:04.070457 | orchestrator | skipping: [testbed-node-4]
2026-03-25 02:15:04.070464 | orchestrator | skipping: [testbed-node-5]
2026-03-25 02:15:04.070469 | orchestrator |
2026-03-25 02:15:04.070475 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] *************
2026-03-25 02:15:04.070481 | orchestrator | Wednesday 25 March 2026 02:14:54 +0000 (0:00:00.715) 0:00:16.355 *******
2026-03-25 02:15:04.070487 | orchestrator | ok: [testbed-node-0]
2026-03-25 02:15:04.070493 | orchestrator | ok: [testbed-node-1]
2026-03-25 02:15:04.070499 | orchestrator | ok: [testbed-node-2]
2026-03-25 02:15:04.070506 | orchestrator | ok: [testbed-manager]
2026-03-25 02:15:04.070513 | orchestrator | ok: [testbed-node-3]
2026-03-25 02:15:04.070520 | orchestrator | ok: [testbed-node-4]
2026-03-25 02:15:04.070526 | orchestrator | ok: [testbed-node-5]
2026-03-25 02:15:04.070532 | orchestrator |
2026-03-25 02:15:04.070539 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] *************************
2026-03-25 02:15:04.070546 | orchestrator | Wednesday 25 March 2026 02:14:56 +0000 (0:00:02.236) 0:00:18.592 *******
2026-03-25 02:15:04.070553 | orchestrator | skipping: [testbed-node-0]
2026-03-25 02:15:04.070559 | orchestrator | skipping: [testbed-node-1]
2026-03-25 02:15:04.070565 | orchestrator | skipping: [testbed-node-2]
2026-03-25 02:15:04.070572 | orchestrator | skipping: [testbed-node-3]
2026-03-25 02:15:04.070579 | orchestrator | skipping: [testbed-node-4]
2026-03-25 02:15:04.070585 | orchestrator | skipping: [testbed-node-5]
2026-03-25 02:15:04.070592 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'})
2026-03-25 02:15:04.070600 | orchestrator |
2026-03-25 02:15:04.070607 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] **************
2026-03-25 02:15:04.070614 | orchestrator | Wednesday 25 March 2026 02:14:57 +0000 (0:00:01.072) 0:00:19.664 *******
2026-03-25 02:15:04.070620 | orchestrator | ok: [testbed-manager]
2026-03-25 02:15:04.070627 | orchestrator | changed: [testbed-node-0]
2026-03-25 02:15:04.070634 | orchestrator | changed: [testbed-node-1]
2026-03-25 02:15:04.070640 | orchestrator | changed: [testbed-node-2]
2026-03-25 02:15:04.070647 | orchestrator | changed: [testbed-node-3]
2026-03-25 02:15:04.070653 | orchestrator | changed: [testbed-node-4]
2026-03-25 02:15:04.070660 | orchestrator | changed: [testbed-node-5]
2026-03-25 02:15:04.070667 | orchestrator |
2026-03-25 02:15:04.070674 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] ***************************
2026-03-25 02:15:04.070680 | orchestrator | Wednesday 25 March 2026 02:14:59 +0000 (0:00:01.948) 0:00:21.613 *******
2026-03-25 02:15:04.070687 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-25 02:15:04.070704 | orchestrator |
2026-03-25 02:15:04.070710 | orchestrator | TASK [osism.commons.network : List existing configuration files] ***************
2026-03-25 02:15:04.070717 | orchestrator | Wednesday 25 March 2026 02:15:00 +0000 (0:00:01.388) 0:00:23.002 *******
2026-03-25 02:15:04.070723 | orchestrator | ok: [testbed-node-0]
2026-03-25 02:15:04.070730 | orchestrator | ok: [testbed-manager]
2026-03-25 02:15:04.070736 | orchestrator | ok: [testbed-node-1]
2026-03-25 02:15:04.070743 | orchestrator | ok: [testbed-node-2]
2026-03-25 02:15:04.070755 | orchestrator | ok: [testbed-node-3]
2026-03-25 02:15:04.070762 | orchestrator | ok: [testbed-node-4]
2026-03-25 02:15:04.070769 | orchestrator | ok: [testbed-node-5]
2026-03-25 02:15:04.070775 | orchestrator |
2026-03-25 02:15:04.070781 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] ***************
2026-03-25 02:15:04.070788 | orchestrator | Wednesday 25 March 2026 02:15:01 +0000 (0:00:01.034) 0:00:24.036 *******
2026-03-25 02:15:04.070795 | orchestrator | ok: [testbed-manager]
2026-03-25 02:15:04.070801 | orchestrator | ok: [testbed-node-0]
2026-03-25 02:15:04.070807 | orchestrator | ok: [testbed-node-1]
2026-03-25 02:15:04.070813 | orchestrator | ok: [testbed-node-2]
2026-03-25 02:15:04.070819 | orchestrator | ok: [testbed-node-3]
2026-03-25 02:15:04.070826 | orchestrator | ok: [testbed-node-4]
2026-03-25 02:15:04.070832 | orchestrator | ok: [testbed-node-5]
2026-03-25 02:15:04.070838 | orchestrator |
2026-03-25 02:15:04.070845 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] ***************
2026-03-25 02:15:04.070852 | orchestrator | Wednesday 25 March 2026 02:15:02 +0000 (0:00:00.922) 0:00:24.959 *******
2026-03-25 02:15:04.070859 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)
2026-03-25 02:15:04.070866 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)
2026-03-25 02:15:04.070872 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)
2026-03-25 02:15:04.070879 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)
2026-03-25 02:15:04.070886 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml)
2026-03-25 02:15:04.070892 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)
2026-03-25 02:15:04.070898 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml)
2026-03-25 02:15:04.070905 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)
2026-03-25 02:15:04.070912 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml)
2026-03-25 02:15:04.070918 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml)
2026-03-25 02:15:04.070925 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml)
2026-03-25 02:15:04.070931 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml)
2026-03-25 02:15:04.070938 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)
2026-03-25 02:15:04.070944 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml)
2026-03-25 02:15:04.070951 | orchestrator |
2026-03-25 02:15:04.070966 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************
2026-03-25 02:15:23.006473 | orchestrator | Wednesday 25 March 2026 02:15:04 +0000 (0:00:01.358) 0:00:26.317 *******
2026-03-25 02:15:23.006551 | orchestrator | skipping: [testbed-manager]
2026-03-25 02:15:23.006558 | orchestrator | skipping: [testbed-node-0]
2026-03-25 02:15:23.006563 | orchestrator | skipping: [testbed-node-1]
2026-03-25 02:15:23.006567 | orchestrator | skipping: [testbed-node-2]
2026-03-25 02:15:23.006571 | orchestrator | skipping: [testbed-node-3]
2026-03-25 02:15:23.006575 | orchestrator | skipping: [testbed-node-4]
2026-03-25 02:15:23.006579 | orchestrator | skipping: [testbed-node-5]
2026-03-25 02:15:23.006583 | orchestrator |
2026-03-25 02:15:23.006602 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************
2026-03-25 02:15:23.006607 | orchestrator | Wednesday 25 March 2026 02:15:04 +0000 (0:00:00.778) 0:00:27.095 *******
2026-03-25 02:15:23.006613 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-manager, testbed-node-1, testbed-node-0, testbed-node-5, testbed-node-4, testbed-node-2, testbed-node-3
2026-03-25 02:15:23.006618 | orchestrator |
2026-03-25 02:15:23.006622 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************
2026-03-25 02:15:23.006626 | orchestrator | Wednesday 25 March 2026 02:15:09 +0000 (0:00:05.142) 0:00:32.238 *******
2026-03-25 02:15:23.006631 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}})
2026-03-25 02:15:23.006638 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}})
2026-03-25 02:15:23.006642 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}})
2026-03-25 02:15:23.006646 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}})
2026-03-25 02:15:23.006650 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}})
2026-03-25 02:15:23.006662 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}})
2026-03-25 02:15:23.006666 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}})
2026-03-25 02:15:23.006670 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}})
2026-03-25 02:15:23.006678 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}})
2026-03-25 02:15:23.006682 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}})
2026-03-25 02:15:23.006686 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}})
2026-03-25 02:15:23.006699 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}})
2026-03-25 02:15:23.006707 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}})
2026-03-25 02:15:23.006711 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}})
2026-03-25 02:15:23.006715 | orchestrator |
2026-03-25 02:15:23.006719 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] ***********
2026-03-25 02:15:23.006723 | orchestrator | Wednesday 25 March 2026 02:15:16 +0000 (0:00:06.776) 0:00:39.015 *******
2026-03-25 02:15:23.006727 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}})
2026-03-25 02:15:23.006730 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}})
2026-03-25 02:15:23.006734 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}})
2026-03-25 02:15:23.006738 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}})
2026-03-25 02:15:23.006742 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}})
2026-03-25 02:15:23.006749 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}})
2026-03-25 02:15:23.006753 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}})
2026-03-25 02:15:23.006757 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}})
2026-03-25 02:15:23.006761 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}})
2026-03-25 02:15:23.006765 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}})
2026-03-25 02:15:23.006769 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}})
2026-03-25 02:15:23.006775 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}})
2026-03-25 02:15:23.006784 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}})
2026-03-25 02:15:29.826249 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}})
2026-03-25 02:15:29.826369 | orchestrator |
2026-03-25 02:15:29.826395 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ******************
2026-03-25 02:15:29.826416 | orchestrator | Wednesday 25 March 2026 02:15:22 +0000 (0:00:06.231) 0:00:45.246 *******
2026-03-25 02:15:29.826435 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-25 02:15:29.826452 | orchestrator |
2026-03-25 02:15:29.826470 | orchestrator | TASK [osism.commons.network : List existing configuration files] ***************
2026-03-25 02:15:29.826486 | orchestrator | Wednesday 25 March 2026 02:15:24 +0000 (0:00:01.428) 0:00:46.674 *******
2026-03-25 02:15:29.826504 | orchestrator | ok: [testbed-manager]
2026-03-25 02:15:29.826522 | orchestrator | ok: [testbed-node-0]
2026-03-25 02:15:29.826540 | orchestrator | ok: [testbed-node-1]
2026-03-25 02:15:29.826557 | orchestrator | ok: [testbed-node-2]
2026-03-25 02:15:29.826575 | orchestrator | ok: [testbed-node-3]
2026-03-25 02:15:29.826593 | orchestrator | ok: [testbed-node-4]
2026-03-25 02:15:29.826611 | orchestrator | ok: [testbed-node-5]
2026-03-25 02:15:29.826628 | orchestrator |
2026-03-25 02:15:29.826647 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] ***************
2026-03-25 02:15:29.826667 | orchestrator | Wednesday 25 March 2026 02:15:25 +0000 (0:00:01.297) 0:00:47.971 *******
2026-03-25 02:15:29.826687 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)
2026-03-25 02:15:29.826705 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)
2026-03-25 02:15:29.826724 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-03-25 02:15:29.826743 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-03-25 02:15:29.826763 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)
2026-03-25 02:15:29.826783 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)
2026-03-25 02:15:29.826802 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-03-25 02:15:29.826825 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-03-25 02:15:29.826845 | orchestrator | skipping: [testbed-manager]
2026-03-25 02:15:29.826866 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)
2026-03-25 02:15:29.826886 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)
2026-03-25 02:15:29.826927 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-03-25 02:15:29.826949 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-03-25 02:15:29.826968 | orchestrator | skipping: [testbed-node-0]
2026-03-25 02:15:29.827019 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)
2026-03-25 02:15:29.827034 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)
2026-03-25 02:15:29.827047 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-03-25 02:15:29.827058 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-03-25 02:15:29.827069 | orchestrator | skipping: [testbed-node-1]
2026-03-25 02:15:29.827080 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)
2026-03-25 02:15:29.827091 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)
2026-03-25 02:15:29.827102 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-03-25 02:15:29.827113 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-03-25 02:15:29.827124 | orchestrator | skipping: [testbed-node-2]
2026-03-25 02:15:29.827134 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)
2026-03-25 02:15:29.827145 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)
2026-03-25 02:15:29.827156 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-03-25 02:15:29.827166 | orchestrator | skipping: [testbed-node-3]
2026-03-25 02:15:29.827177 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-03-25 02:15:29.827275 | orchestrator | skipping: [testbed-node-4]
2026-03-25 02:15:29.827288 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)
2026-03-25 02:15:29.827299 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)
2026-03-25 02:15:29.827310 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-03-25 02:15:29.827321 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-03-25 02:15:29.827332 | orchestrator | skipping: [testbed-node-5]
2026-03-25 02:15:29.827343 | orchestrator |
2026-03-25 02:15:29.827354 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] **************
2026-03-25 02:15:29.827390 | orchestrator | Wednesday 25 March 2026 02:15:27 +0000 (0:00:02.221) 0:00:50.193 *******
2026-03-25 02:15:29.827402 | orchestrator | skipping: [testbed-manager]
2026-03-25 02:15:29.827413 | orchestrator | skipping: [testbed-node-0]
2026-03-25 02:15:29.827424 | orchestrator | skipping: [testbed-node-1]
2026-03-25 02:15:29.827435 | orchestrator | skipping: [testbed-node-2]
2026-03-25 02:15:29.827445 | orchestrator | skipping: [testbed-node-3]
2026-03-25 02:15:29.827456 | orchestrator | skipping: [testbed-node-4]
2026-03-25 02:15:29.827467 | orchestrator | skipping: [testbed-node-5]
2026-03-25 02:15:29.827478 | orchestrator |
2026-03-25 02:15:29.827488 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ********
2026-03-25 02:15:29.827499 | orchestrator | Wednesday 25 March 2026 02:15:28 +0000 (0:00:00.681) 0:00:50.874 *******
2026-03-25 02:15:29.827510 | orchestrator | skipping: [testbed-manager]
2026-03-25 02:15:29.827521 | orchestrator | skipping: [testbed-node-0]
2026-03-25 02:15:29.827532 | orchestrator | skipping: [testbed-node-1]
2026-03-25 02:15:29.827543 | orchestrator | skipping: [testbed-node-2]
2026-03-25 02:15:29.827554 | orchestrator | skipping: [testbed-node-3]
2026-03-25 02:15:29.827565 | orchestrator | skipping: [testbed-node-4]
2026-03-25 02:15:29.827576 | orchestrator | skipping: [testbed-node-5]
2026-03-25 02:15:29.827586 | orchestrator |
2026-03-25 02:15:29.827597 | orchestrator | PLAY RECAP *********************************************************************
2026-03-25 02:15:29.827610 | orchestrator | testbed-manager : ok=21  changed=5  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-03-25 02:15:29.827622 | orchestrator | testbed-node-0 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-25 02:15:29.827643 | orchestrator | testbed-node-1 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-25 02:15:29.827655 | orchestrator | testbed-node-2 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-25 02:15:29.827666 | orchestrator | testbed-node-3 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-25 02:15:29.827676 | orchestrator | testbed-node-4 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-25 02:15:29.827687 | orchestrator | testbed-node-5 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-25 02:15:29.827698 | orchestrator |
2026-03-25 02:15:29.827709 | orchestrator |
2026-03-25 02:15:29.827720 | orchestrator | TASKS RECAP ********************************************************************
2026-03-25 02:15:29.827732 | orchestrator | Wednesday 25 March 2026 02:15:29 +0000 (0:00:00.772) 0:00:51.647 *******
2026-03-25 02:15:29.827750 | orchestrator | ===============================================================================
2026-03-25 02:15:29.827761 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 6.78s
2026-03-25 02:15:29.827772 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 6.23s
2026-03-25 02:15:29.827783 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 5.14s
2026-03-25 02:15:29.827794 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.70s
2026-03-25 02:15:29.827804 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.24s
2026-03-25 02:15:29.827815 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 2.22s
2026-03-25 02:15:29.827826 | orchestrator | osism.commons.network : Install required packages ----------------------- 2.03s
2026-03-25 02:15:29.827837 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.95s
2026-03-25 02:15:29.827848 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.81s
2026-03-25 02:15:29.827859 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.80s
2026-03-25 02:15:29.827869 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.67s
2026-03-25 02:15:29.827880 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.43s
2026-03-25 02:15:29.827891 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.39s
2026-03-25 02:15:29.827902 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.36s
2026-03-25 02:15:29.827913 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.31s
2026-03-25 02:15:29.827924 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.30s
2026-03-25 02:15:29.827934 | orchestrator | osism.commons.network : Check if path for interface file exists
--------- 1.22s 2026-03-25 02:15:29.827945 | orchestrator | osism.commons.network : Copy dispatcher scripts ------------------------- 1.07s 2026-03-25 02:15:29.827956 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.03s 2026-03-25 02:15:29.827967 | orchestrator | osism.commons.network : Create required directories --------------------- 1.02s 2026-03-25 02:15:30.186412 | orchestrator | + osism apply wireguard 2026-03-25 02:15:42.520048 | orchestrator | 2026-03-25 02:15:42 | INFO  | Task 0d191255-1c0f-48ba-bb81-35f1d60b04d0 (wireguard) was prepared for execution. 2026-03-25 02:15:42.520180 | orchestrator | 2026-03-25 02:15:42 | INFO  | It takes a moment until task 0d191255-1c0f-48ba-bb81-35f1d60b04d0 (wireguard) has been started and output is visible here. 2026-03-25 02:16:04.736697 | orchestrator | 2026-03-25 02:16:04.736787 | orchestrator | PLAY [Apply role wireguard] **************************************************** 2026-03-25 02:16:04.736812 | orchestrator | 2026-03-25 02:16:04.736817 | orchestrator | TASK [osism.services.wireguard : Install iptables package] ********************* 2026-03-25 02:16:04.736821 | orchestrator | Wednesday 25 March 2026 02:15:47 +0000 (0:00:00.243) 0:00:00.243 ******* 2026-03-25 02:16:04.736825 | orchestrator | ok: [testbed-manager] 2026-03-25 02:16:04.736830 | orchestrator | 2026-03-25 02:16:04.736834 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ******************** 2026-03-25 02:16:04.736838 | orchestrator | Wednesday 25 March 2026 02:15:48 +0000 (0:00:01.739) 0:00:01.983 ******* 2026-03-25 02:16:04.736843 | orchestrator | changed: [testbed-manager] 2026-03-25 02:16:04.736849 | orchestrator | 2026-03-25 02:16:04.736854 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] ******* 2026-03-25 02:16:04.736858 | orchestrator | Wednesday 25 March 2026 02:15:56 +0000 (0:00:07.553) 0:00:09.536 ******* 2026-03-25 02:16:04.736862 
| orchestrator | changed: [testbed-manager] 2026-03-25 02:16:04.736865 | orchestrator | 2026-03-25 02:16:04.736869 | orchestrator | TASK [osism.services.wireguard : Create preshared key] ************************* 2026-03-25 02:16:04.736873 | orchestrator | Wednesday 25 March 2026 02:15:57 +0000 (0:00:00.621) 0:00:10.158 ******* 2026-03-25 02:16:04.736877 | orchestrator | changed: [testbed-manager] 2026-03-25 02:16:04.736881 | orchestrator | 2026-03-25 02:16:04.736885 | orchestrator | TASK [osism.services.wireguard : Get preshared key] **************************** 2026-03-25 02:16:04.736889 | orchestrator | Wednesday 25 March 2026 02:15:57 +0000 (0:00:00.453) 0:00:10.611 ******* 2026-03-25 02:16:04.736892 | orchestrator | ok: [testbed-manager] 2026-03-25 02:16:04.736896 | orchestrator | 2026-03-25 02:16:04.736900 | orchestrator | TASK [osism.services.wireguard : Get public key - server] ********************** 2026-03-25 02:16:04.736904 | orchestrator | Wednesday 25 March 2026 02:15:58 +0000 (0:00:00.768) 0:00:11.380 ******* 2026-03-25 02:16:04.736908 | orchestrator | ok: [testbed-manager] 2026-03-25 02:16:04.736912 | orchestrator | 2026-03-25 02:16:04.736916 | orchestrator | TASK [osism.services.wireguard : Get private key - server] ********************* 2026-03-25 02:16:04.736919 | orchestrator | Wednesday 25 March 2026 02:15:58 +0000 (0:00:00.469) 0:00:11.849 ******* 2026-03-25 02:16:04.736923 | orchestrator | ok: [testbed-manager] 2026-03-25 02:16:04.736927 | orchestrator | 2026-03-25 02:16:04.736931 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] ************* 2026-03-25 02:16:04.736935 | orchestrator | Wednesday 25 March 2026 02:15:59 +0000 (0:00:00.444) 0:00:12.293 ******* 2026-03-25 02:16:04.736938 | orchestrator | changed: [testbed-manager] 2026-03-25 02:16:04.736942 | orchestrator | 2026-03-25 02:16:04.736946 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] ************** 2026-03-25 
02:16:04.736950 | orchestrator | Wednesday 25 March 2026 02:16:00 +0000 (0:00:01.285) 0:00:13.578 ******* 2026-03-25 02:16:04.736954 | orchestrator | changed: [testbed-manager] => (item=None) 2026-03-25 02:16:04.736958 | orchestrator | changed: [testbed-manager] 2026-03-25 02:16:04.736962 | orchestrator | 2026-03-25 02:16:04.736966 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] ********** 2026-03-25 02:16:04.736970 | orchestrator | Wednesday 25 March 2026 02:16:01 +0000 (0:00:01.010) 0:00:14.589 ******* 2026-03-25 02:16:04.736974 | orchestrator | changed: [testbed-manager] 2026-03-25 02:16:04.736978 | orchestrator | 2026-03-25 02:16:04.736982 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] *************** 2026-03-25 02:16:04.736986 | orchestrator | Wednesday 25 March 2026 02:16:03 +0000 (0:00:01.824) 0:00:16.414 ******* 2026-03-25 02:16:04.736990 | orchestrator | changed: [testbed-manager] 2026-03-25 02:16:04.736994 | orchestrator | 2026-03-25 02:16:04.736998 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-25 02:16:04.737002 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-25 02:16:04.737007 | orchestrator | 2026-03-25 02:16:04.737010 | orchestrator | 2026-03-25 02:16:04.737015 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-25 02:16:04.737023 | orchestrator | Wednesday 25 March 2026 02:16:04 +0000 (0:00:00.990) 0:00:17.404 ******* 2026-03-25 02:16:04.737027 | orchestrator | =============================================================================== 2026-03-25 02:16:04.737031 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 7.55s 2026-03-25 02:16:04.737035 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.82s 2026-03-25 
02:16:04.737039 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.74s 2026-03-25 02:16:04.737042 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.29s 2026-03-25 02:16:04.737046 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 1.01s 2026-03-25 02:16:04.737050 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.99s 2026-03-25 02:16:04.737054 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.77s 2026-03-25 02:16:04.737058 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.62s 2026-03-25 02:16:04.737061 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.47s 2026-03-25 02:16:04.737065 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.45s 2026-03-25 02:16:04.737069 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.44s 2026-03-25 02:16:05.148332 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh 2026-03-25 02:16:05.184395 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current 2026-03-25 02:16:05.184513 | orchestrator | Dload Upload Total Spent Left Speed 2026-03-25 02:16:05.269088 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 14 100 14 0 0 164 0 --:--:-- --:--:-- --:--:-- 166 2026-03-25 02:16:05.285312 | orchestrator | + osism apply --environment custom workarounds 2026-03-25 02:16:07.460645 | orchestrator | 2026-03-25 02:16:07 | INFO  | Trying to run play workarounds in environment custom 2026-03-25 02:16:17.647606 | orchestrator | 2026-03-25 02:16:17 | INFO  | Task 947649b3-190a-470f-87a3-93b11a03b06a (workarounds) was prepared for execution. 
2026-03-25 02:16:17.647802 | orchestrator | 2026-03-25 02:16:17 | INFO  | It takes a moment until task 947649b3-190a-470f-87a3-93b11a03b06a (workarounds) has been started and output is visible here. 2026-03-25 02:16:44.464050 | orchestrator | 2026-03-25 02:16:44.464196 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-25 02:16:44.464222 | orchestrator | 2026-03-25 02:16:44.464235 | orchestrator | TASK [Group hosts based on virtualization_role] ******************************** 2026-03-25 02:16:44.464247 | orchestrator | Wednesday 25 March 2026 02:16:22 +0000 (0:00:00.154) 0:00:00.154 ******* 2026-03-25 02:16:44.464286 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest) 2026-03-25 02:16:44.464300 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest) 2026-03-25 02:16:44.464311 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest) 2026-03-25 02:16:44.464322 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest) 2026-03-25 02:16:44.464333 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest) 2026-03-25 02:16:44.464344 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest) 2026-03-25 02:16:44.464355 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest) 2026-03-25 02:16:44.464365 | orchestrator | 2026-03-25 02:16:44.464376 | orchestrator | PLAY [Apply netplan configuration on the manager node] ************************* 2026-03-25 02:16:44.464387 | orchestrator | 2026-03-25 02:16:44.464398 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2026-03-25 02:16:44.464409 | orchestrator | Wednesday 25 March 2026 02:16:23 +0000 (0:00:00.882) 0:00:01.037 ******* 2026-03-25 02:16:44.464420 | orchestrator | ok: [testbed-manager] 2026-03-25 02:16:44.464457 | orchestrator | 2026-03-25 02:16:44.464469 | 
orchestrator | PLAY [Apply netplan configuration on all other nodes] ************************** 2026-03-25 02:16:44.464480 | orchestrator | 2026-03-25 02:16:44.464491 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2026-03-25 02:16:44.464502 | orchestrator | Wednesday 25 March 2026 02:16:25 +0000 (0:00:02.667) 0:00:03.705 ******* 2026-03-25 02:16:44.464513 | orchestrator | ok: [testbed-node-0] 2026-03-25 02:16:44.464524 | orchestrator | ok: [testbed-node-1] 2026-03-25 02:16:44.464535 | orchestrator | ok: [testbed-node-2] 2026-03-25 02:16:44.464545 | orchestrator | ok: [testbed-node-3] 2026-03-25 02:16:44.464556 | orchestrator | ok: [testbed-node-4] 2026-03-25 02:16:44.464566 | orchestrator | ok: [testbed-node-5] 2026-03-25 02:16:44.464577 | orchestrator | 2026-03-25 02:16:44.464588 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] ************************* 2026-03-25 02:16:44.464598 | orchestrator | 2026-03-25 02:16:44.464609 | orchestrator | TASK [Copy custom CA certificates] ********************************************* 2026-03-25 02:16:44.464635 | orchestrator | Wednesday 25 March 2026 02:16:27 +0000 (0:00:01.899) 0:00:05.604 ******* 2026-03-25 02:16:44.464647 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-03-25 02:16:44.464659 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-03-25 02:16:44.464670 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-03-25 02:16:44.464681 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-03-25 02:16:44.464691 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-03-25 02:16:44.464702 | orchestrator 
| changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-03-25 02:16:44.464713 | orchestrator | 2026-03-25 02:16:44.464723 | orchestrator | TASK [Run update-ca-certificates] ********************************************** 2026-03-25 02:16:44.464734 | orchestrator | Wednesday 25 March 2026 02:16:29 +0000 (0:00:01.597) 0:00:07.201 ******* 2026-03-25 02:16:44.464745 | orchestrator | changed: [testbed-node-0] 2026-03-25 02:16:44.464756 | orchestrator | changed: [testbed-node-2] 2026-03-25 02:16:44.464767 | orchestrator | changed: [testbed-node-3] 2026-03-25 02:16:44.464777 | orchestrator | changed: [testbed-node-1] 2026-03-25 02:16:44.464788 | orchestrator | changed: [testbed-node-4] 2026-03-25 02:16:44.464798 | orchestrator | changed: [testbed-node-5] 2026-03-25 02:16:44.464809 | orchestrator | 2026-03-25 02:16:44.464820 | orchestrator | TASK [Run update-ca-trust] ***************************************************** 2026-03-25 02:16:44.464831 | orchestrator | Wednesday 25 March 2026 02:16:32 +0000 (0:00:03.675) 0:00:10.877 ******* 2026-03-25 02:16:44.464841 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:16:44.464853 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:16:44.464864 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:16:44.464874 | orchestrator | skipping: [testbed-node-3] 2026-03-25 02:16:44.464885 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:16:44.464896 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:16:44.464907 | orchestrator | 2026-03-25 02:16:44.464917 | orchestrator | PLAY [Add a workaround service] ************************************************ 2026-03-25 02:16:44.464928 | orchestrator | 2026-03-25 02:16:44.464939 | orchestrator | TASK [Copy workarounds.sh scripts] ********************************************* 2026-03-25 02:16:44.464950 | orchestrator | Wednesday 25 March 2026 02:16:33 +0000 (0:00:00.753) 0:00:11.631 ******* 2026-03-25 
02:16:44.464960 | orchestrator | changed: [testbed-node-0] 2026-03-25 02:16:44.464971 | orchestrator | changed: [testbed-node-1] 2026-03-25 02:16:44.464982 | orchestrator | changed: [testbed-node-2] 2026-03-25 02:16:44.464993 | orchestrator | changed: [testbed-node-3] 2026-03-25 02:16:44.465003 | orchestrator | changed: [testbed-node-4] 2026-03-25 02:16:44.465014 | orchestrator | changed: [testbed-node-5] 2026-03-25 02:16:44.465033 | orchestrator | changed: [testbed-manager] 2026-03-25 02:16:44.465043 | orchestrator | 2026-03-25 02:16:44.465054 | orchestrator | TASK [Copy workarounds systemd unit file] ************************************** 2026-03-25 02:16:44.465066 | orchestrator | Wednesday 25 March 2026 02:16:35 +0000 (0:00:01.650) 0:00:13.281 ******* 2026-03-25 02:16:44.465085 | orchestrator | changed: [testbed-node-0] 2026-03-25 02:16:44.465103 | orchestrator | changed: [testbed-node-1] 2026-03-25 02:16:44.465121 | orchestrator | changed: [testbed-node-2] 2026-03-25 02:16:44.465139 | orchestrator | changed: [testbed-node-3] 2026-03-25 02:16:44.465159 | orchestrator | changed: [testbed-node-4] 2026-03-25 02:16:44.465177 | orchestrator | changed: [testbed-node-5] 2026-03-25 02:16:44.465216 | orchestrator | changed: [testbed-manager] 2026-03-25 02:16:44.465228 | orchestrator | 2026-03-25 02:16:44.465239 | orchestrator | TASK [Reload systemd daemon] *************************************************** 2026-03-25 02:16:44.465250 | orchestrator | Wednesday 25 March 2026 02:16:37 +0000 (0:00:01.778) 0:00:15.060 ******* 2026-03-25 02:16:44.465285 | orchestrator | ok: [testbed-node-2] 2026-03-25 02:16:44.465297 | orchestrator | ok: [testbed-node-1] 2026-03-25 02:16:44.465307 | orchestrator | ok: [testbed-node-3] 2026-03-25 02:16:44.465318 | orchestrator | ok: [testbed-node-0] 2026-03-25 02:16:44.465329 | orchestrator | ok: [testbed-node-4] 2026-03-25 02:16:44.465340 | orchestrator | ok: [testbed-node-5] 2026-03-25 02:16:44.465351 | orchestrator | ok: [testbed-manager] 
2026-03-25 02:16:44.465362 | orchestrator | 2026-03-25 02:16:44.465373 | orchestrator | TASK [Enable workarounds.service (Debian)] ************************************* 2026-03-25 02:16:44.465384 | orchestrator | Wednesday 25 March 2026 02:16:38 +0000 (0:00:01.646) 0:00:16.707 ******* 2026-03-25 02:16:44.465395 | orchestrator | changed: [testbed-node-0] 2026-03-25 02:16:44.465405 | orchestrator | changed: [testbed-node-1] 2026-03-25 02:16:44.465416 | orchestrator | changed: [testbed-node-2] 2026-03-25 02:16:44.465427 | orchestrator | changed: [testbed-node-3] 2026-03-25 02:16:44.465438 | orchestrator | changed: [testbed-node-4] 2026-03-25 02:16:44.465449 | orchestrator | changed: [testbed-node-5] 2026-03-25 02:16:44.465459 | orchestrator | changed: [testbed-manager] 2026-03-25 02:16:44.465470 | orchestrator | 2026-03-25 02:16:44.465481 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] *************************** 2026-03-25 02:16:44.465492 | orchestrator | Wednesday 25 March 2026 02:16:40 +0000 (0:00:02.025) 0:00:18.733 ******* 2026-03-25 02:16:44.465503 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:16:44.465514 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:16:44.465524 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:16:44.465535 | orchestrator | skipping: [testbed-node-3] 2026-03-25 02:16:44.465546 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:16:44.465557 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:16:44.465568 | orchestrator | skipping: [testbed-manager] 2026-03-25 02:16:44.465579 | orchestrator | 2026-03-25 02:16:44.465590 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ****************** 2026-03-25 02:16:44.465600 | orchestrator | 2026-03-25 02:16:44.465611 | orchestrator | TASK [Install python3-docker] ************************************************** 2026-03-25 02:16:44.465623 | orchestrator | Wednesday 25 March 2026 02:16:41 +0000 (0:00:00.642) 
0:00:19.376 ******* 2026-03-25 02:16:44.465633 | orchestrator | ok: [testbed-node-3] 2026-03-25 02:16:44.465644 | orchestrator | ok: [testbed-node-0] 2026-03-25 02:16:44.465655 | orchestrator | ok: [testbed-node-2] 2026-03-25 02:16:44.465666 | orchestrator | ok: [testbed-node-1] 2026-03-25 02:16:44.465677 | orchestrator | ok: [testbed-node-4] 2026-03-25 02:16:44.465694 | orchestrator | ok: [testbed-node-5] 2026-03-25 02:16:44.465705 | orchestrator | ok: [testbed-manager] 2026-03-25 02:16:44.465716 | orchestrator | 2026-03-25 02:16:44.465727 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-25 02:16:44.465740 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-25 02:16:44.465752 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-25 02:16:44.465771 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-25 02:16:44.465783 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-25 02:16:44.465794 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-25 02:16:44.465805 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-25 02:16:44.465816 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-25 02:16:44.465827 | orchestrator | 2026-03-25 02:16:44.465837 | orchestrator | 2026-03-25 02:16:44.465848 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-25 02:16:44.465859 | orchestrator | Wednesday 25 March 2026 02:16:44 +0000 (0:00:02.964) 0:00:22.340 ******* 2026-03-25 02:16:44.465870 | orchestrator | 
=============================================================================== 2026-03-25 02:16:44.465881 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.68s 2026-03-25 02:16:44.465892 | orchestrator | Install python3-docker -------------------------------------------------- 2.96s 2026-03-25 02:16:44.465903 | orchestrator | Apply netplan configuration --------------------------------------------- 2.67s 2026-03-25 02:16:44.465914 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 2.03s 2026-03-25 02:16:44.465925 | orchestrator | Apply netplan configuration --------------------------------------------- 1.90s 2026-03-25 02:16:44.465936 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.78s 2026-03-25 02:16:44.465946 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.65s 2026-03-25 02:16:44.465957 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.65s 2026-03-25 02:16:44.465968 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.60s 2026-03-25 02:16:44.465979 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.88s 2026-03-25 02:16:44.465990 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.75s 2026-03-25 02:16:44.466008 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.64s 2026-03-25 02:16:45.235602 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes 2026-03-25 02:16:57.578645 | orchestrator | 2026-03-25 02:16:57 | INFO  | Task b3704390-f88f-41a7-ae38-85ffd8eaada9 (reboot) was prepared for execution. 
2026-03-25 02:16:57.578760 | orchestrator | 2026-03-25 02:16:57 | INFO  | It takes a moment until task b3704390-f88f-41a7-ae38-85ffd8eaada9 (reboot) has been started and output is visible here. 2026-03-25 02:17:08.402992 | orchestrator | 2026-03-25 02:17:08.403085 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-03-25 02:17:08.403095 | orchestrator | 2026-03-25 02:17:08.403103 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-03-25 02:17:08.403110 | orchestrator | Wednesday 25 March 2026 02:17:02 +0000 (0:00:00.241) 0:00:00.241 ******* 2026-03-25 02:17:08.403117 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:17:08.403124 | orchestrator | 2026-03-25 02:17:08.403131 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-03-25 02:17:08.403137 | orchestrator | Wednesday 25 March 2026 02:17:02 +0000 (0:00:00.124) 0:00:00.366 ******* 2026-03-25 02:17:08.403144 | orchestrator | changed: [testbed-node-0] 2026-03-25 02:17:08.403150 | orchestrator | 2026-03-25 02:17:08.403156 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-03-25 02:17:08.403182 | orchestrator | Wednesday 25 March 2026 02:17:03 +0000 (0:00:00.932) 0:00:01.298 ******* 2026-03-25 02:17:08.403189 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:17:08.403197 | orchestrator | 2026-03-25 02:17:08.403207 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-03-25 02:17:08.403217 | orchestrator | 2026-03-25 02:17:08.403228 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-03-25 02:17:08.403238 | orchestrator | Wednesday 25 March 2026 02:17:03 +0000 (0:00:00.145) 0:00:01.444 ******* 2026-03-25 02:17:08.403248 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:17:08.403258 | 
orchestrator | 2026-03-25 02:17:08.403269 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-03-25 02:17:08.403331 | orchestrator | Wednesday 25 March 2026 02:17:03 +0000 (0:00:00.121) 0:00:01.566 ******* 2026-03-25 02:17:08.403341 | orchestrator | changed: [testbed-node-1] 2026-03-25 02:17:08.403352 | orchestrator | 2026-03-25 02:17:08.403362 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-03-25 02:17:08.403386 | orchestrator | Wednesday 25 March 2026 02:17:04 +0000 (0:00:00.695) 0:00:02.261 ******* 2026-03-25 02:17:08.403397 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:17:08.403407 | orchestrator | 2026-03-25 02:17:08.403418 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-03-25 02:17:08.403425 | orchestrator | 2026-03-25 02:17:08.403431 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-03-25 02:17:08.403437 | orchestrator | Wednesday 25 March 2026 02:17:04 +0000 (0:00:00.119) 0:00:02.381 ******* 2026-03-25 02:17:08.403444 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:17:08.403450 | orchestrator | 2026-03-25 02:17:08.403456 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-03-25 02:17:08.403462 | orchestrator | Wednesday 25 March 2026 02:17:04 +0000 (0:00:00.248) 0:00:02.630 ******* 2026-03-25 02:17:08.403469 | orchestrator | changed: [testbed-node-2] 2026-03-25 02:17:08.403475 | orchestrator | 2026-03-25 02:17:08.403482 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-03-25 02:17:08.403488 | orchestrator | Wednesday 25 March 2026 02:17:05 +0000 (0:00:00.628) 0:00:03.258 ******* 2026-03-25 02:17:08.403494 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:17:08.403500 | orchestrator | 2026-03-25 02:17:08.403507 | 
orchestrator | PLAY [Reboot systems] ********************************************************** 2026-03-25 02:17:08.403513 | orchestrator | 2026-03-25 02:17:08.403519 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-03-25 02:17:08.403525 | orchestrator | Wednesday 25 March 2026 02:17:05 +0000 (0:00:00.140) 0:00:03.399 ******* 2026-03-25 02:17:08.403531 | orchestrator | skipping: [testbed-node-3] 2026-03-25 02:17:08.403538 | orchestrator | 2026-03-25 02:17:08.403544 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-03-25 02:17:08.403550 | orchestrator | Wednesday 25 March 2026 02:17:05 +0000 (0:00:00.122) 0:00:03.521 ******* 2026-03-25 02:17:08.403557 | orchestrator | changed: [testbed-node-3] 2026-03-25 02:17:08.403563 | orchestrator | 2026-03-25 02:17:08.403569 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-03-25 02:17:08.403576 | orchestrator | Wednesday 25 March 2026 02:17:06 +0000 (0:00:00.630) 0:00:04.152 ******* 2026-03-25 02:17:08.403582 | orchestrator | skipping: [testbed-node-3] 2026-03-25 02:17:08.403588 | orchestrator | 2026-03-25 02:17:08.403595 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-03-25 02:17:08.403601 | orchestrator | 2026-03-25 02:17:08.403607 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-03-25 02:17:08.403613 | orchestrator | Wednesday 25 March 2026 02:17:06 +0000 (0:00:00.123) 0:00:04.275 ******* 2026-03-25 02:17:08.403620 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:17:08.403626 | orchestrator | 2026-03-25 02:17:08.403632 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-03-25 02:17:08.403650 | orchestrator | Wednesday 25 March 2026 02:17:06 +0000 (0:00:00.127) 0:00:04.402 ******* 2026-03-25 
02:17:08.403660 | orchestrator | changed: [testbed-node-4] 2026-03-25 02:17:08.403670 | orchestrator | 2026-03-25 02:17:08.403679 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-03-25 02:17:08.403690 | orchestrator | Wednesday 25 March 2026 02:17:07 +0000 (0:00:00.668) 0:00:05.071 ******* 2026-03-25 02:17:08.403700 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:17:08.403711 | orchestrator | 2026-03-25 02:17:08.403720 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-03-25 02:17:08.403726 | orchestrator | 2026-03-25 02:17:08.403733 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-03-25 02:17:08.403739 | orchestrator | Wednesday 25 March 2026 02:17:07 +0000 (0:00:00.122) 0:00:05.194 ******* 2026-03-25 02:17:08.403745 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:17:08.403751 | orchestrator | 2026-03-25 02:17:08.403758 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-03-25 02:17:08.403764 | orchestrator | Wednesday 25 March 2026 02:17:07 +0000 (0:00:00.119) 0:00:05.314 ******* 2026-03-25 02:17:08.403770 | orchestrator | changed: [testbed-node-5] 2026-03-25 02:17:08.403777 | orchestrator | 2026-03-25 02:17:08.403783 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-03-25 02:17:08.403789 | orchestrator | Wednesday 25 March 2026 02:17:07 +0000 (0:00:00.651) 0:00:05.966 ******* 2026-03-25 02:17:08.403811 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:17:08.403818 | orchestrator | 2026-03-25 02:17:08.403825 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-25 02:17:08.403832 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-25 02:17:08.403840 | orchestrator | 
testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-25 02:17:08.403846 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-25 02:17:08.403853 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-25 02:17:08.403859 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-25 02:17:08.403866 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-25 02:17:08.403872 | orchestrator | 2026-03-25 02:17:08.403878 | orchestrator | 2026-03-25 02:17:08.403884 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-25 02:17:08.403891 | orchestrator | Wednesday 25 March 2026 02:17:07 +0000 (0:00:00.031) 0:00:05.998 ******* 2026-03-25 02:17:08.403902 | orchestrator | =============================================================================== 2026-03-25 02:17:08.403909 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.21s 2026-03-25 02:17:08.403915 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.86s 2026-03-25 02:17:08.403922 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.68s 2026-03-25 02:17:08.787195 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes 2026-03-25 02:17:21.040930 | orchestrator | 2026-03-25 02:17:21 | INFO  | Task a5e76f3b-91da-476c-aaa8-1094d0dea56d (wait-for-connection) was prepared for execution. 2026-03-25 02:17:21.041041 | orchestrator | 2026-03-25 02:17:21 | INFO  | It takes a moment until task a5e76f3b-91da-476c-aaa8-1094d0dea56d (wait-for-connection) has been started and output is visible here. 
2026-03-25 02:17:37.618390 | orchestrator | 2026-03-25 02:17:37.618512 | orchestrator | PLAY [Wait until remote systems are reachable] ********************************* 2026-03-25 02:17:37.618531 | orchestrator | 2026-03-25 02:17:37.618545 | orchestrator | TASK [Wait until remote system is reachable] *********************************** 2026-03-25 02:17:37.618557 | orchestrator | Wednesday 25 March 2026 02:17:25 +0000 (0:00:00.265) 0:00:00.265 ******* 2026-03-25 02:17:37.618568 | orchestrator | ok: [testbed-node-0] 2026-03-25 02:17:37.618581 | orchestrator | ok: [testbed-node-2] 2026-03-25 02:17:37.618592 | orchestrator | ok: [testbed-node-1] 2026-03-25 02:17:37.618603 | orchestrator | ok: [testbed-node-3] 2026-03-25 02:17:37.618614 | orchestrator | ok: [testbed-node-4] 2026-03-25 02:17:37.618625 | orchestrator | ok: [testbed-node-5] 2026-03-25 02:17:37.618635 | orchestrator | 2026-03-25 02:17:37.618647 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-25 02:17:37.618659 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-25 02:17:37.618672 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-25 02:17:37.618683 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-25 02:17:37.618695 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-25 02:17:37.618706 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-25 02:17:37.618717 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-25 02:17:37.618728 | orchestrator | 2026-03-25 02:17:37.618739 | orchestrator | 2026-03-25 02:17:37.618750 | orchestrator | TASKS RECAP 
******************************************************************** 2026-03-25 02:17:37.618761 | orchestrator | Wednesday 25 March 2026 02:17:37 +0000 (0:00:11.593) 0:00:11.859 ******* 2026-03-25 02:17:37.618772 | orchestrator | =============================================================================== 2026-03-25 02:17:37.618784 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.59s 2026-03-25 02:17:37.966955 | orchestrator | + osism apply hddtemp 2026-03-25 02:17:50.162177 | orchestrator | 2026-03-25 02:17:50 | INFO  | Task 5a7a33bc-f5cd-4f83-9a8f-22de82873e4b (hddtemp) was prepared for execution. 2026-03-25 02:17:50.162254 | orchestrator | 2026-03-25 02:17:50 | INFO  | It takes a moment until task 5a7a33bc-f5cd-4f83-9a8f-22de82873e4b (hddtemp) has been started and output is visible here. 2026-03-25 02:18:18.134605 | orchestrator | 2026-03-25 02:18:18.134695 | orchestrator | PLAY [Apply role hddtemp] ****************************************************** 2026-03-25 02:18:18.134706 | orchestrator | 2026-03-25 02:18:18.134714 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] ***** 2026-03-25 02:18:18.134721 | orchestrator | Wednesday 25 March 2026 02:17:54 +0000 (0:00:00.283) 0:00:00.283 ******* 2026-03-25 02:18:18.134728 | orchestrator | ok: [testbed-manager] 2026-03-25 02:18:18.134736 | orchestrator | ok: [testbed-node-0] 2026-03-25 02:18:18.134742 | orchestrator | ok: [testbed-node-1] 2026-03-25 02:18:18.134749 | orchestrator | ok: [testbed-node-2] 2026-03-25 02:18:18.134758 | orchestrator | ok: [testbed-node-3] 2026-03-25 02:18:18.134768 | orchestrator | ok: [testbed-node-4] 2026-03-25 02:18:18.134778 | orchestrator | ok: [testbed-node-5] 2026-03-25 02:18:18.134789 | orchestrator | 2026-03-25 02:18:18.134800 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] **** 2026-03-25 02:18:18.134810 | orchestrator | Wednesday 25 March 2026 
02:17:55 +0000 (0:00:00.784) 0:00:01.068 ******* 2026-03-25 02:18:18.134823 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-25 02:18:18.134861 | orchestrator | 2026-03-25 02:18:18.134886 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] ************************* 2026-03-25 02:18:18.134906 | orchestrator | Wednesday 25 March 2026 02:17:56 +0000 (0:00:01.326) 0:00:02.395 ******* 2026-03-25 02:18:18.134918 | orchestrator | ok: [testbed-manager] 2026-03-25 02:18:18.134927 | orchestrator | ok: [testbed-node-1] 2026-03-25 02:18:18.134936 | orchestrator | ok: [testbed-node-0] 2026-03-25 02:18:18.134946 | orchestrator | ok: [testbed-node-2] 2026-03-25 02:18:18.134956 | orchestrator | ok: [testbed-node-3] 2026-03-25 02:18:18.134967 | orchestrator | ok: [testbed-node-4] 2026-03-25 02:18:18.134978 | orchestrator | ok: [testbed-node-5] 2026-03-25 02:18:18.134988 | orchestrator | 2026-03-25 02:18:18.134998 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] ***************** 2026-03-25 02:18:18.135021 | orchestrator | Wednesday 25 March 2026 02:17:58 +0000 (0:00:01.934) 0:00:04.329 ******* 2026-03-25 02:18:18.135028 | orchestrator | changed: [testbed-node-0] 2026-03-25 02:18:18.135035 | orchestrator | changed: [testbed-manager] 2026-03-25 02:18:18.135041 | orchestrator | changed: [testbed-node-1] 2026-03-25 02:18:18.135047 | orchestrator | changed: [testbed-node-2] 2026-03-25 02:18:18.135053 | orchestrator | changed: [testbed-node-3] 2026-03-25 02:18:18.135059 | orchestrator | changed: [testbed-node-4] 2026-03-25 02:18:18.135065 | orchestrator | changed: [testbed-node-5] 2026-03-25 02:18:18.135071 | orchestrator | 2026-03-25 02:18:18.135077 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is 
available] ********* 2026-03-25 02:18:18.135084 | orchestrator | Wednesday 25 March 2026 02:18:00 +0000 (0:00:01.304) 0:00:05.634 ******* 2026-03-25 02:18:18.135090 | orchestrator | ok: [testbed-node-1] 2026-03-25 02:18:18.135096 | orchestrator | ok: [testbed-node-2] 2026-03-25 02:18:18.135102 | orchestrator | ok: [testbed-node-3] 2026-03-25 02:18:18.135108 | orchestrator | ok: [testbed-node-4] 2026-03-25 02:18:18.135114 | orchestrator | ok: [testbed-node-5] 2026-03-25 02:18:18.135120 | orchestrator | ok: [testbed-manager] 2026-03-25 02:18:18.135127 | orchestrator | ok: [testbed-node-0] 2026-03-25 02:18:18.135133 | orchestrator | 2026-03-25 02:18:18.135139 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] ******************* 2026-03-25 02:18:18.135145 | orchestrator | Wednesday 25 March 2026 02:18:01 +0000 (0:00:01.771) 0:00:07.405 ******* 2026-03-25 02:18:18.135151 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:18:18.135158 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:18:18.135168 | orchestrator | changed: [testbed-manager] 2026-03-25 02:18:18.135178 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:18:18.135188 | orchestrator | skipping: [testbed-node-3] 2026-03-25 02:18:18.135198 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:18:18.135208 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:18:18.135219 | orchestrator | 2026-03-25 02:18:18.135230 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] ***************************** 2026-03-25 02:18:18.135240 | orchestrator | Wednesday 25 March 2026 02:18:02 +0000 (0:00:00.899) 0:00:08.305 ******* 2026-03-25 02:18:18.135251 | orchestrator | changed: [testbed-manager] 2026-03-25 02:18:18.135258 | orchestrator | changed: [testbed-node-0] 2026-03-25 02:18:18.135264 | orchestrator | changed: [testbed-node-2] 2026-03-25 02:18:18.135270 | orchestrator | changed: [testbed-node-3] 2026-03-25 02:18:18.135276 | orchestrator | changed: 
[testbed-node-1] 2026-03-25 02:18:18.135282 | orchestrator | changed: [testbed-node-4] 2026-03-25 02:18:18.135288 | orchestrator | changed: [testbed-node-5] 2026-03-25 02:18:18.135294 | orchestrator | 2026-03-25 02:18:18.135300 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] **** 2026-03-25 02:18:18.135307 | orchestrator | Wednesday 25 March 2026 02:18:14 +0000 (0:00:11.383) 0:00:19.689 ******* 2026-03-25 02:18:18.135313 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-25 02:18:18.135403 | orchestrator | 2026-03-25 02:18:18.135410 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] ********************** 2026-03-25 02:18:18.135416 | orchestrator | Wednesday 25 March 2026 02:18:15 +0000 (0:00:01.433) 0:00:21.123 ******* 2026-03-25 02:18:18.135423 | orchestrator | changed: [testbed-node-0] 2026-03-25 02:18:18.135429 | orchestrator | changed: [testbed-node-1] 2026-03-25 02:18:18.135436 | orchestrator | changed: [testbed-node-2] 2026-03-25 02:18:18.135442 | orchestrator | changed: [testbed-manager] 2026-03-25 02:18:18.135448 | orchestrator | changed: [testbed-node-3] 2026-03-25 02:18:18.135454 | orchestrator | changed: [testbed-node-4] 2026-03-25 02:18:18.135460 | orchestrator | changed: [testbed-node-5] 2026-03-25 02:18:18.135467 | orchestrator | 2026-03-25 02:18:18.135473 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-25 02:18:18.135479 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-25 02:18:18.135504 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-25 02:18:18.135512 | orchestrator | testbed-node-1 : 
ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-25 02:18:18.135518 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-25 02:18:18.135524 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-25 02:18:18.135531 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-25 02:18:18.135537 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-25 02:18:18.135543 | orchestrator | 2026-03-25 02:18:18.135549 | orchestrator | 2026-03-25 02:18:18.135555 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-25 02:18:18.135561 | orchestrator | Wednesday 25 March 2026 02:18:17 +0000 (0:00:02.022) 0:00:23.145 ******* 2026-03-25 02:18:18.135568 | orchestrator | =============================================================================== 2026-03-25 02:18:18.135574 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 11.38s 2026-03-25 02:18:18.135580 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 2.02s 2026-03-25 02:18:18.135586 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 1.93s 2026-03-25 02:18:18.135598 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.77s 2026-03-25 02:18:18.135604 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.43s 2026-03-25 02:18:18.135611 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.33s 2026-03-25 02:18:18.135617 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.30s 2026-03-25 02:18:18.135623 | orchestrator | osism.services.hddtemp : Load 
Kernel Module drivetemp ------------------- 0.90s 2026-03-25 02:18:18.135629 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.78s 2026-03-25 02:18:18.531859 | orchestrator | ++ semver 9.5.0 7.1.1 2026-03-25 02:18:18.578511 | orchestrator | + [[ 1 -ge 0 ]] 2026-03-25 02:18:18.578592 | orchestrator | + sudo systemctl restart manager.service 2026-03-25 02:18:36.465451 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-03-25 02:18:36.465560 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2026-03-25 02:18:36.465572 | orchestrator | + local max_attempts=60 2026-03-25 02:18:36.465580 | orchestrator | + local name=ceph-ansible 2026-03-25 02:18:36.465588 | orchestrator | + local attempt_num=1 2026-03-25 02:18:36.465596 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-25 02:18:36.493010 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-25 02:18:36.493086 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-25 02:18:36.493092 | orchestrator | + sleep 5 2026-03-25 02:18:41.497458 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-25 02:18:41.561296 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-25 02:18:41.561400 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-25 02:18:41.561409 | orchestrator | + sleep 5 2026-03-25 02:18:46.565154 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-25 02:18:46.591748 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-25 02:18:46.591849 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-25 02:18:46.591864 | orchestrator | + sleep 5 2026-03-25 02:18:51.594290 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-25 02:18:51.622515 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-25 02:18:51.622586 | orchestrator | 
+ (( attempt_num++ == max_attempts )) 2026-03-25 02:18:51.622592 | orchestrator | + sleep 5 2026-03-25 02:18:56.626836 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-25 02:18:56.661848 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-25 02:18:56.662155 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-25 02:18:56.662179 | orchestrator | + sleep 5 2026-03-25 02:19:01.668141 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-25 02:19:01.710813 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-25 02:19:01.710934 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-25 02:19:01.710950 | orchestrator | + sleep 5 2026-03-25 02:19:06.716904 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-25 02:19:06.762670 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-25 02:19:06.762770 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-25 02:19:06.762783 | orchestrator | + sleep 5 2026-03-25 02:19:11.766687 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-25 02:19:11.832162 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-25 02:19:11.832271 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-25 02:19:11.832288 | orchestrator | + sleep 5 2026-03-25 02:19:16.835817 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-25 02:19:16.881846 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-25 02:19:16.881967 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-25 02:19:16.881982 | orchestrator | + sleep 5 2026-03-25 02:19:21.885427 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-25 02:19:21.918728 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-25 02:19:21.918825 | orchestrator | + (( attempt_num++ == 
max_attempts )) 2026-03-25 02:19:21.918836 | orchestrator | + sleep 5 2026-03-25 02:19:26.922253 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-25 02:19:26.954946 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-25 02:19:26.955044 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-25 02:19:26.955055 | orchestrator | + sleep 5 2026-03-25 02:19:31.959944 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-25 02:19:32.004951 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-25 02:19:32.005076 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-25 02:19:32.005101 | orchestrator | + sleep 5 2026-03-25 02:19:37.009431 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-25 02:19:37.047672 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-25 02:19:37.047780 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-25 02:19:37.047800 | orchestrator | + sleep 5 2026-03-25 02:19:42.051999 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-25 02:19:42.087627 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-25 02:19:42.087694 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-03-25 02:19:42.087701 | orchestrator | + local max_attempts=60 2026-03-25 02:19:42.087706 | orchestrator | + local name=kolla-ansible 2026-03-25 02:19:42.087710 | orchestrator | + local attempt_num=1 2026-03-25 02:19:42.088855 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-03-25 02:19:42.119507 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-25 02:19:42.119581 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2026-03-25 02:19:42.119620 | orchestrator | + local max_attempts=60 2026-03-25 02:19:42.119629 | orchestrator | + local name=osism-ansible 2026-03-25 02:19:42.119637 | 
orchestrator | + local attempt_num=1 2026-03-25 02:19:42.120668 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-03-25 02:19:42.159434 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-25 02:19:42.159504 | orchestrator | + [[ true == \t\r\u\e ]] 2026-03-25 02:19:42.159510 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2026-03-25 02:19:42.355688 | orchestrator | ARA in ceph-ansible already disabled. 2026-03-25 02:19:42.522732 | orchestrator | ARA in kolla-ansible already disabled. 2026-03-25 02:19:42.681729 | orchestrator | ARA in osism-ansible already disabled. 2026-03-25 02:19:42.849331 | orchestrator | ARA in osism-kubernetes already disabled. 2026-03-25 02:19:42.850597 | orchestrator | + osism apply gather-facts 2026-03-25 02:19:55.354518 | orchestrator | 2026-03-25 02:19:55 | INFO  | Task 2e891843-cfcf-49a8-96bc-58c432cb92c2 (gather-facts) was prepared for execution. 2026-03-25 02:19:55.354612 | orchestrator | 2026-03-25 02:19:55 | INFO  | It takes a moment until task 2e891843-cfcf-49a8-96bc-58c432cb92c2 (gather-facts) has been started and output is visible here. 
2026-03-25 02:20:09.550509 | orchestrator | 2026-03-25 02:20:09.550591 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-03-25 02:20:09.550598 | orchestrator | 2026-03-25 02:20:09.550611 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-03-25 02:20:09.550617 | orchestrator | Wednesday 25 March 2026 02:20:00 +0000 (0:00:00.250) 0:00:00.250 ******* 2026-03-25 02:20:09.550622 | orchestrator | ok: [testbed-node-0] 2026-03-25 02:20:09.550629 | orchestrator | ok: [testbed-node-1] 2026-03-25 02:20:09.550641 | orchestrator | ok: [testbed-manager] 2026-03-25 02:20:09.550646 | orchestrator | ok: [testbed-node-2] 2026-03-25 02:20:09.550650 | orchestrator | ok: [testbed-node-3] 2026-03-25 02:20:09.550655 | orchestrator | ok: [testbed-node-4] 2026-03-25 02:20:09.550660 | orchestrator | ok: [testbed-node-5] 2026-03-25 02:20:09.550664 | orchestrator | 2026-03-25 02:20:09.550669 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-03-25 02:20:09.550673 | orchestrator | 2026-03-25 02:20:09.550678 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-03-25 02:20:09.550682 | orchestrator | Wednesday 25 March 2026 02:20:08 +0000 (0:00:08.154) 0:00:08.404 ******* 2026-03-25 02:20:09.550687 | orchestrator | skipping: [testbed-manager] 2026-03-25 02:20:09.550693 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:20:09.550697 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:20:09.550702 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:20:09.550706 | orchestrator | skipping: [testbed-node-3] 2026-03-25 02:20:09.550711 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:20:09.550715 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:20:09.550720 | orchestrator | 2026-03-25 02:20:09.550724 | orchestrator | PLAY RECAP 
********************************************************************* 2026-03-25 02:20:09.550729 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-25 02:20:09.550740 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-25 02:20:09.550745 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-25 02:20:09.550749 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-25 02:20:09.550754 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-25 02:20:09.550758 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-25 02:20:09.550782 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-25 02:20:09.550787 | orchestrator | 2026-03-25 02:20:09.550791 | orchestrator | 2026-03-25 02:20:09.550795 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-25 02:20:09.550800 | orchestrator | Wednesday 25 March 2026 02:20:09 +0000 (0:00:00.679) 0:00:09.084 ******* 2026-03-25 02:20:09.550804 | orchestrator | =============================================================================== 2026-03-25 02:20:09.550809 | orchestrator | Gathers facts about hosts ----------------------------------------------- 8.15s 2026-03-25 02:20:09.550814 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.68s 2026-03-25 02:20:09.960046 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2026-03-25 02:20:09.975110 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2026-03-25 
02:20:09.995335 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook 2026-03-25 02:20:10.009057 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure 2026-03-25 02:20:10.024478 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack 2026-03-25 02:20:10.042582 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/320-openstack-minimal.sh /usr/local/bin/deploy-openstack-minimal 2026-03-25 02:20:10.061179 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring 2026-03-25 02:20:10.078167 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes 2026-03-25 02:20:10.094840 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi 2026-03-25 02:20:10.111544 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade-manager.sh /usr/local/bin/upgrade-manager 2026-03-25 02:20:10.129830 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible 2026-03-25 02:20:10.151207 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook 2026-03-25 02:20:10.169645 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure 2026-03-25 02:20:10.189550 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack 2026-03-25 02:20:10.204687 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/320-openstack-minimal.sh /usr/local/bin/upgrade-openstack-minimal 2026-03-25 02:20:10.222092 | orchestrator | + sudo ln -sf 
/opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring 2026-03-25 02:20:10.235833 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes 2026-03-25 02:20:10.250117 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi 2026-03-25 02:20:10.268562 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack 2026-03-25 02:20:10.293006 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia 2026-03-25 02:20:10.308912 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi 2026-03-25 02:20:10.321146 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry 2026-03-25 02:20:10.334526 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images 2026-03-25 02:20:10.347287 | orchestrator | + [[ false == \t\r\u\e ]] 2026-03-25 02:20:10.588718 | orchestrator | ok: Runtime: 0:25:01.287108 2026-03-25 02:20:10.709338 | 2026-03-25 02:20:10.709517 | TASK [Deploy services] 2026-03-25 02:20:11.435509 | orchestrator | 2026-03-25 02:20:11.435758 | orchestrator | # DEPLOY SERVICES 2026-03-25 02:20:11.435777 | orchestrator | 2026-03-25 02:20:11.435811 | orchestrator | + set -e 2026-03-25 02:20:11.435821 | orchestrator | + echo 2026-03-25 02:20:11.435828 | orchestrator | + echo '# DEPLOY SERVICES' 2026-03-25 02:20:11.435836 | orchestrator | + echo 2026-03-25 02:20:11.435863 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-03-25 02:20:11.435875 | orchestrator | ++ export INTERACTIVE=false 2026-03-25 02:20:11.435884 | orchestrator | ++ INTERACTIVE=false 2026-03-25 
02:20:11.435903 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-25 02:20:11.435922 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-03-25 02:20:11.435927 | orchestrator | + source /opt/manager-vars.sh 2026-03-25 02:20:11.435933 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-03-25 02:20:11.435937 | orchestrator | ++ NUMBER_OF_NODES=6 2026-03-25 02:20:11.435944 | orchestrator | ++ export CEPH_VERSION=reef 2026-03-25 02:20:11.435948 | orchestrator | ++ CEPH_VERSION=reef 2026-03-25 02:20:11.435953 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-03-25 02:20:11.435958 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-03-25 02:20:11.435964 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-03-25 02:20:11.435969 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-03-25 02:20:11.435973 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-03-25 02:20:11.435978 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-03-25 02:20:11.435981 | orchestrator | ++ export ARA=false 2026-03-25 02:20:11.435985 | orchestrator | ++ ARA=false 2026-03-25 02:20:11.435989 | orchestrator | ++ export DEPLOY_MODE=manager 2026-03-25 02:20:11.435993 | orchestrator | ++ DEPLOY_MODE=manager 2026-03-25 02:20:11.435997 | orchestrator | ++ export TEMPEST=false 2026-03-25 02:20:11.436000 | orchestrator | ++ TEMPEST=false 2026-03-25 02:20:11.436004 | orchestrator | ++ export IS_ZUUL=true 2026-03-25 02:20:11.436008 | orchestrator | ++ IS_ZUUL=true 2026-03-25 02:20:11.436012 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.44 2026-03-25 02:20:11.436016 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.44 2026-03-25 02:20:11.436020 | orchestrator | ++ export EXTERNAL_API=false 2026-03-25 02:20:11.436023 | orchestrator | ++ EXTERNAL_API=false 2026-03-25 02:20:11.436027 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-03-25 02:20:11.436031 | orchestrator | ++ IMAGE_USER=ubuntu 2026-03-25 02:20:11.436034 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-03-25 
02:20:11.436038 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-03-25 02:20:11.436043 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-03-25 02:20:11.436052 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-03-25 02:20:11.436057 | orchestrator | + sh -c /opt/configuration/scripts/pull-images.sh 2026-03-25 02:20:11.444865 | orchestrator | 2026-03-25 02:20:11.444975 | orchestrator | # PULL IMAGES 2026-03-25 02:20:11.444991 | orchestrator | 2026-03-25 02:20:11.445035 | orchestrator | + set -e 2026-03-25 02:20:11.445051 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-03-25 02:20:11.445074 | orchestrator | ++ export INTERACTIVE=false 2026-03-25 02:20:11.445096 | orchestrator | ++ INTERACTIVE=false 2026-03-25 02:20:11.445111 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-25 02:20:11.445126 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-03-25 02:20:11.445141 | orchestrator | + source /opt/manager-vars.sh 2026-03-25 02:20:11.445156 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-03-25 02:20:11.445172 | orchestrator | ++ NUMBER_OF_NODES=6 2026-03-25 02:20:11.445188 | orchestrator | ++ export CEPH_VERSION=reef 2026-03-25 02:20:11.445204 | orchestrator | ++ CEPH_VERSION=reef 2026-03-25 02:20:11.445220 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-03-25 02:20:11.445236 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-03-25 02:20:11.445252 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-03-25 02:20:11.445268 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-03-25 02:20:11.445285 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-03-25 02:20:11.445308 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-03-25 02:20:11.445323 | orchestrator | ++ export ARA=false 2026-03-25 02:20:11.445338 | orchestrator | ++ ARA=false 2026-03-25 02:20:11.445357 | orchestrator | ++ export DEPLOY_MODE=manager 2026-03-25 02:20:11.445371 | orchestrator | ++ DEPLOY_MODE=manager 2026-03-25 02:20:11.445387 | orchestrator | ++ export TEMPEST=false 
2026-03-25 02:20:11.445465 | orchestrator | ++ TEMPEST=false 2026-03-25 02:20:11.445481 | orchestrator | ++ export IS_ZUUL=true 2026-03-25 02:20:11.445497 | orchestrator | ++ IS_ZUUL=true 2026-03-25 02:20:11.445514 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.44 2026-03-25 02:20:11.445534 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.44 2026-03-25 02:20:11.445548 | orchestrator | ++ export EXTERNAL_API=false 2026-03-25 02:20:11.445562 | orchestrator | ++ EXTERNAL_API=false 2026-03-25 02:20:11.445576 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-03-25 02:20:11.445590 | orchestrator | ++ IMAGE_USER=ubuntu 2026-03-25 02:20:11.445641 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-03-25 02:20:11.445659 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-03-25 02:20:11.445675 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-03-25 02:20:11.445691 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-03-25 02:20:11.445707 | orchestrator | + echo 2026-03-25 02:20:11.445722 | orchestrator | + echo '# PULL IMAGES' 2026-03-25 02:20:11.445738 | orchestrator | + echo 2026-03-25 02:20:11.445923 | orchestrator | ++ semver 9.5.0 7.0.0 2026-03-25 02:20:11.512335 | orchestrator | + [[ 1 -ge 0 ]] 2026-03-25 02:20:11.512452 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images 2026-03-25 02:20:13.684423 | orchestrator | 2026-03-25 02:20:13 | INFO  | Trying to run play pull-images in environment custom 2026-03-25 02:20:23.859159 | orchestrator | 2026-03-25 02:20:23 | INFO  | Task 7193b482-8a13-4f81-8030-967fe0f03e61 (pull-images) was prepared for execution. 2026-03-25 02:20:23.859256 | orchestrator | 2026-03-25 02:20:23 | INFO  | Task 7193b482-8a13-4f81-8030-967fe0f03e61 is running in background. No more output. Check ARA for logs. 
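The gate visible above (`semver 9.5.0 7.0.0` followed by `[[ 1 -ge 0 ]]`) suggests the `semver` helper prints 1/0/-1 depending on whether the first version is newer than, equal to, or older than the second, and the script only proceeds when the manager version meets the minimum. A minimal sketch of that pattern, assuming GNU `sort -V` is available; `compare_semver` is an illustrative stand-in, not the actual `semver` binary used by the testbed scripts:

```shell
#!/bin/sh
# Illustrative stand-in for the `semver` helper seen in the log:
# prints 1 if $1 > $2, 0 if equal, -1 if $1 < $2.
compare_semver() {
    if [ "$1" = "$2" ]; then
        echo 0
        return
    fi
    # sort -V orders version strings; the first line is the lower version.
    lower=$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n 1)
    if [ "$lower" = "$2" ]; then
        echo 1
    else
        echo -1
    fi
}

# Same gate shape as the log: proceed when MANAGER_VERSION >= 7.0.0.
result=$(compare_semver 9.5.0 7.0.0)
if [ "$result" -ge 0 ]; then
    echo "manager version is new enough, pulling images"
fi
```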
2026-03-25 02:20:24.283414 | orchestrator | + sh -c /opt/configuration/scripts/deploy/001-helpers.sh 2026-03-25 02:20:36.705302 | orchestrator | 2026-03-25 02:20:36 | INFO  | Task 59ff1aa3-4404-46db-aa78-66b3a6cee968 (cgit) was prepared for execution. 2026-03-25 02:20:36.705516 | orchestrator | 2026-03-25 02:20:36 | INFO  | Task 59ff1aa3-4404-46db-aa78-66b3a6cee968 is running in background. No more output. Check ARA for logs. 2026-03-25 02:20:49.904672 | orchestrator | 2026-03-25 02:20:49 | INFO  | Task a1989027-31d8-469f-a279-b3bfc45d708f (dotfiles) was prepared for execution. 2026-03-25 02:20:49.904829 | orchestrator | 2026-03-25 02:20:49 | INFO  | Task a1989027-31d8-469f-a279-b3bfc45d708f is running in background. No more output. Check ARA for logs. 2026-03-25 02:21:02.853283 | orchestrator | 2026-03-25 02:21:02 | INFO  | Task 13351bd8-bc0e-40f8-a2a6-decc1ad32a06 (homer) was prepared for execution. 2026-03-25 02:21:02.853363 | orchestrator | 2026-03-25 02:21:02 | INFO  | Task 13351bd8-bc0e-40f8-a2a6-decc1ad32a06 is running in background. No more output. Check ARA for logs. 2026-03-25 02:21:15.933824 | orchestrator | 2026-03-25 02:21:15 | INFO  | Task 3e21e212-15e7-4b44-99f9-a5f7db286409 (phpmyadmin) was prepared for execution. 2026-03-25 02:21:15.933908 | orchestrator | 2026-03-25 02:21:15 | INFO  | Task 3e21e212-15e7-4b44-99f9-a5f7db286409 is running in background. No more output. Check ARA for logs. 2026-03-25 02:21:28.931813 | orchestrator | 2026-03-25 02:21:28 | INFO  | Task 0f1dd45f-fb13-4176-af96-242f451b93e9 (sosreport) was prepared for execution. 2026-03-25 02:21:28.931907 | orchestrator | 2026-03-25 02:21:28 | INFO  | Task 0f1dd45f-fb13-4176-af96-242f451b93e9 is running in background. No more output. Check ARA for logs. 
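The contents of `001-helpers.sh` are not shown in the log, but the task sequence above (cgit, dotfiles, homer, phpmyadmin, sosreport, each prepared roughly 13 seconds apart) is consistent with a simple loop that queues one helper service after another. A hedged sketch of that shape; `apply_service` is a stub standing in for the real `osism apply` call:

```shell
#!/bin/sh
# Hypothetical shape of 001-helpers.sh, inferred only from the observed
# task order; apply_service is a stub in place of `osism apply`.
apply_service() {
    echo "queued $1"
}

for service in cgit dotfiles homer phpmyadmin sosreport; do
    apply_service "$service"
done
```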
2026-03-25 02:21:29.442904 | orchestrator | + sh -c /opt/configuration/scripts/deploy/500-kubernetes.sh 2026-03-25 02:21:29.452811 | orchestrator | + set -e 2026-03-25 02:21:29.452898 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-03-25 02:21:29.452908 | orchestrator | ++ export INTERACTIVE=false 2026-03-25 02:21:29.452919 | orchestrator | ++ INTERACTIVE=false 2026-03-25 02:21:29.452931 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-25 02:21:29.452938 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-03-25 02:21:29.452945 | orchestrator | + source /opt/manager-vars.sh 2026-03-25 02:21:29.452952 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-03-25 02:21:29.452959 | orchestrator | ++ NUMBER_OF_NODES=6 2026-03-25 02:21:29.452966 | orchestrator | ++ export CEPH_VERSION=reef 2026-03-25 02:21:29.452973 | orchestrator | ++ CEPH_VERSION=reef 2026-03-25 02:21:29.452981 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-03-25 02:21:29.452989 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-03-25 02:21:29.452996 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-03-25 02:21:29.453003 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-03-25 02:21:29.453011 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-03-25 02:21:29.453018 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-03-25 02:21:29.453026 | orchestrator | ++ export ARA=false 2026-03-25 02:21:29.453033 | orchestrator | ++ ARA=false 2026-03-25 02:21:29.453040 | orchestrator | ++ export DEPLOY_MODE=manager 2026-03-25 02:21:29.453074 | orchestrator | ++ DEPLOY_MODE=manager 2026-03-25 02:21:29.453081 | orchestrator | ++ export TEMPEST=false 2026-03-25 02:21:29.453088 | orchestrator | ++ TEMPEST=false 2026-03-25 02:21:29.453095 | orchestrator | ++ export IS_ZUUL=true 2026-03-25 02:21:29.453102 | orchestrator | ++ IS_ZUUL=true 2026-03-25 02:21:29.453125 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.44 2026-03-25 02:21:29.453137 | orchestrator | ++ 
MANAGER_PUBLIC_IP_ADDRESS=81.163.192.44 2026-03-25 02:21:29.453145 | orchestrator | ++ export EXTERNAL_API=false 2026-03-25 02:21:29.453152 | orchestrator | ++ EXTERNAL_API=false 2026-03-25 02:21:29.453159 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-03-25 02:21:29.453166 | orchestrator | ++ IMAGE_USER=ubuntu 2026-03-25 02:21:29.453172 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-03-25 02:21:29.453176 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-03-25 02:21:29.453181 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-03-25 02:21:29.453188 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-03-25 02:21:29.453195 | orchestrator | ++ semver 9.5.0 8.0.3 2026-03-25 02:21:29.525959 | orchestrator | + [[ 1 -ge 0 ]] 2026-03-25 02:21:29.526073 | orchestrator | + osism apply frr 2026-03-25 02:21:42.796571 | orchestrator | 2026-03-25 02:21:42 | INFO  | Task bebc8955-f4fc-41a3-991a-280e2f1879ef (frr) was prepared for execution. 2026-03-25 02:21:42.796677 | orchestrator | 2026-03-25 02:21:42 | INFO  | It takes a moment until task bebc8955-f4fc-41a3-991a-280e2f1879ef (frr) has been started and output is visible here. 
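The scripts export `OSISM_APPLY_RETRY=1`, and the earlier `pull-images` invocation passed `-r 2`, which both point at a retry-on-failure convention around `osism apply`. A minimal sketch of such a wrapper, assuming the semantics are "re-run up to N attempts until exit status 0"; the `retry` function is illustrative, not part of the osism CLI:

```shell
#!/bin/sh
# Illustrative retry wrapper: run "$@" until it succeeds, giving up
# after $1 attempts. Mirrors the behaviour suggested by OSISM_APPLY_RETRY
# and the `-r 2` flag; not taken from the actual osism sources.
retry() {
    attempts=$1
    shift
    n=0
    until "$@"; do
        n=$((n + 1))
        if [ "$n" -ge "$attempts" ]; then
            return 1
        fi
        echo "attempt $n of $attempts failed, retrying" >&2
    done
    return 0
}

retry 2 true && echo "command succeeded"
```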
2026-03-25 02:22:22.220355 | orchestrator | 2026-03-25 02:22:22.220568 | orchestrator | PLAY [Apply role frr] ********************************************************** 2026-03-25 02:22:22.220587 | orchestrator | 2026-03-25 02:22:22.220596 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ******** 2026-03-25 02:22:22.220611 | orchestrator | Wednesday 25 March 2026 02:21:49 +0000 (0:00:00.822) 0:00:00.822 ******* 2026-03-25 02:22:22.220620 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager 2026-03-25 02:22:22.220629 | orchestrator | 2026-03-25 02:22:22.220637 | orchestrator | TASK [osism.services.frr : Pin frr package version] **************************** 2026-03-25 02:22:22.220644 | orchestrator | Wednesday 25 March 2026 02:21:51 +0000 (0:00:01.123) 0:00:01.946 ******* 2026-03-25 02:22:22.220652 | orchestrator | changed: [testbed-manager] 2026-03-25 02:22:22.220660 | orchestrator | 2026-03-25 02:22:22.220668 | orchestrator | TASK [osism.services.frr : Install frr package] ******************************** 2026-03-25 02:22:22.220678 | orchestrator | Wednesday 25 March 2026 02:21:53 +0000 (0:00:02.262) 0:00:04.208 ******* 2026-03-25 02:22:22.220686 | orchestrator | changed: [testbed-manager] 2026-03-25 02:22:22.220693 | orchestrator | 2026-03-25 02:22:22.220701 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] ********************* 2026-03-25 02:22:22.220709 | orchestrator | Wednesday 25 March 2026 02:22:09 +0000 (0:00:15.638) 0:00:19.846 ******* 2026-03-25 02:22:22.220716 | orchestrator | ok: [testbed-manager] 2026-03-25 02:22:22.220725 | orchestrator | 2026-03-25 02:22:22.220732 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/daemons] ************************ 2026-03-25 02:22:22.220740 | orchestrator | Wednesday 25 March 2026 02:22:10 +0000 (0:00:01.413) 0:00:21.260 ******* 2026-03-25 
02:22:22.220747 | orchestrator | changed: [testbed-manager] 2026-03-25 02:22:22.220754 | orchestrator | 2026-03-25 02:22:22.220762 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ****************************** 2026-03-25 02:22:22.220769 | orchestrator | Wednesday 25 March 2026 02:22:11 +0000 (0:00:00.932) 0:00:22.193 ******* 2026-03-25 02:22:22.220777 | orchestrator | ok: [testbed-manager] 2026-03-25 02:22:22.220784 | orchestrator | 2026-03-25 02:22:22.220791 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] *** 2026-03-25 02:22:22.220800 | orchestrator | Wednesday 25 March 2026 02:22:12 +0000 (0:00:01.239) 0:00:23.432 ******* 2026-03-25 02:22:22.220808 | orchestrator | skipping: [testbed-manager] 2026-03-25 02:22:22.220816 | orchestrator | 2026-03-25 02:22:22.220823 | orchestrator | TASK [osism.services.frr : Copy frr.conf file from the configuration repository] *** 2026-03-25 02:22:22.220831 | orchestrator | Wednesday 25 March 2026 02:22:12 +0000 (0:00:00.159) 0:00:23.592 ******* 2026-03-25 02:22:22.220858 | orchestrator | skipping: [testbed-manager] 2026-03-25 02:22:22.220867 | orchestrator | 2026-03-25 02:22:22.220874 | orchestrator | TASK [osism.services.frr : Copy default frr.conf file of type k3s_cilium] ****** 2026-03-25 02:22:22.220882 | orchestrator | Wednesday 25 March 2026 02:22:12 +0000 (0:00:00.143) 0:00:23.735 ******* 2026-03-25 02:22:22.220889 | orchestrator | changed: [testbed-manager] 2026-03-25 02:22:22.220896 | orchestrator | 2026-03-25 02:22:22.220903 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ****************************** 2026-03-25 02:22:22.220911 | orchestrator | Wednesday 25 March 2026 02:22:13 +0000 (0:00:01.034) 0:00:24.769 ******* 2026-03-25 02:22:22.220919 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1}) 2026-03-25 02:22:22.220926 | orchestrator | changed: [testbed-manager] => (item={'name': 
'net.ipv4.conf.all.send_redirects', 'value': 0}) 2026-03-25 02:22:22.220935 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0}) 2026-03-25 02:22:22.220942 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1}) 2026-03-25 02:22:22.220950 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1}) 2026-03-25 02:22:22.220957 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2}) 2026-03-25 02:22:22.220965 | orchestrator | 2026-03-25 02:22:22.220972 | orchestrator | TASK [osism.services.frr : Manage frr service] ********************************* 2026-03-25 02:22:22.220979 | orchestrator | Wednesday 25 March 2026 02:22:18 +0000 (0:00:04.121) 0:00:28.891 ******* 2026-03-25 02:22:22.220987 | orchestrator | ok: [testbed-manager] 2026-03-25 02:22:22.220994 | orchestrator | 2026-03-25 02:22:22.221001 | orchestrator | RUNNING HANDLER [osism.services.frr : Restart frr service] ********************* 2026-03-25 02:22:22.221009 | orchestrator | Wednesday 25 March 2026 02:22:20 +0000 (0:00:02.102) 0:00:30.994 ******* 2026-03-25 02:22:22.221016 | orchestrator | changed: [testbed-manager] 2026-03-25 02:22:22.221023 | orchestrator | 2026-03-25 02:22:22.221031 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-25 02:22:22.221039 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-25 02:22:22.221046 | orchestrator | 2026-03-25 02:22:22.221053 | orchestrator | 2026-03-25 02:22:22.221066 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-25 02:22:22.221074 | orchestrator | Wednesday 25 March 2026 02:22:21 +0000 (0:00:01.615) 0:00:32.610 ******* 2026-03-25 02:22:22.221081 | 
orchestrator | =============================================================================== 2026-03-25 02:22:22.221089 | orchestrator | osism.services.frr : Install frr package ------------------------------- 15.64s 2026-03-25 02:22:22.221096 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 4.12s 2026-03-25 02:22:22.221103 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 2.26s 2026-03-25 02:22:22.221111 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 2.10s 2026-03-25 02:22:22.221118 | orchestrator | osism.services.frr : Restart frr service -------------------------------- 1.62s 2026-03-25 02:22:22.221142 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 1.41s 2026-03-25 02:22:22.221150 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 1.24s 2026-03-25 02:22:22.221158 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 1.12s 2026-03-25 02:22:22.221165 | orchestrator | osism.services.frr : Copy default frr.conf file of type k3s_cilium ------ 1.03s 2026-03-25 02:22:22.221172 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 0.93s 2026-03-25 02:22:22.221179 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 0.16s 2026-03-25 02:22:22.221187 | orchestrator | osism.services.frr : Copy frr.conf file from the configuration repository --- 0.14s 2026-03-25 02:22:22.679661 | orchestrator | + osism apply kubernetes 2026-03-25 02:22:25.319106 | orchestrator | 2026-03-25 02:22:25 | INFO  | Task db30a982-7de4-4f4b-bfa4-233be8cd6332 (kubernetes) was prepared for execution. 
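The `osism.services.frr` play above applied six kernel parameters through its "Set sysctl parameters" task. Expressed as a plain sysctl.d fragment, the same settings (values copied from the log; the file name is illustrative) would be:

```ini
# /etc/sysctl.d/90-frr.conf (illustrative file name)
net.ipv4.ip_forward = 1
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.all.accept_redirects = 0
net.ipv4.fib_multipath_hash_policy = 1
net.ipv4.conf.default.ignore_routes_with_linkdown = 1
net.ipv4.conf.all.rp_filter = 2
```

Such a fragment would be picked up by `sysctl --system`; here the role applies the values directly via Ansible instead.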
2026-03-25 02:22:25.319176 | orchestrator | 2026-03-25 02:22:25 | INFO  | It takes a moment until task db30a982-7de4-4f4b-bfa4-233be8cd6332 (kubernetes) has been started and output is visible here. 2026-03-25 02:22:54.430252 | orchestrator | 2026-03-25 02:22:54.430344 | orchestrator | PLAY [Prepare all k3s nodes] *************************************************** 2026-03-25 02:22:54.430354 | orchestrator | 2026-03-25 02:22:54.430361 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] *** 2026-03-25 02:22:54.430368 | orchestrator | Wednesday 25 March 2026 02:22:31 +0000 (0:00:00.225) 0:00:00.225 ******* 2026-03-25 02:22:54.430374 | orchestrator | ok: [testbed-node-4] 2026-03-25 02:22:54.430381 | orchestrator | ok: [testbed-node-3] 2026-03-25 02:22:54.430386 | orchestrator | ok: [testbed-node-5] 2026-03-25 02:22:54.430392 | orchestrator | ok: [testbed-node-0] 2026-03-25 02:22:54.430398 | orchestrator | ok: [testbed-node-1] 2026-03-25 02:22:54.430404 | orchestrator | ok: [testbed-node-2] 2026-03-25 02:22:54.430410 | orchestrator | 2026-03-25 02:22:54.430416 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] ************************** 2026-03-25 02:22:54.430421 | orchestrator | Wednesday 25 March 2026 02:22:32 +0000 (0:00:00.792) 0:00:01.018 ******* 2026-03-25 02:22:54.430428 | orchestrator | skipping: [testbed-node-3] 2026-03-25 02:22:54.430434 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:22:54.430440 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:22:54.430445 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:22:54.430451 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:22:54.430457 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:22:54.430529 | orchestrator | 2026-03-25 02:22:54.430536 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ****************************** 2026-03-25 02:22:54.430545 | orchestrator | Wednesday 25 March 2026 
02:22:33 +0000 (0:00:00.731) 0:00:01.749 ******* 2026-03-25 02:22:54.430551 | orchestrator | skipping: [testbed-node-3] 2026-03-25 02:22:54.430557 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:22:54.430562 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:22:54.430568 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:22:54.430574 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:22:54.430580 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:22:54.430585 | orchestrator | 2026-03-25 02:22:54.430591 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] ************************************* 2026-03-25 02:22:54.430597 | orchestrator | Wednesday 25 March 2026 02:22:34 +0000 (0:00:01.275) 0:00:03.024 ******* 2026-03-25 02:22:54.430603 | orchestrator | changed: [testbed-node-3] 2026-03-25 02:22:54.430608 | orchestrator | changed: [testbed-node-4] 2026-03-25 02:22:54.430614 | orchestrator | changed: [testbed-node-5] 2026-03-25 02:22:54.430624 | orchestrator | changed: [testbed-node-0] 2026-03-25 02:22:54.430629 | orchestrator | changed: [testbed-node-1] 2026-03-25 02:22:54.430635 | orchestrator | changed: [testbed-node-2] 2026-03-25 02:22:54.430641 | orchestrator | 2026-03-25 02:22:54.430646 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] ************************************* 2026-03-25 02:22:54.430653 | orchestrator | Wednesday 25 March 2026 02:22:36 +0000 (0:00:01.743) 0:00:04.768 ******* 2026-03-25 02:22:54.430658 | orchestrator | changed: [testbed-node-3] 2026-03-25 02:22:54.430664 | orchestrator | changed: [testbed-node-4] 2026-03-25 02:22:54.430670 | orchestrator | changed: [testbed-node-5] 2026-03-25 02:22:54.430676 | orchestrator | changed: [testbed-node-0] 2026-03-25 02:22:54.430681 | orchestrator | changed: [testbed-node-1] 2026-03-25 02:22:54.430687 | orchestrator | changed: [testbed-node-2] 2026-03-25 02:22:54.430693 | orchestrator | 2026-03-25 02:22:54.430699 | orchestrator | TASK [k3s_prereq : 
Enable IPv6 router advertisements] ************************** 2026-03-25 02:22:54.430705 | orchestrator | Wednesday 25 March 2026 02:22:37 +0000 (0:00:01.305) 0:00:06.073 ******* 2026-03-25 02:22:54.430711 | orchestrator | changed: [testbed-node-3] 2026-03-25 02:22:54.430753 | orchestrator | changed: [testbed-node-4] 2026-03-25 02:22:54.430760 | orchestrator | changed: [testbed-node-5] 2026-03-25 02:22:54.430765 | orchestrator | changed: [testbed-node-0] 2026-03-25 02:22:54.430771 | orchestrator | changed: [testbed-node-1] 2026-03-25 02:22:54.430776 | orchestrator | changed: [testbed-node-2] 2026-03-25 02:22:54.430782 | orchestrator | 2026-03-25 02:22:54.430794 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] ******************* 2026-03-25 02:22:54.430800 | orchestrator | Wednesday 25 March 2026 02:22:38 +0000 (0:00:01.128) 0:00:07.202 ******* 2026-03-25 02:22:54.430806 | orchestrator | skipping: [testbed-node-3] 2026-03-25 02:22:54.430813 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:22:54.430819 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:22:54.430826 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:22:54.430859 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:22:54.430866 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:22:54.430880 | orchestrator | 2026-03-25 02:22:54.430887 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ****************************************** 2026-03-25 02:22:54.430894 | orchestrator | Wednesday 25 March 2026 02:22:39 +0000 (0:00:01.118) 0:00:08.320 ******* 2026-03-25 02:22:54.430900 | orchestrator | skipping: [testbed-node-3] 2026-03-25 02:22:54.430907 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:22:54.430913 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:22:54.430920 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:22:54.430926 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:22:54.430932 | orchestrator | 
skipping: [testbed-node-2] 2026-03-25 02:22:54.430939 | orchestrator | 2026-03-25 02:22:54.430945 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] ************** 2026-03-25 02:22:54.430952 | orchestrator | Wednesday 25 March 2026 02:22:40 +0000 (0:00:00.750) 0:00:09.071 ******* 2026-03-25 02:22:54.430958 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-25 02:22:54.430965 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-25 02:22:54.430971 | orchestrator | skipping: [testbed-node-3] 2026-03-25 02:22:54.430978 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-25 02:22:54.430984 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-25 02:22:54.430991 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:22:54.430997 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-25 02:22:54.431004 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-25 02:22:54.431010 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:22:54.431017 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-25 02:22:54.431037 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-25 02:22:54.431044 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:22:54.431051 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-25 02:22:54.431057 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-25 02:22:54.431064 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:22:54.431070 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-25 02:22:54.431076 | 
orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-25 02:22:54.431083 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:22:54.431089 | orchestrator | 2026-03-25 02:22:54.431096 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] ********************* 2026-03-25 02:22:54.431102 | orchestrator | Wednesday 25 March 2026 02:22:41 +0000 (0:00:00.886) 0:00:09.957 ******* 2026-03-25 02:22:54.431109 | orchestrator | skipping: [testbed-node-3] 2026-03-25 02:22:54.431115 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:22:54.431121 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:22:54.431133 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:22:54.431140 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:22:54.431147 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:22:54.431153 | orchestrator | 2026-03-25 02:22:54.431160 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] *** 2026-03-25 02:22:54.431168 | orchestrator | Wednesday 25 March 2026 02:22:43 +0000 (0:00:01.647) 0:00:11.605 ******* 2026-03-25 02:22:54.431174 | orchestrator | ok: [testbed-node-3] 2026-03-25 02:22:54.431181 | orchestrator | ok: [testbed-node-4] 2026-03-25 02:22:54.431188 | orchestrator | ok: [testbed-node-5] 2026-03-25 02:22:54.431194 | orchestrator | ok: [testbed-node-0] 2026-03-25 02:22:54.431201 | orchestrator | ok: [testbed-node-1] 2026-03-25 02:22:54.431207 | orchestrator | ok: [testbed-node-2] 2026-03-25 02:22:54.431212 | orchestrator | 2026-03-25 02:22:54.431218 | orchestrator | TASK [k3s_download : Download k3s binary x64] ********************************** 2026-03-25 02:22:54.431224 | orchestrator | Wednesday 25 March 2026 02:22:44 +0000 (0:00:01.078) 0:00:12.683 ******* 2026-03-25 02:22:54.431229 | orchestrator | changed: [testbed-node-5] 2026-03-25 02:22:54.431235 | orchestrator | changed: 
[testbed-node-4] 2026-03-25 02:22:54.431241 | orchestrator | changed: [testbed-node-2] 2026-03-25 02:22:54.431246 | orchestrator | changed: [testbed-node-3] 2026-03-25 02:22:54.431252 | orchestrator | changed: [testbed-node-1] 2026-03-25 02:22:54.431258 | orchestrator | changed: [testbed-node-0] 2026-03-25 02:22:54.431263 | orchestrator | 2026-03-25 02:22:54.431269 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ******************************** 2026-03-25 02:22:54.431275 | orchestrator | Wednesday 25 March 2026 02:22:50 +0000 (0:00:05.876) 0:00:18.560 ******* 2026-03-25 02:22:54.431280 | orchestrator | skipping: [testbed-node-3] 2026-03-25 02:22:54.431290 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:22:54.431296 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:22:54.431302 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:22:54.431308 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:22:54.431313 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:22:54.431319 | orchestrator | 2026-03-25 02:22:54.431324 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ******************************** 2026-03-25 02:22:54.431330 | orchestrator | Wednesday 25 March 2026 02:22:51 +0000 (0:00:01.204) 0:00:19.765 ******* 2026-03-25 02:22:54.431336 | orchestrator | skipping: [testbed-node-3] 2026-03-25 02:22:54.431341 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:22:54.431347 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:22:54.431353 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:22:54.431358 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:22:54.431364 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:22:54.431369 | orchestrator | 2026-03-25 02:22:54.431375 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] *** 2026-03-25 02:22:54.431382 | orchestrator | Wednesday 25 
March 2026 02:22:52 +0000 (0:00:01.465) 0:00:21.231 ******* 2026-03-25 02:22:54.431388 | orchestrator | skipping: [testbed-node-3] 2026-03-25 02:22:54.431394 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:22:54.431399 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:22:54.431405 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:22:54.431410 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:22:54.431416 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:22:54.431421 | orchestrator | 2026-03-25 02:22:54.431427 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] *************** 2026-03-25 02:22:54.431433 | orchestrator | Wednesday 25 March 2026 02:22:53 +0000 (0:00:00.723) 0:00:21.955 ******* 2026-03-25 02:22:54.431438 | orchestrator | skipping: [testbed-node-3] => (item=rancher)  2026-03-25 02:22:54.431448 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)  2026-03-25 02:22:54.431454 | orchestrator | skipping: [testbed-node-3] 2026-03-25 02:22:54.431460 | orchestrator | skipping: [testbed-node-4] => (item=rancher)  2026-03-25 02:22:54.431559 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)  2026-03-25 02:22:54.431565 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:22:54.431571 | orchestrator | skipping: [testbed-node-5] => (item=rancher)  2026-03-25 02:22:54.431577 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)  2026-03-25 02:22:54.431582 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:22:54.431588 | orchestrator | skipping: [testbed-node-0] => (item=rancher)  2026-03-25 02:22:54.431593 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)  2026-03-25 02:22:54.431599 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:22:54.431605 | orchestrator | skipping: [testbed-node-1] => (item=rancher)  2026-03-25 02:22:54.431611 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)  2026-03-25 
02:22:54.431616 | orchestrator | skipping: [testbed-node-1]
2026-03-25 02:22:54.431622 | orchestrator | skipping: [testbed-node-2] => (item=rancher)
2026-03-25 02:22:54.431627 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)
2026-03-25 02:22:54.431633 | orchestrator | skipping: [testbed-node-2]
2026-03-25 02:22:54.431639 | orchestrator |
2026-03-25 02:22:54.431645 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] ***
2026-03-25 02:22:54.431655 | orchestrator | Wednesday 25 March 2026 02:22:54 +0000 (0:00:00.958) 0:00:22.913 *******
2026-03-25 02:24:11.518921 | orchestrator | skipping: [testbed-node-3]
2026-03-25 02:24:11.519145 | orchestrator | skipping: [testbed-node-4]
2026-03-25 02:24:11.519175 | orchestrator | skipping: [testbed-node-5]
2026-03-25 02:24:11.519192 | orchestrator | skipping: [testbed-node-0]
2026-03-25 02:24:11.519210 | orchestrator | skipping: [testbed-node-1]
2026-03-25 02:24:11.519229 | orchestrator | skipping: [testbed-node-2]
2026-03-25 02:24:11.519246 | orchestrator |
2026-03-25 02:24:11.519265 | orchestrator | TASK [k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured] ***
2026-03-25 02:24:11.519285 | orchestrator | Wednesday 25 March 2026 02:22:55 +0000 (0:00:00.637) 0:00:23.551 *******
2026-03-25 02:24:11.519303 | orchestrator | skipping: [testbed-node-3]
2026-03-25 02:24:11.519321 | orchestrator | skipping: [testbed-node-4]
2026-03-25 02:24:11.519340 | orchestrator | skipping: [testbed-node-5]
2026-03-25 02:24:11.519356 | orchestrator | skipping: [testbed-node-0]
2026-03-25 02:24:11.519374 | orchestrator | skipping: [testbed-node-1]
2026-03-25 02:24:11.519391 | orchestrator | skipping: [testbed-node-2]
2026-03-25 02:24:11.519408 | orchestrator |
2026-03-25 02:24:11.519425 | orchestrator | PLAY [Deploy k3s master nodes] *************************************************
2026-03-25 02:24:11.519443 | orchestrator |
2026-03-25 02:24:11.519462 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] ***
2026-03-25 02:24:11.519481 | orchestrator | Wednesday 25 March 2026 02:22:56 +0000 (0:00:01.375) 0:00:24.926 *******
2026-03-25 02:24:11.519524 | orchestrator | ok: [testbed-node-0]
2026-03-25 02:24:11.519543 | orchestrator | ok: [testbed-node-1]
2026-03-25 02:24:11.519561 | orchestrator | ok: [testbed-node-2]
2026-03-25 02:24:11.519579 | orchestrator |
2026-03-25 02:24:11.519597 | orchestrator | TASK [k3s_server : Stop k3s-init] **********************************************
2026-03-25 02:24:11.519615 | orchestrator | Wednesday 25 March 2026 02:22:59 +0000 (0:00:02.764) 0:00:27.691 *******
2026-03-25 02:24:11.519633 | orchestrator | ok: [testbed-node-0]
2026-03-25 02:24:11.519650 | orchestrator | ok: [testbed-node-2]
2026-03-25 02:24:11.519669 | orchestrator | ok: [testbed-node-1]
2026-03-25 02:24:11.519688 | orchestrator |
2026-03-25 02:24:11.519706 | orchestrator | TASK [k3s_server : Stop k3s] ***************************************************
2026-03-25 02:24:11.519724 | orchestrator | Wednesday 25 March 2026 02:23:00 +0000 (0:00:01.250) 0:00:28.941 *******
2026-03-25 02:24:11.519742 | orchestrator | ok: [testbed-node-0]
2026-03-25 02:24:11.519760 | orchestrator | ok: [testbed-node-1]
2026-03-25 02:24:11.519777 | orchestrator | ok: [testbed-node-2]
2026-03-25 02:24:11.519796 | orchestrator |
2026-03-25 02:24:11.519815 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] ****************************
2026-03-25 02:24:11.519833 | orchestrator | Wednesday 25 March 2026 02:23:01 +0000 (0:00:01.054) 0:00:29.996 *******
2026-03-25 02:24:11.519881 | orchestrator | ok: [testbed-node-1]
2026-03-25 02:24:11.519899 | orchestrator | ok: [testbed-node-0]
2026-03-25 02:24:11.519916 | orchestrator | ok: [testbed-node-2]
2026-03-25 02:24:11.519934 | orchestrator |
2026-03-25 02:24:11.519946 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] *********************************
2026-03-25 02:24:11.519956 | orchestrator | Wednesday 25 March 2026 02:23:02 +0000 (0:00:00.769) 0:00:30.765 *******
2026-03-25 02:24:11.519965 | orchestrator | skipping: [testbed-node-0]
2026-03-25 02:24:11.519975 | orchestrator | skipping: [testbed-node-1]
2026-03-25 02:24:11.519984 | orchestrator | skipping: [testbed-node-2]
2026-03-25 02:24:11.519993 | orchestrator |
2026-03-25 02:24:11.520003 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] **************************
2026-03-25 02:24:11.520032 | orchestrator | Wednesday 25 March 2026 02:23:02 +0000 (0:00:00.446) 0:00:31.212 *******
2026-03-25 02:24:11.520043 | orchestrator | changed: [testbed-node-0]
2026-03-25 02:24:11.520052 | orchestrator | changed: [testbed-node-1]
2026-03-25 02:24:11.520061 | orchestrator | changed: [testbed-node-2]
2026-03-25 02:24:11.520070 | orchestrator |
2026-03-25 02:24:11.520080 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] **************************
2026-03-25 02:24:11.520089 | orchestrator | Wednesday 25 March 2026 02:23:03 +0000 (0:00:01.274) 0:00:32.486 *******
2026-03-25 02:24:11.520099 | orchestrator | changed: [testbed-node-2]
2026-03-25 02:24:11.520108 | orchestrator | changed: [testbed-node-1]
2026-03-25 02:24:11.520117 | orchestrator | changed: [testbed-node-0]
2026-03-25 02:24:11.520126 | orchestrator |
2026-03-25 02:24:11.520136 | orchestrator | TASK [k3s_server : Deploy vip manifest] ****************************************
2026-03-25 02:24:11.520145 | orchestrator | Wednesday 25 March 2026 02:23:05 +0000 (0:00:01.784) 0:00:34.271 *******
2026-03-25 02:24:11.520154 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-25 02:24:11.520164 | orchestrator |
2026-03-25 02:24:11.520173 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] *******************************
2026-03-25 02:24:11.520183 | orchestrator | Wednesday 25 March 2026 02:23:06 +0000 (0:00:00.574) 0:00:34.845 *******
2026-03-25 02:24:11.520192 | orchestrator | ok: [testbed-node-0]
2026-03-25 02:24:11.520201 | orchestrator | ok: [testbed-node-1]
2026-03-25 02:24:11.520211 | orchestrator | ok: [testbed-node-2]
2026-03-25 02:24:11.520220 | orchestrator |
2026-03-25 02:24:11.520229 | orchestrator | TASK [k3s_server : Create manifests directory on first master] *****************
2026-03-25 02:24:11.520239 | orchestrator | Wednesday 25 March 2026 02:23:08 +0000 (0:00:02.274) 0:00:37.120 *******
2026-03-25 02:24:11.520248 | orchestrator | skipping: [testbed-node-1]
2026-03-25 02:24:11.520257 | orchestrator | skipping: [testbed-node-2]
2026-03-25 02:24:11.520266 | orchestrator | changed: [testbed-node-0]
2026-03-25 02:24:11.520276 | orchestrator |
2026-03-25 02:24:11.520285 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] *****************
2026-03-25 02:24:11.520295 | orchestrator | Wednesday 25 March 2026 02:23:09 +0000 (0:00:00.571) 0:00:37.692 *******
2026-03-25 02:24:11.520304 | orchestrator | skipping: [testbed-node-1]
2026-03-25 02:24:11.520314 | orchestrator | skipping: [testbed-node-2]
2026-03-25 02:24:11.520323 | orchestrator | changed: [testbed-node-0]
2026-03-25 02:24:11.520332 | orchestrator |
2026-03-25 02:24:11.520341 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] **************************
2026-03-25 02:24:11.520351 | orchestrator | Wednesday 25 March 2026 02:23:10 +0000 (0:00:00.907) 0:00:38.599 *******
2026-03-25 02:24:11.520360 | orchestrator | skipping: [testbed-node-1]
2026-03-25 02:24:11.520369 | orchestrator | skipping: [testbed-node-2]
2026-03-25 02:24:11.520379 | orchestrator | changed: [testbed-node-0]
2026-03-25 02:24:11.520388 | orchestrator |
2026-03-25 02:24:11.520398 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************
2026-03-25 02:24:11.520429 | orchestrator | Wednesday 25 March 2026 02:23:11 +0000 (0:00:01.270) 0:00:39.870 *******
2026-03-25 02:24:11.520439 | orchestrator | skipping: [testbed-node-0]
2026-03-25 02:24:11.520459 | orchestrator | skipping: [testbed-node-1]
2026-03-25 02:24:11.520469 | orchestrator | skipping: [testbed-node-2]
2026-03-25 02:24:11.520478 | orchestrator |
2026-03-25 02:24:11.520515 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] ***********************************
2026-03-25 02:24:11.520531 | orchestrator | Wednesday 25 March 2026 02:23:11 +0000 (0:00:00.590) 0:00:40.461 *******
2026-03-25 02:24:11.520548 | orchestrator | skipping: [testbed-node-0]
2026-03-25 02:24:11.520565 | orchestrator | skipping: [testbed-node-1]
2026-03-25 02:24:11.520582 | orchestrator | skipping: [testbed-node-2]
2026-03-25 02:24:11.520592 | orchestrator |
2026-03-25 02:24:11.520602 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] *********
2026-03-25 02:24:11.520611 | orchestrator | Wednesday 25 March 2026 02:23:12 +0000 (0:00:00.328) 0:00:40.790 *******
2026-03-25 02:24:11.520621 | orchestrator | changed: [testbed-node-0]
2026-03-25 02:24:11.520630 | orchestrator | changed: [testbed-node-1]
2026-03-25 02:24:11.520639 | orchestrator | changed: [testbed-node-2]
2026-03-25 02:24:11.520648 | orchestrator |
2026-03-25 02:24:11.520665 | orchestrator | TASK [k3s_server : Detect Kubernetes version for label compatibility] **********
2026-03-25 02:24:11.520675 | orchestrator | Wednesday 25 March 2026 02:23:13 +0000 (0:00:01.234) 0:00:42.025 *******
2026-03-25 02:24:11.520684 | orchestrator | ok: [testbed-node-1]
2026-03-25 02:24:11.520693 | orchestrator | ok: [testbed-node-2]
2026-03-25 02:24:11.520703 | orchestrator | ok: [testbed-node-0]
2026-03-25 02:24:11.520712 | orchestrator |
2026-03-25 02:24:11.520721 | orchestrator | TASK [k3s_server : Set node role label selector based on Kubernetes version] ***
2026-03-25 02:24:11.520731 | orchestrator | Wednesday 25 March 2026 02:23:16 +0000 (0:00:02.647) 0:00:44.673 *******
2026-03-25 02:24:11.520740 | orchestrator | ok: [testbed-node-0]
2026-03-25 02:24:11.520749 | orchestrator | ok: [testbed-node-1]
2026-03-25 02:24:11.520758 | orchestrator | ok: [testbed-node-2]
2026-03-25 02:24:11.520772 | orchestrator |
2026-03-25 02:24:11.520782 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] ***
2026-03-25 02:24:11.520792 | orchestrator | Wednesday 25 March 2026 02:23:16 +0000 (0:00:00.373) 0:00:45.046 *******
2026-03-25 02:24:11.520802 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-03-25 02:24:11.520814 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-03-25 02:24:11.520824 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-03-25 02:24:11.520834 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-03-25 02:24:11.520843 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-03-25 02:24:11.520852 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-03-25 02:24:11.520862 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2026-03-25 02:24:11.520877 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2026-03-25 02:24:11.520892 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2026-03-25 02:24:11.520907 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2026-03-25 02:24:11.520922 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2026-03-25 02:24:11.520949 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2026-03-25 02:24:11.520961 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left).
2026-03-25 02:24:11.520971 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left).
2026-03-25 02:24:11.520980 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left).
2026-03-25 02:24:11.520990 | orchestrator | ok: [testbed-node-0]
2026-03-25 02:24:11.520999 | orchestrator | ok: [testbed-node-1]
2026-03-25 02:24:11.521009 | orchestrator | ok: [testbed-node-2]
2026-03-25 02:24:11.521018 | orchestrator |
2026-03-25 02:24:11.521033 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ******************************
2026-03-25 02:24:11.521043 | orchestrator | Wednesday 25 March 2026 02:24:10 +0000 (0:00:53.676) 0:01:38.723 *******
2026-03-25 02:24:11.521052 | orchestrator | skipping: [testbed-node-0]
2026-03-25 02:24:11.521062 | orchestrator | skipping: [testbed-node-1]
2026-03-25 02:24:11.521071 | orchestrator | skipping: [testbed-node-2]
2026-03-25 02:24:11.521080 | orchestrator |
2026-03-25 02:24:11.521090 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] *********
2026-03-25 02:24:11.521099 | orchestrator | Wednesday 25 March 2026 02:24:10 +0000 (0:00:00.354) 0:01:39.078 *******
2026-03-25 02:24:11.521117 | orchestrator | changed: [testbed-node-0]
2026-03-25 02:24:52.685820 | orchestrator | changed: [testbed-node-1]
2026-03-25 02:24:52.685910 | orchestrator | changed: [testbed-node-2]
2026-03-25 02:24:52.685917 | orchestrator |
2026-03-25 02:24:52.685922 | orchestrator | TASK [k3s_server : Copy K3s service file] **************************************
2026-03-25 02:24:52.685928 | orchestrator | Wednesday 25 March 2026 02:24:11 +0000 (0:00:00.935) 0:01:40.013 *******
2026-03-25 02:24:52.685932 | orchestrator | changed: [testbed-node-0]
2026-03-25 02:24:52.685936 | orchestrator | changed: [testbed-node-1]
2026-03-25 02:24:52.685940 | orchestrator | changed: [testbed-node-2]
2026-03-25 02:24:52.685944 | orchestrator |
2026-03-25 02:24:52.685948 | orchestrator | TASK [k3s_server : Enable and check K3s service] *******************************
2026-03-25 02:24:52.685953 | orchestrator | Wednesday 25 March 2026 02:24:12 +0000 (0:00:01.163) 0:01:41.176 *******
2026-03-25 02:24:52.685957 | orchestrator | changed: [testbed-node-2]
2026-03-25 02:24:52.685961 | orchestrator | changed: [testbed-node-0]
2026-03-25 02:24:52.685965 | orchestrator | changed: [testbed-node-1]
2026-03-25 02:24:52.685968 | orchestrator |
2026-03-25 02:24:52.685972 | orchestrator | TASK [k3s_server : Wait for node-token] ****************************************
2026-03-25 02:24:52.685976 | orchestrator | Wednesday 25 March 2026 02:24:37 +0000 (0:00:25.023) 0:02:06.200 *******
2026-03-25 02:24:52.685980 | orchestrator | ok: [testbed-node-1]
2026-03-25 02:24:52.685985 | orchestrator | ok: [testbed-node-0]
2026-03-25 02:24:52.685989 | orchestrator | ok: [testbed-node-2]
2026-03-25 02:24:52.685992 | orchestrator |
2026-03-25 02:24:52.685996 | orchestrator | TASK [k3s_server : Register node-token file access mode] ***********************
2026-03-25 02:24:52.686000 | orchestrator | Wednesday 25 March 2026 02:24:38 +0000 (0:00:00.636) 0:02:06.836 *******
2026-03-25 02:24:52.686010 | orchestrator | ok: [testbed-node-0]
2026-03-25 02:24:52.686038 | orchestrator | ok: [testbed-node-1]
2026-03-25 02:24:52.686043 | orchestrator | ok: [testbed-node-2]
2026-03-25 02:24:52.686047 | orchestrator |
2026-03-25 02:24:52.686051 | orchestrator | TASK [k3s_server : Change file access node-token] ******************************
2026-03-25 02:24:52.686054 | orchestrator | Wednesday 25 March 2026 02:24:38 +0000 (0:00:00.623) 0:02:07.459 *******
2026-03-25 02:24:52.686058 | orchestrator | changed: [testbed-node-0]
2026-03-25 02:24:52.686062 | orchestrator | changed: [testbed-node-1]
2026-03-25 02:24:52.686066 | orchestrator | changed: [testbed-node-2]
2026-03-25 02:24:52.686069 | orchestrator |
2026-03-25 02:24:52.686073 | orchestrator | TASK [k3s_server : Read node-token from master] ********************************
2026-03-25 02:24:52.686097 | orchestrator | Wednesday 25 March 2026 02:24:39 +0000 (0:00:00.625) 0:02:08.085 *******
2026-03-25 02:24:52.686102 | orchestrator | ok: [testbed-node-1]
2026-03-25 02:24:52.686105 | orchestrator | ok: [testbed-node-0]
2026-03-25 02:24:52.686109 | orchestrator | ok: [testbed-node-2]
2026-03-25 02:24:52.686113 | orchestrator |
2026-03-25 02:24:52.686117 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************
2026-03-25 02:24:52.686120 | orchestrator | Wednesday 25 March 2026 02:24:40 +0000 (0:00:00.880) 0:02:08.966 *******
2026-03-25 02:24:52.686124 | orchestrator | ok: [testbed-node-0]
2026-03-25 02:24:52.686128 | orchestrator | ok: [testbed-node-1]
2026-03-25 02:24:52.686132 | orchestrator | ok: [testbed-node-2]
2026-03-25 02:24:52.686135 | orchestrator |
2026-03-25 02:24:52.686139 | orchestrator | TASK [k3s_server : Restore node-token file access] *****************************
2026-03-25 02:24:52.686148 | orchestrator | Wednesday 25 March 2026 02:24:40 +0000 (0:00:00.346) 0:02:09.312 *******
2026-03-25 02:24:52.686152 | orchestrator | changed: [testbed-node-0]
2026-03-25 02:24:52.686155 | orchestrator | changed: [testbed-node-1]
2026-03-25 02:24:52.686159 | orchestrator | changed: [testbed-node-2]
2026-03-25 02:24:52.686163 | orchestrator |
2026-03-25 02:24:52.686167 | orchestrator | TASK [k3s_server : Create directory .kube] *************************************
2026-03-25 02:24:52.686170 | orchestrator | Wednesday 25 March 2026 02:24:41 +0000 (0:00:00.628) 0:02:09.941 *******
2026-03-25 02:24:52.686174 | orchestrator | changed: [testbed-node-0]
2026-03-25 02:24:52.686178 | orchestrator | changed: [testbed-node-1]
2026-03-25 02:24:52.686182 | orchestrator | changed: [testbed-node-2]
2026-03-25 02:24:52.686186 | orchestrator |
2026-03-25 02:24:52.686189 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ********************
2026-03-25 02:24:52.686193 | orchestrator | Wednesday 25 March 2026 02:24:42 +0000 (0:00:00.644) 0:02:10.585 *******
2026-03-25 02:24:52.686197 | orchestrator | changed: [testbed-node-0]
2026-03-25 02:24:52.686201 | orchestrator | changed: [testbed-node-1]
2026-03-25 02:24:52.686204 | orchestrator | changed: [testbed-node-2]
2026-03-25 02:24:52.686208 | orchestrator |
2026-03-25 02:24:52.686213 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] *****
2026-03-25 02:24:52.686216 | orchestrator | Wednesday 25 March 2026 02:24:42 +0000 (0:00:00.880) 0:02:11.466 *******
2026-03-25 02:24:52.686222 | orchestrator | changed: [testbed-node-0]
2026-03-25 02:24:52.686226 | orchestrator | changed: [testbed-node-1]
2026-03-25 02:24:52.686230 | orchestrator | changed: [testbed-node-2]
2026-03-25 02:24:52.686234 | orchestrator |
2026-03-25 02:24:52.686238 | orchestrator | TASK [k3s_server : Create kubectl symlink] *************************************
2026-03-25 02:24:52.686241 | orchestrator | Wednesday 25 March 2026 02:24:44 +0000 (0:00:01.143) 0:02:12.610 *******
2026-03-25 02:24:52.686245 | orchestrator | skipping: [testbed-node-0]
2026-03-25 02:24:52.686249 | orchestrator | skipping: [testbed-node-1]
2026-03-25 02:24:52.686253 | orchestrator | skipping: [testbed-node-2]
2026-03-25 02:24:52.686256 | orchestrator |
2026-03-25 02:24:52.686260 | orchestrator | TASK [k3s_server : Create crictl symlink] **************************************
2026-03-25 02:24:52.686264 | orchestrator | Wednesday 25 March 2026 02:24:44 +0000 (0:00:00.319) 0:02:12.929 *******
2026-03-25 02:24:52.686268 | orchestrator | skipping: [testbed-node-0]
2026-03-25 02:24:52.686271 | orchestrator | skipping: [testbed-node-1]
2026-03-25 02:24:52.686275 | orchestrator | skipping: [testbed-node-2]
2026-03-25 02:24:52.686279 | orchestrator |
2026-03-25 02:24:52.686283 | orchestrator | TASK [k3s_server : Get contents of manifests folder] ***************************
2026-03-25 02:24:52.686286 | orchestrator | Wednesday 25 March 2026 02:24:44 +0000 (0:00:00.319) 0:02:13.248 *******
2026-03-25 02:24:52.686290 | orchestrator | ok: [testbed-node-0]
2026-03-25 02:24:52.686294 | orchestrator | ok: [testbed-node-1]
2026-03-25 02:24:52.686298 | orchestrator | ok: [testbed-node-2]
2026-03-25 02:24:52.686302 | orchestrator |
2026-03-25 02:24:52.686305 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] ***************************
2026-03-25 02:24:52.686309 | orchestrator | Wednesday 25 March 2026 02:24:45 +0000 (0:00:00.687) 0:02:13.936 *******
2026-03-25 02:24:52.686317 | orchestrator | ok: [testbed-node-0]
2026-03-25 02:24:52.686321 | orchestrator | ok: [testbed-node-1]
2026-03-25 02:24:52.686337 | orchestrator | ok: [testbed-node-2]
2026-03-25 02:24:52.686341 | orchestrator |
2026-03-25 02:24:52.686345 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] ***
2026-03-25 02:24:52.686350 | orchestrator | Wednesday 25 March 2026 02:24:46 +0000 (0:00:00.923) 0:02:14.859 *******
2026-03-25 02:24:52.686354 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-03-25 02:24:52.686358 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-03-25 02:24:52.686362 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-03-25 02:24:52.686366 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-03-25 02:24:52.686370 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-03-25 02:24:52.686373 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-03-25 02:24:52.686377 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-03-25 02:24:52.686382 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-03-25 02:24:52.686385 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-03-25 02:24:52.686389 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml)
2026-03-25 02:24:52.686393 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-03-25 02:24:52.686397 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-03-25 02:24:52.686401 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml)
2026-03-25 02:24:52.686404 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-03-25 02:24:52.686408 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-03-25 02:24:52.686412 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-03-25 02:24:52.686416 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-03-25 02:24:52.686419 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-03-25 02:24:52.686423 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-03-25 02:24:52.686427 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-03-25 02:24:52.686431 | orchestrator |
2026-03-25 02:24:52.686434 | orchestrator | PLAY [Deploy k3s worker nodes] *************************************************
2026-03-25 02:24:52.686438 | orchestrator |
2026-03-25 02:24:52.686442 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] ***
2026-03-25 02:24:52.686446 | orchestrator | Wednesday 25 March 2026 02:24:49 +0000 (0:00:02.867) 0:02:17.726 *******
2026-03-25 02:24:52.686450 | orchestrator | ok: [testbed-node-3]
2026-03-25 02:24:52.686454 | orchestrator | ok: [testbed-node-4]
2026-03-25 02:24:52.686457 | orchestrator | ok: [testbed-node-5]
2026-03-25 02:24:52.686461 | orchestrator |
2026-03-25 02:24:52.686473 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] *******************************
2026-03-25 02:24:52.686477 | orchestrator | Wednesday 25 March 2026 02:24:49 +0000 (0:00:00.347) 0:02:18.074 *******
2026-03-25 02:24:52.686480 | orchestrator | ok: [testbed-node-3]
2026-03-25 02:24:52.686484 | orchestrator | ok: [testbed-node-4]
2026-03-25 02:24:52.686488 | orchestrator | ok: [testbed-node-5]
2026-03-25 02:24:52.686535 | orchestrator |
2026-03-25 02:24:52.686543 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ******************************
2026-03-25 02:24:52.686550 | orchestrator | Wednesday 25 March 2026 02:24:50 +0000 (0:00:00.907) 0:02:18.981 *******
2026-03-25 02:24:52.686556 | orchestrator | ok: [testbed-node-3]
2026-03-25 02:24:52.686563 | orchestrator | ok: [testbed-node-4]
2026-03-25 02:24:52.686569 | orchestrator | ok: [testbed-node-5]
2026-03-25 02:24:52.686572 | orchestrator |
2026-03-25 02:24:52.686576 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] **********************
2026-03-25 02:24:52.686580 | orchestrator | Wednesday 25 March 2026 02:24:50 +0000 (0:00:00.350) 0:02:19.331 *******
2026-03-25 02:24:52.686583 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-25 02:24:52.686587 | orchestrator |
2026-03-25 02:24:52.686591 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] *************************
2026-03-25 02:24:52.686595 | orchestrator | Wednesday 25 March 2026 02:24:51 +0000 (0:00:00.726) 0:02:20.058 *******
2026-03-25 02:24:52.686599 | orchestrator | skipping: [testbed-node-3]
2026-03-25 02:24:52.686602 | orchestrator | skipping: [testbed-node-4]
2026-03-25 02:24:52.686606 | orchestrator | skipping: [testbed-node-5]
2026-03-25 02:24:52.686610 | orchestrator |
2026-03-25 02:24:52.686613 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] *******************************
2026-03-25 02:24:52.686617 | orchestrator | Wednesday 25 March 2026 02:24:52 +0000 (0:00:00.591) 0:02:20.649 *******
2026-03-25 02:24:52.686621 | orchestrator | skipping: [testbed-node-3]
2026-03-25 02:24:52.686624 | orchestrator | skipping: [testbed-node-4]
2026-03-25 02:24:52.686628 | orchestrator | skipping: [testbed-node-5]
2026-03-25 02:24:52.686632 | orchestrator |
2026-03-25 02:24:52.686635 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] **********************************
2026-03-25 02:24:52.686639 | orchestrator | Wednesday 25 March 2026 02:24:52 +0000 (0:00:00.335) 0:02:20.985 *******
2026-03-25 02:24:52.686646 | orchestrator | skipping: [testbed-node-3]
2026-03-25 02:26:34.645729 | orchestrator | skipping: [testbed-node-4]
2026-03-25 02:26:34.645837 | orchestrator | skipping: [testbed-node-5]
2026-03-25 02:26:34.645852 | orchestrator |
2026-03-25 02:26:34.645863 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] ***************************
2026-03-25 02:26:34.645875 | orchestrator | Wednesday 25 March 2026 02:24:52 +0000 (0:00:00.338) 0:02:21.324 *******
2026-03-25 02:26:34.645884 | orchestrator | changed: [testbed-node-3]
2026-03-25 02:26:34.645894 | orchestrator | changed: [testbed-node-4]
2026-03-25 02:26:34.645903 | orchestrator | changed: [testbed-node-5]
2026-03-25 02:26:34.645912 | orchestrator |
2026-03-25 02:26:34.645922 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] ***************************
2026-03-25 02:26:34.645932 | orchestrator | Wednesday 25 March 2026 02:24:53 +0000 (0:00:00.655) 0:02:21.979 *******
2026-03-25 02:26:34.645941 | orchestrator | changed: [testbed-node-3]
2026-03-25 02:26:34.645950 | orchestrator | changed: [testbed-node-4]
2026-03-25 02:26:34.645959 | orchestrator | changed: [testbed-node-5]
2026-03-25 02:26:34.645968 | orchestrator |
2026-03-25 02:26:34.645977 | orchestrator | TASK [k3s_agent : Configure the k3s service] ***********************************
2026-03-25 02:26:34.645987 | orchestrator | Wednesday 25 March 2026 02:24:54 +0000 (0:00:01.442) 0:02:23.422 *******
2026-03-25 02:26:34.645997 | orchestrator | changed: [testbed-node-3]
2026-03-25 02:26:34.646006 | orchestrator | changed: [testbed-node-4]
2026-03-25 02:26:34.646079 | orchestrator | changed: [testbed-node-5]
2026-03-25 02:26:34.646090 | orchestrator |
2026-03-25 02:26:34.646102 | orchestrator | TASK [k3s_agent : Manage k3s service] ******************************************
2026-03-25 02:26:34.646112 | orchestrator | Wednesday 25 March 2026 02:24:56 +0000 (0:00:01.253) 0:02:24.676 *******
2026-03-25 02:26:34.646121 | orchestrator | changed: [testbed-node-3]
2026-03-25 02:26:34.646139 | orchestrator | changed: [testbed-node-4]
2026-03-25 02:26:34.646149 | orchestrator | changed: [testbed-node-5]
2026-03-25 02:26:34.646160 | orchestrator |
2026-03-25 02:26:34.646171 | orchestrator | PLAY [Prepare kubeconfig file] *************************************************
2026-03-25 02:26:34.646221 | orchestrator |
2026-03-25 02:26:34.646233 | orchestrator | TASK [Get home directory of operator user] *************************************
2026-03-25 02:26:34.646243 | orchestrator | Wednesday 25 March 2026 02:25:05 +0000 (0:00:09.778) 0:02:34.454 *******
2026-03-25 02:26:34.646253 | orchestrator | ok: [testbed-manager]
2026-03-25 02:26:34.646263 | orchestrator |
2026-03-25 02:26:34.646273 | orchestrator | TASK [Create .kube directory] **************************************************
2026-03-25 02:26:34.646282 | orchestrator | Wednesday 25 March 2026 02:25:07 +0000 (0:00:01.076) 0:02:35.531 *******
2026-03-25 02:26:34.646291 | orchestrator | changed: [testbed-manager]
2026-03-25 02:26:34.646301 | orchestrator |
2026-03-25 02:26:34.646310 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2026-03-25 02:26:34.646319 | orchestrator | Wednesday 25 March 2026 02:25:07 +0000 (0:00:00.519) 0:02:36.051 *******
2026-03-25 02:26:34.646330 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2026-03-25 02:26:34.646340 | orchestrator |
2026-03-25 02:26:34.646349 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2026-03-25 02:26:34.646359 | orchestrator | Wednesday 25 March 2026 02:25:08 +0000 (0:00:00.522) 0:02:36.573 *******
2026-03-25 02:26:34.646368 | orchestrator | changed: [testbed-manager]
2026-03-25 02:26:34.646396 | orchestrator |
2026-03-25 02:26:34.646406 | orchestrator | TASK [Change server address in the kubeconfig] *********************************
2026-03-25 02:26:34.646415 | orchestrator | Wednesday 25 March 2026 02:25:09 +0000 (0:00:00.972) 0:02:37.546 *******
2026-03-25 02:26:34.646424 | orchestrator | changed: [testbed-manager]
2026-03-25 02:26:34.646434 | orchestrator |
2026-03-25 02:26:34.646444 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************
2026-03-25 02:26:34.646453 | orchestrator | Wednesday 25 March 2026 02:25:09 +0000 (0:00:00.704) 0:02:38.251 *******
2026-03-25 02:26:34.646462 | orchestrator | changed: [testbed-manager -> localhost]
2026-03-25 02:26:34.646472 | orchestrator |
2026-03-25 02:26:34.646502 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ******
2026-03-25 02:26:34.646511 | orchestrator | Wednesday 25 March 2026 02:25:11 +0000 (0:00:01.810) 0:02:40.061 *******
2026-03-25 02:26:34.646520 | orchestrator | changed: [testbed-manager -> localhost]
2026-03-25 02:26:34.646528 | orchestrator |
2026-03-25 02:26:34.646554 | orchestrator | TASK [Set KUBECONFIG environment variable] *************
2026-03-25 02:26:34.646568 | orchestrator | Wednesday 25 March 2026 02:25:12 +0000 (0:00:00.895) 0:02:40.956 *******
2026-03-25 02:26:34.646579 | orchestrator | changed: [testbed-manager]
2026-03-25 02:26:34.646588 | orchestrator |
2026-03-25 02:26:34.646597 | orchestrator | TASK [Enable kubectl command line completion] **********************************
2026-03-25 02:26:34.646606 | orchestrator | Wednesday 25 March 2026 02:25:12 +0000 (0:00:00.469) 0:02:41.425 *******
2026-03-25 02:26:34.646616 | orchestrator | changed: [testbed-manager]
2026-03-25 02:26:34.646625 | orchestrator |
2026-03-25 02:26:34.646635 | orchestrator | PLAY [Apply role kubectl] ******************************************************
2026-03-25 02:26:34.646644 | orchestrator |
2026-03-25 02:26:34.646653 | orchestrator | TASK [kubectl : Gather variables for each operating system] ********************
2026-03-25 02:26:34.646663 | orchestrator | Wednesday 25 March 2026 02:25:13 +0000 (0:00:00.545) 0:02:41.971 *******
2026-03-25 02:26:34.646672 | orchestrator | ok: [testbed-manager]
2026-03-25 02:26:34.646681 | orchestrator |
2026-03-25 02:26:34.646691 | orchestrator | TASK [kubectl : Include distribution specific install tasks] *******************
2026-03-25 02:26:34.646701 | orchestrator | Wednesday 25 March 2026 02:25:13 +0000 (0:00:00.397) 0:02:42.369 *******
2026-03-25 02:26:34.646710 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager
2026-03-25 02:26:34.646721 | orchestrator |
2026-03-25 02:26:34.646730 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ******************
2026-03-25 02:26:34.646739 | orchestrator | Wednesday 25 March 2026 02:25:14 +0000 (0:00:00.288) 0:02:42.658 *******
2026-03-25 02:26:34.646748 | orchestrator | ok: [testbed-manager]
2026-03-25 02:26:34.646758 | orchestrator |
2026-03-25 02:26:34.646777 | orchestrator | TASK [kubectl : Install apt-transport-https package] ***************************
2026-03-25 02:26:34.646786 | orchestrator | Wednesday 25 March 2026 02:25:15 +0000 (0:00:01.053) 0:02:43.711 *******
2026-03-25 02:26:34.646795 | orchestrator | ok: [testbed-manager]
2026-03-25 02:26:34.646804 | orchestrator |
2026-03-25 02:26:34.646834 | orchestrator | TASK [kubectl : Add repository gpg key] ****************************************
2026-03-25 02:26:34.646843 | orchestrator | Wednesday 25 March 2026 02:25:17 +0000 (0:00:01.885) 0:02:45.597 *******
2026-03-25 02:26:34.646853 | orchestrator | changed: [testbed-manager]
2026-03-25 02:26:34.646863 | orchestrator |
2026-03-25 02:26:34.646872 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************
2026-03-25 02:26:34.646882 | orchestrator | Wednesday 25 March 2026 02:25:17 +0000 (0:00:00.838) 0:02:46.435 *******
2026-03-25 02:26:34.646892 | orchestrator | ok: [testbed-manager]
2026-03-25 02:26:34.646901 | orchestrator |
2026-03-25 02:26:34.646909 | orchestrator | TASK [kubectl : Add repository Debian] *****************************************
2026-03-25 02:26:34.646919 | orchestrator | Wednesday 25 March 2026 02:25:18 +0000 (0:00:00.501) 0:02:46.937 *******
2026-03-25 02:26:34.646928 | orchestrator | changed: [testbed-manager]
2026-03-25 02:26:34.646938 | orchestrator |
2026-03-25 02:26:34.646947 | orchestrator | TASK [kubectl : Install required packages] *************************************
2026-03-25 02:26:34.646956 | orchestrator | Wednesday 25 March 2026 02:25:27 +0000 (0:00:08.986) 0:02:55.924 *******
2026-03-25 02:26:34.646966 | orchestrator | changed: [testbed-manager]
2026-03-25 02:26:34.646975 | orchestrator |
2026-03-25 02:26:34.646984 | orchestrator | TASK [kubectl : Remove kubectl symlink] ****************************************
2026-03-25 02:26:34.646994 | orchestrator | Wednesday 25 March 2026 02:25:40 +0000 (0:00:13.224) 0:03:09.149 *******
2026-03-25 02:26:34.647003 | orchestrator | ok: [testbed-manager]
2026-03-25 02:26:34.647013 | orchestrator |
2026-03-25 02:26:34.647023 | orchestrator | PLAY [Run post actions on master nodes] ****************************************
2026-03-25 02:26:34.647033 | orchestrator |
2026-03-25 02:26:34.647042 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] ***
2026-03-25 02:26:34.647052 | orchestrator | Wednesday 25 March 2026 02:25:41 +0000 (0:00:00.833) 0:03:09.982 *******
2026-03-25 02:26:34.647061 | orchestrator | ok: [testbed-node-0]
2026-03-25 02:26:34.647070 | orchestrator | ok: [testbed-node-1]
2026-03-25 02:26:34.647079 | orchestrator | ok: [testbed-node-2]
2026-03-25 02:26:34.647089 | orchestrator |
2026-03-25 02:26:34.647098 | orchestrator | TASK [k3s_server_post : Deploy calico] *****************************************
2026-03-25 02:26:34.647107 | orchestrator | Wednesday 25 March 2026 02:25:41 +0000 (0:00:00.363) 0:03:10.346 *******
2026-03-25 02:26:34.647116 | orchestrator | skipping: [testbed-node-0]
2026-03-25 02:26:34.647126 | orchestrator | skipping: [testbed-node-1]
2026-03-25 02:26:34.647136 | orchestrator | skipping: [testbed-node-2]
2026-03-25 02:26:34.647145 | orchestrator |
2026-03-25 02:26:34.647154 | orchestrator | TASK [k3s_server_post : Deploy cilium] *****************************************
2026-03-25 02:26:34.647163 | orchestrator | Wednesday 25 March 2026 02:25:42 +0000 (0:00:00.332) 0:03:10.678 *******
2026-03-25 02:26:34.647173 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-25 02:26:34.647182 | orchestrator |
2026-03-25 02:26:34.647192 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ******************
2026-03-25 02:26:34.647201 | orchestrator | Wednesday 25 March 2026 02:25:42 +0000 (0:00:00.781) 0:03:11.460 *******
2026-03-25 02:26:34.647210 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-03-25 02:26:34.647219 | orchestrator |
2026-03-25 02:26:34.647229 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] *********************
2026-03-25 02:26:34.647238 | orchestrator | Wednesday 25 March 2026 02:25:43 +0000 (0:00:00.931) 0:03:12.391 *******
2026-03-25 02:26:34.647248 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-25 02:26:34.647257 | orchestrator |
2026-03-25 02:26:34.647266 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************
2026-03-25 02:26:34.647284 | orchestrator | Wednesday 25 March 2026 02:25:44 +0000 (0:00:00.953) 0:03:13.345 *******
2026-03-25 02:26:34.647295 | orchestrator | skipping: [testbed-node-0]
2026-03-25 02:26:34.647305 | orchestrator |
2026-03-25 02:26:34.647314 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] **********************
2026-03-25 02:26:34.647322 | orchestrator | Wednesday 25 March 2026 02:25:44 +0000 (0:00:00.138) 0:03:13.483 *******
2026-03-25 02:26:34.647331 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-25 02:26:34.647340 | orchestrator |
2026-03-25 02:26:34.647349 | orchestrator | TASK [k3s_server_post : Check Cilium version] **********************************
2026-03-25 02:26:34.647358 | orchestrator | Wednesday 25 March 2026 02:25:46 +0000 (0:00:01.129) 0:03:14.612 *******
2026-03-25 02:26:34.647366 | orchestrator | skipping: [testbed-node-0]
2026-03-25 02:26:34.647376 | orchestrator |
2026-03-25 02:26:34.647385 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************
2026-03-25 02:26:34.647393 | orchestrator | Wednesday 25 March 2026 02:25:46 +0000 (0:00:00.119) 0:03:14.731 *******
2026-03-25 02:26:34.647402 | orchestrator | skipping: [testbed-node-0]
2026-03-25 02:26:34.647411 | orchestrator |
2026-03-25 02:26:34.647420 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] **********************
2026-03-25 02:26:34.647430 | orchestrator | Wednesday 25
March 2026 02:25:46 +0000 (0:00:00.137) 0:03:14.869 ******* 2026-03-25 02:26:34.647439 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:26:34.647448 | orchestrator | 2026-03-25 02:26:34.647457 | orchestrator | TASK [k3s_server_post : Log result] ******************************************** 2026-03-25 02:26:34.647472 | orchestrator | Wednesday 25 March 2026 02:25:46 +0000 (0:00:00.131) 0:03:15.000 ******* 2026-03-25 02:26:34.647510 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:26:34.647520 | orchestrator | 2026-03-25 02:26:34.647529 | orchestrator | TASK [k3s_server_post : Install Cilium] **************************************** 2026-03-25 02:26:34.647540 | orchestrator | Wednesday 25 March 2026 02:25:46 +0000 (0:00:00.145) 0:03:15.145 ******* 2026-03-25 02:26:34.647549 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-03-25 02:26:34.647559 | orchestrator | 2026-03-25 02:26:34.647569 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] ***************************** 2026-03-25 02:26:34.647579 | orchestrator | Wednesday 25 March 2026 02:25:51 +0000 (0:00:05.309) 0:03:20.455 ******* 2026-03-25 02:26:34.647590 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator) 2026-03-25 02:26:34.647601 | orchestrator | FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (30 retries left). 
2026-03-25 02:26:34.647621 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium) 2026-03-25 02:26:59.683261 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay) 2026-03-25 02:26:59.683379 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui) 2026-03-25 02:26:59.683395 | orchestrator | 2026-03-25 02:26:59.683408 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************ 2026-03-25 02:26:59.683419 | orchestrator | Wednesday 25 March 2026 02:26:34 +0000 (0:00:42.679) 0:04:03.134 ******* 2026-03-25 02:26:59.683431 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-25 02:26:59.683442 | orchestrator | 2026-03-25 02:26:59.683453 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ******************** 2026-03-25 02:26:59.683464 | orchestrator | Wednesday 25 March 2026 02:26:35 +0000 (0:00:01.344) 0:04:04.479 ******* 2026-03-25 02:26:59.683539 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-03-25 02:26:59.683562 | orchestrator | 2026-03-25 02:26:59.683580 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] *********************************** 2026-03-25 02:26:59.683598 | orchestrator | Wednesday 25 March 2026 02:26:37 +0000 (0:00:01.984) 0:04:06.463 ******* 2026-03-25 02:26:59.683609 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-03-25 02:26:59.683620 | orchestrator | 2026-03-25 02:26:59.683631 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] *** 2026-03-25 02:26:59.683643 | orchestrator | Wednesday 25 March 2026 02:26:39 +0000 (0:00:01.129) 0:04:07.593 ******* 2026-03-25 02:26:59.683682 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:26:59.683694 | orchestrator | 2026-03-25 02:26:59.683705 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] ************************* 2026-03-25 02:26:59.683719 | orchestrator 
| Wednesday 25 March 2026 02:26:39 +0000 (0:00:00.155) 0:04:07.748 ******* 2026-03-25 02:26:59.683737 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io) 2026-03-25 02:26:59.683767 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io) 2026-03-25 02:26:59.683788 | orchestrator | 2026-03-25 02:26:59.683805 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] *********************************** 2026-03-25 02:26:59.683823 | orchestrator | Wednesday 25 March 2026 02:26:41 +0000 (0:00:02.056) 0:04:09.805 ******* 2026-03-25 02:26:59.683840 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:26:59.683859 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:26:59.683875 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:26:59.683891 | orchestrator | 2026-03-25 02:26:59.683908 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] *************** 2026-03-25 02:26:59.683926 | orchestrator | Wednesday 25 March 2026 02:26:41 +0000 (0:00:00.352) 0:04:10.157 ******* 2026-03-25 02:26:59.683943 | orchestrator | ok: [testbed-node-0] 2026-03-25 02:26:59.683960 | orchestrator | ok: [testbed-node-1] 2026-03-25 02:26:59.683975 | orchestrator | ok: [testbed-node-2] 2026-03-25 02:26:59.683992 | orchestrator | 2026-03-25 02:26:59.684010 | orchestrator | PLAY [Apply role k9s] ********************************************************** 2026-03-25 02:26:59.684026 | orchestrator | 2026-03-25 02:26:59.684043 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************ 2026-03-25 02:26:59.684060 | orchestrator | Wednesday 25 March 2026 02:26:42 +0000 (0:00:00.838) 0:04:10.996 ******* 2026-03-25 02:26:59.684076 | orchestrator | ok: [testbed-manager] 2026-03-25 02:26:59.684093 | orchestrator | 2026-03-25 02:26:59.684111 | orchestrator | TASK [k9s : Include distribution specific install tasks] 
*********************** 2026-03-25 02:26:59.684130 | orchestrator | Wednesday 25 March 2026 02:26:42 +0000 (0:00:00.380) 0:04:11.376 ******* 2026-03-25 02:26:59.684148 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager 2026-03-25 02:26:59.684165 | orchestrator | 2026-03-25 02:26:59.684181 | orchestrator | TASK [k9s : Install k9s packages] ********************************************** 2026-03-25 02:26:59.684198 | orchestrator | Wednesday 25 March 2026 02:26:43 +0000 (0:00:00.277) 0:04:11.653 ******* 2026-03-25 02:26:59.684214 | orchestrator | changed: [testbed-manager] 2026-03-25 02:26:59.684232 | orchestrator | 2026-03-25 02:26:59.684249 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] ***************** 2026-03-25 02:26:59.684268 | orchestrator | 2026-03-25 02:26:59.684287 | orchestrator | TASK [Merge labels, annotations, and taints] *********************************** 2026-03-25 02:26:59.684307 | orchestrator | Wednesday 25 March 2026 02:26:48 +0000 (0:00:05.413) 0:04:17.067 ******* 2026-03-25 02:26:59.684326 | orchestrator | ok: [testbed-node-3] 2026-03-25 02:26:59.684344 | orchestrator | ok: [testbed-node-4] 2026-03-25 02:26:59.684361 | orchestrator | ok: [testbed-node-5] 2026-03-25 02:26:59.684378 | orchestrator | ok: [testbed-node-0] 2026-03-25 02:26:59.684397 | orchestrator | ok: [testbed-node-1] 2026-03-25 02:26:59.684415 | orchestrator | ok: [testbed-node-2] 2026-03-25 02:26:59.684433 | orchestrator | 2026-03-25 02:26:59.684452 | orchestrator | TASK [Manage labels] *********************************************************** 2026-03-25 02:26:59.684469 | orchestrator | Wednesday 25 March 2026 02:26:49 +0000 (0:00:00.857) 0:04:17.924 ******* 2026-03-25 02:26:59.684518 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-03-25 02:26:59.684536 | orchestrator | ok: [testbed-node-4 -> localhost] => 
(item=node-role.osism.tech/compute-plane=true) 2026-03-25 02:26:59.684551 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-03-25 02:26:59.684569 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-03-25 02:26:59.684606 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-03-25 02:26:59.684623 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-03-25 02:26:59.684639 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-03-25 02:26:59.684657 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled) 2026-03-25 02:26:59.684676 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-03-25 02:26:59.684721 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled) 2026-03-25 02:26:59.684740 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-03-25 02:26:59.684759 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled) 2026-03-25 02:26:59.684778 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-03-25 02:26:59.684796 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-03-25 02:26:59.684813 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-03-25 02:26:59.684854 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-03-25 02:26:59.684874 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-03-25 02:26:59.684892 | orchestrator | ok: [testbed-node-2 -> localhost] 
=> (item=node-role.osism.tech/network-plane=true) 2026-03-25 02:26:59.684910 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-03-25 02:26:59.684928 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-03-25 02:26:59.684946 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-03-25 02:26:59.684964 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-03-25 02:26:59.684982 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-03-25 02:26:59.685000 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-03-25 02:26:59.685018 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-03-25 02:26:59.685036 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-03-25 02:26:59.685054 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-03-25 02:26:59.685072 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-03-25 02:26:59.685090 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-03-25 02:26:59.685108 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-03-25 02:26:59.685126 | orchestrator | 2026-03-25 02:26:59.685144 | orchestrator | TASK [Manage annotations] ****************************************************** 2026-03-25 02:26:59.685163 | orchestrator | Wednesday 25 March 2026 02:26:58 +0000 (0:00:08.928) 0:04:26.853 ******* 2026-03-25 02:26:59.685181 | orchestrator | skipping: [testbed-node-3] 2026-03-25 02:26:59.685199 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:26:59.685217 | orchestrator | 
skipping: [testbed-node-5] 2026-03-25 02:26:59.685236 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:26:59.685255 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:26:59.685274 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:26:59.685293 | orchestrator | 2026-03-25 02:26:59.685312 | orchestrator | TASK [Manage taints] *********************************************************** 2026-03-25 02:26:59.685330 | orchestrator | Wednesday 25 March 2026 02:26:58 +0000 (0:00:00.584) 0:04:27.437 ******* 2026-03-25 02:26:59.685349 | orchestrator | skipping: [testbed-node-3] 2026-03-25 02:26:59.685377 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:26:59.685395 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:26:59.685413 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:26:59.685431 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:26:59.685449 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:26:59.685467 | orchestrator | 2026-03-25 02:26:59.685564 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-25 02:26:59.685583 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-25 02:26:59.685604 | orchestrator | testbed-node-0 : ok=50  changed=23  unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2026-03-25 02:26:59.685623 | orchestrator | testbed-node-1 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-03-25 02:26:59.685641 | orchestrator | testbed-node-2 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-03-25 02:26:59.685658 | orchestrator | testbed-node-3 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-03-25 02:26:59.685676 | orchestrator | testbed-node-4 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-03-25 02:26:59.685693 | orchestrator | testbed-node-5 : ok=16  
changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-03-25 02:26:59.685710 | orchestrator | 2026-03-25 02:26:59.685728 | orchestrator | 2026-03-25 02:26:59.685746 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-25 02:26:59.685763 | orchestrator | Wednesday 25 March 2026 02:26:59 +0000 (0:00:00.730) 0:04:28.168 ******* 2026-03-25 02:26:59.685793 | orchestrator | =============================================================================== 2026-03-25 02:27:00.132450 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 53.68s 2026-03-25 02:27:00.132656 | orchestrator | k3s_server_post : Wait for Cilium resources ---------------------------- 42.68s 2026-03-25 02:27:00.132674 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 25.02s 2026-03-25 02:27:00.132687 | orchestrator | kubectl : Install required packages ------------------------------------ 13.22s 2026-03-25 02:27:00.132699 | orchestrator | k3s_agent : Manage k3s service ------------------------------------------ 9.78s 2026-03-25 02:27:00.132710 | orchestrator | kubectl : Add repository Debian ----------------------------------------- 8.99s 2026-03-25 02:27:00.132722 | orchestrator | Manage labels ----------------------------------------------------------- 8.93s 2026-03-25 02:27:00.132733 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 5.88s 2026-03-25 02:27:00.132744 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 5.41s 2026-03-25 02:27:00.132755 | orchestrator | k3s_server_post : Install Cilium ---------------------------------------- 5.31s 2026-03-25 02:27:00.132766 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 2.87s 2026-03-25 02:27:00.132780 | orchestrator 
| k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers --- 2.76s 2026-03-25 02:27:00.132792 | orchestrator | k3s_server : Detect Kubernetes version for label compatibility ---------- 2.65s 2026-03-25 02:27:00.132803 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 2.27s 2026-03-25 02:27:00.132814 | orchestrator | k3s_server_post : Test for BGP config resources ------------------------- 2.06s 2026-03-25 02:27:00.132825 | orchestrator | k3s_server_post : Copy BGP manifests to first master -------------------- 1.98s 2026-03-25 02:27:00.132836 | orchestrator | kubectl : Install apt-transport-https package --------------------------- 1.89s 2026-03-25 02:27:00.132881 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.81s 2026-03-25 02:27:00.132893 | orchestrator | k3s_server : Create custom resolv.conf for k3s -------------------------- 1.78s 2026-03-25 02:27:00.132905 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 1.74s 2026-03-25 02:27:00.562471 | orchestrator | + osism apply copy-kubeconfig 2026-03-25 02:27:12.880546 | orchestrator | 2026-03-25 02:27:12 | INFO  | Task 98afdad5-0fea-4726-9f51-5ede8865b5f0 (copy-kubeconfig) was prepared for execution. 2026-03-25 02:27:12.880628 | orchestrator | 2026-03-25 02:27:12 | INFO  | It takes a moment until task 98afdad5-0fea-4726-9f51-5ede8865b5f0 (copy-kubeconfig) has been started and output is visible here. 
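Before the copy-kubeconfig play output appears below, the effect of its tasks can be sketched offline. This is a minimal illustration, not the play itself: the VIP address `192.168.16.9` and the `/tmp` paths are assumptions for demonstration; the real play fetches the kubeconfig from the first master and rewrites the API server endpoint so clients talk to the cluster VIP rather than a node-local address.

```shell
# Stand-in for a kubeconfig fetched from the first master node.
# (Paths and the VIP address are hypothetical, chosen for illustration.)
cat > /tmp/kubeconfig.demo <<'EOF'
clusters:
- cluster:
    server: https://127.0.0.1:6443
  name: default
EOF

# Rewrite the server endpoint to the cluster VIP, as the
# "Change server address in the kubeconfig file" task does.
sed 's|https://127.0.0.1:6443|https://192.168.16.9:6443|' \
  /tmp/kubeconfig.demo > /tmp/kubeconfig.out

grep 'server:' /tmp/kubeconfig.out
```

Writing to a separate output file (instead of `sed -i`) keeps the sketch portable across sed implementations.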
2026-03-25 02:27:20.734696 | orchestrator | 2026-03-25 02:27:20.734772 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] ************************* 2026-03-25 02:27:20.734779 | orchestrator | 2026-03-25 02:27:20.734784 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-03-25 02:27:20.734788 | orchestrator | Wednesday 25 March 2026 02:27:17 +0000 (0:00:00.170) 0:00:00.170 ******* 2026-03-25 02:27:20.734793 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-03-25 02:27:20.734797 | orchestrator | 2026-03-25 02:27:20.734802 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-03-25 02:27:20.734819 | orchestrator | Wednesday 25 March 2026 02:27:18 +0000 (0:00:00.778) 0:00:00.948 ******* 2026-03-25 02:27:20.734823 | orchestrator | changed: [testbed-manager] 2026-03-25 02:27:20.734828 | orchestrator | 2026-03-25 02:27:20.734832 | orchestrator | TASK [Change server address in the kubeconfig file] **************************** 2026-03-25 02:27:20.734836 | orchestrator | Wednesday 25 March 2026 02:27:19 +0000 (0:00:01.311) 0:00:02.260 ******* 2026-03-25 02:27:20.734842 | orchestrator | changed: [testbed-manager] 2026-03-25 02:27:20.734846 | orchestrator | 2026-03-25 02:27:20.734854 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-25 02:27:20.734861 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-25 02:27:20.734868 | orchestrator | 2026-03-25 02:27:20.734875 | orchestrator | 2026-03-25 02:27:20.734881 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-25 02:27:20.734887 | orchestrator | Wednesday 25 March 2026 02:27:20 +0000 (0:00:00.557) 0:00:02.818 ******* 2026-03-25 02:27:20.734893 | orchestrator | 
=============================================================================== 2026-03-25 02:27:20.734899 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.31s 2026-03-25 02:27:20.734905 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.78s 2026-03-25 02:27:20.734912 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.56s 2026-03-25 02:27:21.143982 | orchestrator | + sh -c /opt/configuration/scripts/deploy/200-infrastructure.sh 2026-03-25 02:27:33.795090 | orchestrator | 2026-03-25 02:27:33 | INFO  | Task 2306c4f3-5753-4d95-b13f-8826f2641953 (openstackclient) was prepared for execution. 2026-03-25 02:27:33.795178 | orchestrator | 2026-03-25 02:27:33 | INFO  | It takes a moment until task 2306c4f3-5753-4d95-b13f-8826f2641953 (openstackclient) has been started and output is visible here. 2026-03-25 02:28:23.491203 | orchestrator | 2026-03-25 02:28:23.491336 | orchestrator | PLAY [Apply role openstackclient] ********************************************** 2026-03-25 02:28:23.491354 | orchestrator | 2026-03-25 02:28:23.491365 | orchestrator | TASK [osism.services.openstackclient : Include tasks] ************************** 2026-03-25 02:28:23.491376 | orchestrator | Wednesday 25 March 2026 02:27:38 +0000 (0:00:00.253) 0:00:00.253 ******* 2026-03-25 02:28:23.491387 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager 2026-03-25 02:28:23.491399 | orchestrator | 2026-03-25 02:28:23.491435 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************ 2026-03-25 02:28:23.491446 | orchestrator | Wednesday 25 March 2026 02:27:38 +0000 (0:00:00.261) 0:00:00.515 ******* 2026-03-25 02:28:23.491479 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack) 2026-03-25 
02:28:23.491491 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data) 2026-03-25 02:28:23.491500 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient) 2026-03-25 02:28:23.491510 | orchestrator | 2026-03-25 02:28:23.491520 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] *********** 2026-03-25 02:28:23.491529 | orchestrator | Wednesday 25 March 2026 02:27:40 +0000 (0:00:01.399) 0:00:01.914 ******* 2026-03-25 02:28:23.491539 | orchestrator | changed: [testbed-manager] 2026-03-25 02:28:23.491549 | orchestrator | 2026-03-25 02:28:23.491558 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] ********* 2026-03-25 02:28:23.491568 | orchestrator | Wednesday 25 March 2026 02:27:42 +0000 (0:00:01.675) 0:00:03.590 ******* 2026-03-25 02:28:23.491577 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left). 2026-03-25 02:28:23.491588 | orchestrator | ok: [testbed-manager] 2026-03-25 02:28:23.491598 | orchestrator | 2026-03-25 02:28:23.491607 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] ********** 2026-03-25 02:28:23.491617 | orchestrator | Wednesday 25 March 2026 02:28:17 +0000 (0:00:35.927) 0:00:39.517 ******* 2026-03-25 02:28:23.491626 | orchestrator | changed: [testbed-manager] 2026-03-25 02:28:23.491635 | orchestrator | 2026-03-25 02:28:23.491645 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] ********** 2026-03-25 02:28:23.491654 | orchestrator | Wednesday 25 March 2026 02:28:18 +0000 (0:00:00.978) 0:00:40.496 ******* 2026-03-25 02:28:23.491663 | orchestrator | ok: [testbed-manager] 2026-03-25 02:28:23.491673 | orchestrator | 2026-03-25 02:28:23.491682 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] *** 2026-03-25 02:28:23.491692 | orchestrator | Wednesday 25 March 2026 02:28:19 
+0000 (0:00:00.716) 0:00:41.212 ******* 2026-03-25 02:28:23.491701 | orchestrator | changed: [testbed-manager] 2026-03-25 02:28:23.491710 | orchestrator | 2026-03-25 02:28:23.491720 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] *** 2026-03-25 02:28:23.491730 | orchestrator | Wednesday 25 March 2026 02:28:21 +0000 (0:00:01.474) 0:00:42.687 ******* 2026-03-25 02:28:23.491740 | orchestrator | changed: [testbed-manager] 2026-03-25 02:28:23.491749 | orchestrator | 2026-03-25 02:28:23.491758 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] *** 2026-03-25 02:28:23.491768 | orchestrator | Wednesday 25 March 2026 02:28:21 +0000 (0:00:00.780) 0:00:43.467 ******* 2026-03-25 02:28:23.491777 | orchestrator | changed: [testbed-manager] 2026-03-25 02:28:23.491787 | orchestrator | 2026-03-25 02:28:23.491796 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] *** 2026-03-25 02:28:23.491805 | orchestrator | Wednesday 25 March 2026 02:28:22 +0000 (0:00:00.698) 0:00:44.166 ******* 2026-03-25 02:28:23.491814 | orchestrator | ok: [testbed-manager] 2026-03-25 02:28:23.491824 | orchestrator | 2026-03-25 02:28:23.491833 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-25 02:28:23.491843 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-25 02:28:23.491854 | orchestrator | 2026-03-25 02:28:23.491863 | orchestrator | 2026-03-25 02:28:23.491872 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-25 02:28:23.491882 | orchestrator | Wednesday 25 March 2026 02:28:23 +0000 (0:00:00.449) 0:00:44.616 ******* 2026-03-25 02:28:23.491892 | orchestrator | =============================================================================== 2026-03-25 02:28:23.491901 | orchestrator | 
osism.services.openstackclient : Manage openstackclient service -------- 35.93s 2026-03-25 02:28:23.491910 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 1.68s 2026-03-25 02:28:23.491929 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 1.47s 2026-03-25 02:28:23.491939 | orchestrator | osism.services.openstackclient : Create required directories ------------ 1.40s 2026-03-25 02:28:23.491948 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 0.98s 2026-03-25 02:28:23.491957 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 0.78s 2026-03-25 02:28:23.491967 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 0.72s 2026-03-25 02:28:23.491976 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.70s 2026-03-25 02:28:23.491986 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.45s 2026-03-25 02:28:23.491995 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.26s 2026-03-25 02:28:26.095649 | orchestrator | 2026-03-25 02:28:26 | INFO  | Task f9a938a4-f65b-4c02-a3d0-538b7a5d9473 (common) was prepared for execution. 2026-03-25 02:28:26.095731 | orchestrator | 2026-03-25 02:28:26 | INFO  | It takes a moment until task f9a938a4-f65b-4c02-a3d0-538b7a5d9473 (common) has been started and output is visible here. 
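The common role applied next begins by creating one config directory per kolla service (cron, fluentd, kolla-toolbox) on every host. A minimal sketch of that "Ensuring config directories exist" step, assuming `/tmp/kolla.demo` in place of the real target path (which is an assumption here):

```shell
# One config directory per service, mirroring the loop items seen in the
# task output below. /tmp/kolla.demo is a hypothetical stand-in path.
base=/tmp/kolla.demo
for svc in cron fluentd kolla-toolbox; do
  mkdir -p "$base/$svc"
done

ls "$base"
```

`mkdir -p` makes the step idempotent, which matches the task reporting `changed` only when a directory was actually missing.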
2026-03-25 02:28:39.449132 | orchestrator | 2026-03-25 02:28:39.449251 | orchestrator | PLAY [Apply role common] ******************************************************* 2026-03-25 02:28:39.449260 | orchestrator | 2026-03-25 02:28:39.449267 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-03-25 02:28:39.449274 | orchestrator | Wednesday 25 March 2026 02:28:30 +0000 (0:00:00.305) 0:00:00.305 ******* 2026-03-25 02:28:39.449281 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-25 02:28:39.449289 | orchestrator | 2026-03-25 02:28:39.449304 | orchestrator | TASK [common : Ensuring config directories exist] ****************************** 2026-03-25 02:28:39.450412 | orchestrator | Wednesday 25 March 2026 02:28:32 +0000 (0:00:01.422) 0:00:01.727 ******* 2026-03-25 02:28:39.450471 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-25 02:28:39.450480 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-25 02:28:39.450487 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-25 02:28:39.450495 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-25 02:28:39.450502 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-25 02:28:39.450509 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-25 02:28:39.450516 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-25 02:28:39.450523 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-25 02:28:39.450559 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 
'fluentd']) 2026-03-25 02:28:39.450567 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-25 02:28:39.450575 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-25 02:28:39.450582 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-25 02:28:39.450590 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-25 02:28:39.450596 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-25 02:28:39.450603 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-25 02:28:39.450610 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-25 02:28:39.450617 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-25 02:28:39.450644 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-25 02:28:39.450652 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-25 02:28:39.450659 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-25 02:28:39.450666 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-25 02:28:39.450673 | orchestrator | 2026-03-25 02:28:39.450680 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-03-25 02:28:39.450687 | orchestrator | Wednesday 25 March 2026 02:28:35 +0000 (0:00:02.771) 0:00:04.499 ******* 2026-03-25 02:28:39.450694 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, 
testbed-node-3, testbed-node-4, testbed-node-5 2026-03-25 02:28:39.450703 | orchestrator | 2026-03-25 02:28:39.450710 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2026-03-25 02:28:39.450720 | orchestrator | Wednesday 25 March 2026 02:28:36 +0000 (0:00:01.533) 0:00:06.033 ******* 2026-03-25 02:28:39.450731 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-25 02:28:39.450740 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-25 02:28:39.450771 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', 
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-25 02:28:39.450779 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-25 02:28:39.450787 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-25 02:28:39.450794 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-25 02:28:39.450806 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-25 02:28:39.450814 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 02:28:39.450821 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 02:28:39.450839 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': 
{'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 02:28:40.517647 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 02:28:40.517727 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 02:28:40.517752 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': 
{'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 02:28:40.517759 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 02:28:40.517794 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 02:28:40.517812 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 
02:28:40.517819 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 02:28:40.517843 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 02:28:40.517850 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 02:28:40.517856 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 02:28:40.517869 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 
'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 02:28:40.517875 | orchestrator | 2026-03-25 02:28:40.517883 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2026-03-25 02:28:40.517889 | orchestrator | Wednesday 25 March 2026 02:28:40 +0000 (0:00:03.591) 0:00:09.624 ******* 2026-03-25 02:28:40.517897 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-25 02:28:40.517904 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 02:28:40.517910 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 
'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 02:28:40.517916 | orchestrator | skipping: [testbed-manager] 2026-03-25 02:28:40.517923 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-25 02:28:40.517938 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 02:28:41.256549 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 02:28:41.256642 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:28:41.256682 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-25 02:28:41.256688 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 02:28:41.256693 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 02:28:41.256697 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:28:41.256701 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-25 02:28:41.256708 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 02:28:41.256712 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 02:28:41.256716 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:28:41.256733 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-25 02:28:41.256741 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 02:28:41.256745 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 02:28:41.256749 | orchestrator | skipping: [testbed-node-3] 2026-03-25 02:28:41.256753 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-25 02:28:41.256757 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 02:28:41.256761 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 02:28:41.256765 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:28:41.256769 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-25 02:28:41.256776 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': 
{'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 02:28:42.158085 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 02:28:42.158166 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:28:42.158175 | orchestrator | 2026-03-25 02:28:42.158181 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2026-03-25 02:28:42.158189 | orchestrator | Wednesday 25 March 2026 02:28:41 +0000 (0:00:01.089) 0:00:10.713 ******* 2026-03-25 02:28:42.158197 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-25 02:28:42.158208 | orchestrator | skipping: 
[testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 02:28:42.158215 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 02:28:42.158221 | orchestrator | skipping: [testbed-manager] 2026-03-25 02:28:42.158244 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-25 02:28:42.158256 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 
'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 02:28:42.158282 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 02:28:42.158288 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:28:42.158336 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-25 02:28:42.158346 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 02:28:42.158352 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 02:28:42.158356 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:28:42.158360 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-25 02:28:42.158364 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  
2026-03-25 02:28:42.158370 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 02:28:42.158379 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:28:42.158383 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-25 02:28:42.158398 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 02:28:47.610186 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 02:28:47.610315 | orchestrator | skipping: [testbed-node-3] 2026-03-25 02:28:47.610342 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-25 02:28:47.610363 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 02:28:47.610381 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 02:28:47.610397 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:28:47.610413 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-25 02:28:47.610499 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 02:28:47.610520 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 02:28:47.610538 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:28:47.610555 | orchestrator | 2026-03-25 
02:28:47.610573 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2026-03-25 02:28:47.610591 | orchestrator | Wednesday 25 March 2026 02:28:43 +0000 (0:00:01.903) 0:00:12.617 ******* 2026-03-25 02:28:47.610608 | orchestrator | skipping: [testbed-manager] 2026-03-25 02:28:47.610623 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:28:47.610640 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:28:47.610657 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:28:47.610697 | orchestrator | skipping: [testbed-node-3] 2026-03-25 02:28:47.610716 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:28:47.610733 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:28:47.610750 | orchestrator | 2026-03-25 02:28:47.610768 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2026-03-25 02:28:47.610786 | orchestrator | Wednesday 25 March 2026 02:28:43 +0000 (0:00:00.756) 0:00:13.374 ******* 2026-03-25 02:28:47.610804 | orchestrator | skipping: [testbed-manager] 2026-03-25 02:28:47.610823 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:28:47.610842 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:28:47.610858 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:28:47.610875 | orchestrator | skipping: [testbed-node-3] 2026-03-25 02:28:47.610892 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:28:47.610909 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:28:47.610926 | orchestrator | 2026-03-25 02:28:47.610943 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2026-03-25 02:28:47.610961 | orchestrator | Wednesday 25 March 2026 02:28:44 +0000 (0:00:00.982) 0:00:14.356 ******* 2026-03-25 02:28:47.610980 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-25 02:28:47.611020 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-25 02:28:47.611052 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-25 02:28:47.611076 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-25 02:28:47.611094 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-25 02:28:47.611111 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-25 02:28:47.611150 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-25 02:28:50.502964 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 02:28:50.503055 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 02:28:50.503083 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 02:28:50.503098 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 02:28:50.503102 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 02:28:50.503107 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 02:28:50.503128 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 02:28:50.503139 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 02:28:50.503145 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 02:28:50.503154 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 02:28:50.503159 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 
'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 02:28:50.503163 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 02:28:50.503167 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 02:28:50.503171 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 02:28:50.503175 | orchestrator | 2026-03-25 02:28:50.503179 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2026-03-25 02:28:50.503185 | orchestrator | Wednesday 25 March 2026 02:28:48 +0000 
(0:00:03.479) 0:00:17.836 ******* 2026-03-25 02:28:50.503189 | orchestrator | [WARNING]: Skipped 2026-03-25 02:28:50.503193 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2026-03-25 02:28:50.503199 | orchestrator | to this access issue: 2026-03-25 02:28:50.503203 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2026-03-25 02:28:50.503207 | orchestrator | directory 2026-03-25 02:28:50.503211 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-25 02:28:50.503215 | orchestrator | 2026-03-25 02:28:50.503219 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2026-03-25 02:28:50.503223 | orchestrator | Wednesday 25 March 2026 02:28:49 +0000 (0:00:01.065) 0:00:18.901 ******* 2026-03-25 02:28:50.503227 | orchestrator | [WARNING]: Skipped 2026-03-25 02:28:50.503233 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2026-03-25 02:29:01.006767 | orchestrator | to this access issue: 2026-03-25 02:29:01.006928 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2026-03-25 02:29:01.006947 | orchestrator | directory 2026-03-25 02:29:01.006959 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-25 02:29:01.006973 | orchestrator | 2026-03-25 02:29:01.007033 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2026-03-25 02:29:01.007048 | orchestrator | Wednesday 25 March 2026 02:28:50 +0000 (0:00:01.364) 0:00:20.266 ******* 2026-03-25 02:29:01.007085 | orchestrator | [WARNING]: Skipped 2026-03-25 02:29:01.007097 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2026-03-25 02:29:01.007108 | orchestrator | to this access issue: 2026-03-25 02:29:01.007120 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 
2026-03-25 02:29:01.007138 | orchestrator | directory 2026-03-25 02:29:01.007156 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-25 02:29:01.007175 | orchestrator | 2026-03-25 02:29:01.007193 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2026-03-25 02:29:01.007212 | orchestrator | Wednesday 25 March 2026 02:28:51 +0000 (0:00:00.904) 0:00:21.170 ******* 2026-03-25 02:29:01.007231 | orchestrator | [WARNING]: Skipped 2026-03-25 02:29:01.007249 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2026-03-25 02:29:01.007262 | orchestrator | to this access issue: 2026-03-25 02:29:01.007273 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2026-03-25 02:29:01.007284 | orchestrator | directory 2026-03-25 02:29:01.007297 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-25 02:29:01.007309 | orchestrator | 2026-03-25 02:29:01.007322 | orchestrator | TASK [common : Copying over fluentd.conf] ************************************** 2026-03-25 02:29:01.007334 | orchestrator | Wednesday 25 March 2026 02:28:52 +0000 (0:00:00.907) 0:00:22.078 ******* 2026-03-25 02:29:01.007347 | orchestrator | changed: [testbed-node-0] 2026-03-25 02:29:01.007360 | orchestrator | changed: [testbed-manager] 2026-03-25 02:29:01.007373 | orchestrator | changed: [testbed-node-3] 2026-03-25 02:29:01.007385 | orchestrator | changed: [testbed-node-1] 2026-03-25 02:29:01.007397 | orchestrator | changed: [testbed-node-2] 2026-03-25 02:29:01.007409 | orchestrator | changed: [testbed-node-4] 2026-03-25 02:29:01.007443 | orchestrator | changed: [testbed-node-5] 2026-03-25 02:29:01.007489 | orchestrator | 2026-03-25 02:29:01.007508 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2026-03-25 02:29:01.007526 | orchestrator | Wednesday 25 March 2026 02:28:55 +0000 (0:00:02.674) 0:00:24.753 ******* 
2026-03-25 02:29:01.007545 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-25 02:29:01.007565 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-25 02:29:01.007582 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-25 02:29:01.007600 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-25 02:29:01.007620 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-25 02:29:01.007639 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-25 02:29:01.007666 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-25 02:29:01.007686 | orchestrator | 2026-03-25 02:29:01.007705 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2026-03-25 02:29:01.007724 | orchestrator | Wednesday 25 March 2026 02:28:57 +0000 (0:00:02.187) 0:00:26.940 ******* 2026-03-25 02:29:01.007743 | orchestrator | changed: [testbed-manager] 2026-03-25 02:29:01.007761 | orchestrator | changed: [testbed-node-0] 2026-03-25 02:29:01.007779 | orchestrator | changed: [testbed-node-1] 2026-03-25 02:29:01.007799 | orchestrator | changed: [testbed-node-2] 2026-03-25 02:29:01.007817 | orchestrator | changed: [testbed-node-3] 2026-03-25 02:29:01.007836 | orchestrator | changed: [testbed-node-4] 2026-03-25 02:29:01.007856 | orchestrator | changed: [testbed-node-5] 2026-03-25 02:29:01.007875 | orchestrator | 2026-03-25 02:29:01.007893 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2026-03-25 02:29:01.007921 | orchestrator | Wednesday 25 
March 2026 02:28:59 +0000 (0:00:01.904) 0:00:28.845 ******* 2026-03-25 02:29:01.007935 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-25 02:29:01.007972 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 02:29:01.007985 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-25 02:29:01.007997 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 
'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 02:29:01.008008 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-25 02:29:01.008019 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 02:29:01.008037 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-25 02:29:01.008056 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 02:29:01.008077 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 02:29:01.008098 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 
2026-03-25 02:29:06.764677 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 02:29:06.764781 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 02:29:06.764797 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 02:29:06.764820 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-25 02:29:06.764829 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 02:29:06.764857 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 02:29:06.764864 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-25 02:29:06.764887 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 02:29:06.764894 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 02:29:06.764901 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 02:29:06.764908 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 02:29:06.764915 | orchestrator |
2026-03-25 02:29:06.764923 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************
2026-03-25 02:29:06.764932 | orchestrator | Wednesday 25 March 2026 02:29:00 +0000 (0:00:01.617) 0:00:30.462 *******
2026-03-25 02:29:06.764940 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-03-25 02:29:06.764947 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-03-25 02:29:06.764960 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-03-25 02:29:06.764967 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-03-25 02:29:06.764974 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-03-25 02:29:06.764981 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-03-25 02:29:06.764988 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-03-25 02:29:06.764994 | orchestrator |
2026-03-25 02:29:06.765001 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] **********************
2026-03-25 02:29:06.765008 | orchestrator | Wednesday 25 March 2026 02:29:02 +0000 (0:00:01.994) 0:00:32.457 *******
2026-03-25 02:29:06.765012 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-03-25 02:29:06.765017 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-03-25 02:29:06.765021 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-03-25 02:29:06.765031 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-03-25 02:29:06.765035 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-03-25 02:29:06.765039 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-03-25 02:29:06.765043 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-03-25 02:29:06.765047 | orchestrator |
2026-03-25 02:29:06.765051 | orchestrator | TASK [common : Check common containers] ****************************************
2026-03-25 02:29:06.765055 | orchestrator | Wednesday 25 March 2026 02:29:04 +0000 (0:00:01.742) 0:00:34.199 *******
2026-03-25 02:29:06.765060 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-25 02:29:06.765077 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-25 02:29:07.362732 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-25 02:29:07.362838 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-25 02:29:07.362869 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-25 02:29:07.362888 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-25 02:29:07.362896 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-25 02:29:07.362903 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 02:29:07.362911 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 02:29:07.362933 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 02:29:07.362941 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 02:29:07.362954 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 02:29:07.362965 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 02:29:07.362972 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 02:29:07.362980 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 02:29:07.362989 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 02:29:07.363002 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 02:30:31.709683 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 02:30:31.709784 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 02:30:31.709791 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 02:30:31.709805 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 02:30:31.709809 | orchestrator |
2026-03-25 02:30:31.709815 | orchestrator | TASK [common : Creating log volume] ********************************************
2026-03-25 02:30:31.709820 | orchestrator | Wednesday 25 March 2026 02:29:07 +0000 (0:00:02.616) 0:00:36.816 *******
2026-03-25 02:30:31.709824 | orchestrator | changed: [testbed-manager]
2026-03-25 02:30:31.709829 | orchestrator | changed: [testbed-node-0]
2026-03-25 02:30:31.709833 | orchestrator | changed: [testbed-node-1]
2026-03-25 02:30:31.709836 | orchestrator | changed: [testbed-node-2]
2026-03-25 02:30:31.709841 | orchestrator | changed: [testbed-node-3]
2026-03-25 02:30:31.709847 | orchestrator | changed: [testbed-node-4]
2026-03-25 02:30:31.709853 | orchestrator | changed: [testbed-node-5]
2026-03-25 02:30:31.709858 | orchestrator |
2026-03-25 02:30:31.709865 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] ***********************
2026-03-25 02:30:31.709870 | orchestrator | Wednesday 25 March 2026 02:29:08 +0000 (0:00:01.539) 0:00:38.355 *******
2026-03-25 02:30:31.709876 | orchestrator | changed: [testbed-manager]
2026-03-25 02:30:31.709881 | orchestrator | changed: [testbed-node-0]
2026-03-25 02:30:31.709887 | orchestrator | changed: [testbed-node-1]
2026-03-25 02:30:31.709893 | orchestrator | changed: [testbed-node-2]
2026-03-25 02:30:31.709898 | orchestrator | changed: [testbed-node-3]
2026-03-25 02:30:31.709904 | orchestrator | changed: [testbed-node-4]
2026-03-25 02:30:31.709909 | orchestrator | changed: [testbed-node-5]
2026-03-25 02:30:31.709915 | orchestrator |
2026-03-25 02:30:31.709920 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-03-25 02:30:31.709926 | orchestrator | Wednesday 25 March 2026 02:29:10 +0000 (0:00:01.163) 0:00:39.519 *******
2026-03-25 02:30:31.709931 | orchestrator |
2026-03-25 02:30:31.709937 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-03-25 02:30:31.709943 | orchestrator | Wednesday 25 March 2026 02:29:10 +0000 (0:00:00.068) 0:00:39.587 *******
2026-03-25 02:30:31.709949 | orchestrator |
2026-03-25 02:30:31.709954 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-03-25 02:30:31.709960 | orchestrator | Wednesday 25 March 2026 02:29:10 +0000 (0:00:00.068) 0:00:39.656 *******
2026-03-25 02:30:31.709967 | orchestrator |
2026-03-25 02:30:31.709972 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-03-25 02:30:31.709978 | orchestrator | Wednesday 25 March 2026 02:29:10 +0000 (0:00:00.086) 0:00:39.743 *******
2026-03-25 02:30:31.709984 | orchestrator |
2026-03-25 02:30:31.709990 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-03-25 02:30:31.710004 | orchestrator | Wednesday 25 March 2026 02:29:10 +0000 (0:00:00.245) 0:00:39.988 *******
2026-03-25 02:30:31.710011 | orchestrator |
2026-03-25 02:30:31.710072 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-03-25 02:30:31.710079 | orchestrator | Wednesday 25 March 2026 02:29:10 +0000 (0:00:00.065) 0:00:40.054 *******
2026-03-25 02:30:31.710085 | orchestrator |
2026-03-25 02:30:31.710091 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-03-25 02:30:31.710097 | orchestrator | Wednesday 25 March 2026 02:29:10 +0000 (0:00:00.073) 0:00:40.127 *******
2026-03-25 02:30:31.710103 | orchestrator |
2026-03-25 02:30:31.710109 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] ***************************
2026-03-25 02:30:31.710116 | orchestrator | Wednesday 25 March 2026 02:29:10 +0000 (0:00:00.112) 0:00:40.239 *******
2026-03-25 02:30:31.710120 | orchestrator | changed: [testbed-node-0]
2026-03-25 02:30:31.710124 | orchestrator | changed: [testbed-manager]
2026-03-25 02:30:31.710130 | orchestrator | changed: [testbed-node-2]
2026-03-25 02:30:31.710135 | orchestrator | changed: [testbed-node-3]
2026-03-25 02:30:31.710145 | orchestrator | changed: [testbed-node-5]
2026-03-25 02:30:31.710166 | orchestrator | changed: [testbed-node-4]
2026-03-25 02:30:31.710173 | orchestrator | changed: [testbed-node-1]
2026-03-25 02:30:31.710179 | orchestrator |
2026-03-25 02:30:31.710186 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] *********************
2026-03-25 02:30:31.710192 | orchestrator | Wednesday 25 March 2026 02:29:49 +0000 (0:00:38.430) 0:01:18.670 *******
2026-03-25 02:30:31.710198 | orchestrator | changed: [testbed-node-0]
2026-03-25 02:30:31.710204 | orchestrator | changed: [testbed-node-2]
2026-03-25 02:30:31.710208 | orchestrator | changed: [testbed-node-4]
2026-03-25 02:30:31.710212 | orchestrator | changed: [testbed-manager]
2026-03-25 02:30:31.710216 | orchestrator | changed: [testbed-node-1]
2026-03-25 02:30:31.710219 | orchestrator | changed: [testbed-node-3]
2026-03-25 02:30:31.710223 | orchestrator | changed: [testbed-node-5]
2026-03-25 02:30:31.710227 | orchestrator |
2026-03-25 02:30:31.710230 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] ****
2026-03-25 02:30:31.710234 | orchestrator | Wednesday 25 March 2026 02:30:20 +0000 (0:00:31.691) 0:01:50.361 *******
2026-03-25 02:30:31.710238 | orchestrator | ok: [testbed-node-0]
2026-03-25 02:30:31.710244 | orchestrator | ok: [testbed-manager]
2026-03-25 02:30:31.710248 | orchestrator | ok: [testbed-node-2]
2026-03-25 02:30:31.710252 | orchestrator | ok: [testbed-node-1]
2026-03-25 02:30:31.710257 | orchestrator | ok: [testbed-node-3]
2026-03-25 02:30:31.710261 | orchestrator | ok: [testbed-node-4]
2026-03-25 02:30:31.710265 | orchestrator | ok: [testbed-node-5]
2026-03-25 02:30:31.710269 | orchestrator |
2026-03-25 02:30:31.710274 | orchestrator | RUNNING HANDLER [common : Restart cron container] ******************************
2026-03-25 02:30:31.710278 | orchestrator | Wednesday 25 March 2026 02:30:22 +0000 (0:00:01.905) 0:01:52.267 *******
2026-03-25 02:30:31.710282 | orchestrator | changed: [testbed-node-0]
2026-03-25 02:30:31.710287 | orchestrator | changed: [testbed-node-1]
2026-03-25 02:30:31.710291 | orchestrator | changed: [testbed-node-2]
2026-03-25 02:30:31.710295 | orchestrator | changed: [testbed-node-3]
2026-03-25 02:30:31.710300 | orchestrator | changed: [testbed-node-4]
2026-03-25 02:30:31.710304 | orchestrator | changed: [testbed-manager]
2026-03-25 02:30:31.710308 | orchestrator | changed: [testbed-node-5]
2026-03-25 02:30:31.710312 | orchestrator |
2026-03-25 02:30:31.710332 | orchestrator | PLAY RECAP *********************************************************************
2026-03-25 02:30:31.710337 | orchestrator | testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-03-25 02:30:31.710344 | orchestrator | testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-03-25 02:30:31.710357 | orchestrator | testbed-node-1 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-03-25 02:30:31.710367 | orchestrator | testbed-node-2 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-03-25 02:30:31.710371 | orchestrator | testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-03-25 02:30:31.710376 | orchestrator | testbed-node-4 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-03-25 02:30:31.710380 | orchestrator | testbed-node-5 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-03-25 02:30:31.710384 | orchestrator |
2026-03-25 02:30:31.710389 | orchestrator |
2026-03-25 02:30:31.710393 | orchestrator | TASKS RECAP ********************************************************************
2026-03-25 02:30:31.710397 | orchestrator | Wednesday 25 March 2026 02:30:31 +0000 (0:00:08.881) 0:02:01.148 *******
2026-03-25 02:30:31.710401 | orchestrator | ===============================================================================
2026-03-25 02:30:31.710406 | orchestrator | common : Restart fluentd container ------------------------------------- 38.43s
2026-03-25 02:30:31.710410 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 31.69s
2026-03-25 02:30:31.710414 | orchestrator | common : Restart cron container ----------------------------------------- 8.88s
2026-03-25 02:30:31.710419 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 3.59s
2026-03-25 02:30:31.710423 | orchestrator | common : Copying over config.json files for services -------------------- 3.48s
2026-03-25 02:30:31.710427 | orchestrator | common : Ensuring config directories exist ------------------------------ 2.77s
2026-03-25 02:30:31.710431 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 2.67s
2026-03-25 02:30:31.710436 | orchestrator | common : Check common containers ---------------------------------------- 2.62s
2026-03-25 02:30:31.710440 | orchestrator | common : Copying over cron logrotate config file ------------------------ 2.19s
2026-03-25 02:30:31.710460 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 1.99s
2026-03-25 02:30:31.710464 | orchestrator | common : Initializing toolbox container using normal user --------------- 1.91s
2026-03-25 02:30:31.710468 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 1.90s
2026-03-25 02:30:31.710472 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 1.90s
2026-03-25 02:30:31.710476 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 1.74s
2026-03-25 02:30:31.710479 | orchestrator | common : Ensuring config directories have correct owner and permission --- 1.62s
2026-03-25 02:30:31.710483 | orchestrator | common : Creating log volume -------------------------------------------- 1.54s
2026-03-25 02:30:31.710492 | orchestrator | common : include_tasks -------------------------------------------------- 1.53s
2026-03-25 02:30:32.190118 | orchestrator | common : include_tasks -------------------------------------------------- 1.42s
2026-03-25 02:30:32.190233 | orchestrator | common : Find custom fluentd filter config files ------------------------ 1.36s
2026-03-25 02:30:32.190254 | orchestrator | common : Link kolla_logs volume to /var/log/kolla ----------------------- 1.16s
2026-03-25 02:30:34.797331 | orchestrator | 2026-03-25 02:30:34 | INFO  | Task d9cfab4f-9b88-4b1e-96b2-7ae3e9b7875d (loadbalancer) was prepared for execution.
2026-03-25 02:30:34.797408 | orchestrator | 2026-03-25 02:30:34 | INFO  | It takes a moment until task d9cfab4f-9b88-4b1e-96b2-7ae3e9b7875d (loadbalancer) has been started and output is visible here.
2026-03-25 02:30:49.366170 | orchestrator |
2026-03-25 02:30:49.366301 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-25 02:30:49.366323 | orchestrator |
2026-03-25 02:30:49.366339 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-25 02:30:49.366356 | orchestrator | Wednesday 25 March 2026 02:30:39 +0000 (0:00:00.268) 0:00:00.268 *******
2026-03-25 02:30:49.366400 | orchestrator | ok: [testbed-node-0]
2026-03-25 02:30:49.366413 | orchestrator | ok: [testbed-node-1]
2026-03-25 02:30:49.366422 | orchestrator | ok: [testbed-node-2]
2026-03-25 02:30:49.366430 | orchestrator |
2026-03-25 02:30:49.366439 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-25 02:30:49.366472 | orchestrator | Wednesday 25 March 2026 02:30:39 +0000 (0:00:00.353) 0:00:00.621 *******
2026-03-25 02:30:49.366482 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True)
2026-03-25 02:30:49.366490 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True)
2026-03-25 02:30:49.366499 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True)
2026-03-25 02:30:49.366507 | orchestrator |
2026-03-25 02:30:49.366516 | orchestrator | PLAY [Apply role loadbalancer] *************************************************
2026-03-25 02:30:49.366524 | orchestrator |
2026-03-25 02:30:49.366533 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2026-03-25 02:30:49.366542 | orchestrator | Wednesday 25 March 2026 02:30:40 +0000 (0:00:00.480) 0:00:01.102 *******
2026-03-25 02:30:49.366564 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-25 02:30:49.366574 | orchestrator |
2026-03-25 02:30:49.366582 | orchestrator | TASK [loadbalancer : Check IPv6 support] ***************************************
2026-03-25 02:30:49.366591 | orchestrator | Wednesday 25 March 2026 02:30:40 +0000 (0:00:00.595) 0:00:01.697 *******
2026-03-25 02:30:49.366599 | orchestrator | ok: [testbed-node-2]
2026-03-25 02:30:49.366608 | orchestrator | ok: [testbed-node-1]
2026-03-25 02:30:49.366616 | orchestrator | ok: [testbed-node-0]
2026-03-25 02:30:49.366625 | orchestrator |
2026-03-25 02:30:49.366635 | orchestrator | TASK [Setting sysctl values] ***************************************************
2026-03-25 02:30:49.366645 | orchestrator | Wednesday 25 March 2026 02:30:41 +0000 (0:00:00.623) 0:00:02.320 *******
2026-03-25 02:30:49.366655 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-25 02:30:49.366664 | orchestrator |
2026-03-25 02:30:49.366674 | orchestrator | TASK [sysctl : Check IPv6 support] *********************************************
2026-03-25 02:30:49.366684 | orchestrator | Wednesday 25 March 2026 02:30:42 +0000 (0:00:00.777) 0:00:03.098 *******
2026-03-25 02:30:49.366694 | orchestrator | ok: [testbed-node-0]
2026-03-25 02:30:49.366704 | orchestrator | ok: [testbed-node-1]
2026-03-25 02:30:49.366713 | orchestrator | ok: [testbed-node-2]
2026-03-25 02:30:49.366722 | orchestrator |
2026-03-25 02:30:49.366730 | orchestrator | TASK [sysctl : Setting sysctl values] ******************************************
2026-03-25 02:30:49.366739 | orchestrator | Wednesday 25 March 2026 02:30:42 +0000 (0:00:00.619) 0:00:03.718 *******
2026-03-25 02:30:49.366747 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-03-25 02:30:49.366756 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-03-25 02:30:49.366765 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-03-25 02:30:49.366773 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-03-25 02:30:49.366781 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-03-25 02:30:49.366790 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-03-25 02:30:49.366798 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-03-25 02:30:49.366808 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-03-25 02:30:49.366816 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-03-25 02:30:49.366825 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-03-25 02:30:49.366843 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-03-25 02:30:49.366852 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-03-25 02:30:49.366860 | orchestrator |
2026-03-25 02:30:49.366869 | orchestrator | TASK [module-load : Load modules] **********************************************
2026-03-25 02:30:49.366877 | orchestrator | Wednesday 25 March 2026 02:30:45 +0000 (0:00:02.124) 0:00:05.842 *******
2026-03-25 02:30:49.366886 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2026-03-25 02:30:49.366895 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2026-03-25 02:30:49.366904 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2026-03-25 02:30:49.366913 | orchestrator |
2026-03-25 02:30:49.366921 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2026-03-25 02:30:49.366930 | orchestrator | Wednesday 25 March 2026 02:30:45 +0000 (0:00:00.705) 0:00:06.548 *******
2026-03-25 02:30:49.366938 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2026-03-25 02:30:49.366947 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2026-03-25 02:30:49.366956 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2026-03-25 02:30:49.366964 | orchestrator |
2026-03-25 02:30:49.366973 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2026-03-25 02:30:49.366981 | orchestrator | Wednesday 25 March 2026 02:30:46 +0000 (0:00:01.254) 0:00:07.802 *******
2026-03-25 02:30:49.366990 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)
2026-03-25 02:30:49.366998 | orchestrator | skipping: [testbed-node-0]
2026-03-25 02:30:49.367024 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)
2026-03-25 02:30:49.367034 | orchestrator | skipping: [testbed-node-1]
2026-03-25 02:30:49.367042 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)
2026-03-25 02:30:49.367051 | orchestrator | skipping: [testbed-node-2]
2026-03-25 02:30:49.367059 | orchestrator |
2026-03-25 02:30:49.367067 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************
2026-03-25 02:30:49.367076 | orchestrator | Wednesday 25 March 2026 02:30:47 +0000 (0:00:00.547) 0:00:08.349 *******
2026-03-25 02:30:49.367087 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-03-25 02:30:49.367108 | orchestrator | changed: [testbed-node-1] =>
(item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-25 02:30:49.367118 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-25 02:30:49.367133 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-25 
02:30:49.367142 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-25 02:30:49.367158 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-25 02:30:54.756736 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-25 02:30:54.756830 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-25 02:30:54.756838 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-25 02:30:54.756843 | orchestrator | 2026-03-25 02:30:54.756848 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2026-03-25 02:30:54.756854 | orchestrator | Wednesday 25 March 2026 02:30:49 +0000 (0:00:01.827) 0:00:10.177 ******* 2026-03-25 02:30:54.756858 | orchestrator | changed: [testbed-node-0] 2026-03-25 02:30:54.756878 | orchestrator | changed: [testbed-node-1] 2026-03-25 02:30:54.756882 | orchestrator | changed: [testbed-node-2] 2026-03-25 02:30:54.756886 | orchestrator | 2026-03-25 02:30:54.756890 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2026-03-25 02:30:54.756894 | orchestrator | Wednesday 25 March 2026 02:30:50 +0000 (0:00:00.913) 0:00:11.091 ******* 2026-03-25 02:30:54.756899 | orchestrator | changed: [testbed-node-0] => (item=users) 2026-03-25 02:30:54.756903 | orchestrator | changed: [testbed-node-1] => (item=users) 2026-03-25 
02:30:54.756909 | orchestrator | changed: [testbed-node-2] => (item=users) 2026-03-25 02:30:54.756915 | orchestrator | changed: [testbed-node-0] => (item=rules) 2026-03-25 02:30:54.756920 | orchestrator | changed: [testbed-node-1] => (item=rules) 2026-03-25 02:30:54.756925 | orchestrator | changed: [testbed-node-2] => (item=rules) 2026-03-25 02:30:54.756931 | orchestrator | 2026-03-25 02:30:54.756936 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2026-03-25 02:30:54.756942 | orchestrator | Wednesday 25 March 2026 02:30:51 +0000 (0:00:01.478) 0:00:12.569 ******* 2026-03-25 02:30:54.756948 | orchestrator | changed: [testbed-node-0] 2026-03-25 02:30:54.756954 | orchestrator | changed: [testbed-node-1] 2026-03-25 02:30:54.756959 | orchestrator | changed: [testbed-node-2] 2026-03-25 02:30:54.756965 | orchestrator | 2026-03-25 02:30:54.756970 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2026-03-25 02:30:54.756975 | orchestrator | Wednesday 25 March 2026 02:30:52 +0000 (0:00:00.913) 0:00:13.483 ******* 2026-03-25 02:30:54.756981 | orchestrator | ok: [testbed-node-0] 2026-03-25 02:30:54.756987 | orchestrator | ok: [testbed-node-1] 2026-03-25 02:30:54.756993 | orchestrator | ok: [testbed-node-2] 2026-03-25 02:30:54.757000 | orchestrator | 2026-03-25 02:30:54.757006 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2026-03-25 02:30:54.757012 | orchestrator | Wednesday 25 March 2026 02:30:54 +0000 (0:00:01.396) 0:00:14.879 ******* 2026-03-25 02:30:54.757019 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-25 02:30:54.757043 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-25 02:30:54.757050 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-25 02:30:54.757057 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'__omit_place_holder__b60b1eccf422e1d2312a6af348dd8d9f1b0131dc', '__omit_place_holder__b60b1eccf422e1d2312a6af348dd8d9f1b0131dc'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-25 02:30:54.757069 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-25 02:30:54.757076 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:30:54.757117 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-25 02:30:54.757124 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 
'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-25 02:30:54.757130 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__b60b1eccf422e1d2312a6af348dd8d9f1b0131dc', '__omit_place_holder__b60b1eccf422e1d2312a6af348dd8d9f1b0131dc'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-25 02:30:54.757137 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:30:54.757148 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-25 02:30:57.641173 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-25 02:30:57.641332 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-25 02:30:57.641365 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__b60b1eccf422e1d2312a6af348dd8d9f1b0131dc', '__omit_place_holder__b60b1eccf422e1d2312a6af348dd8d9f1b0131dc'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-25 02:30:57.641382 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:30:57.641399 | orchestrator | 2026-03-25 02:30:57.641415 | orchestrator | TASK [loadbalancer : Copying checks for services 
which are enabled] ************ 2026-03-25 02:30:57.641431 | orchestrator | Wednesday 25 March 2026 02:30:54 +0000 (0:00:00.699) 0:00:15.579 ******* 2026-03-25 02:30:57.641526 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-25 02:30:57.641545 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-25 02:30:57.641560 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-25 02:30:57.641638 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-25 02:30:57.641656 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-25 02:30:57.641681 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__b60b1eccf422e1d2312a6af348dd8d9f1b0131dc', 
'__omit_place_holder__b60b1eccf422e1d2312a6af348dd8d9f1b0131dc'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-25 02:30:57.641695 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-25 02:30:57.641709 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-25 02:30:57.641723 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__b60b1eccf422e1d2312a6af348dd8d9f1b0131dc', 
'__omit_place_holder__b60b1eccf422e1d2312a6af348dd8d9f1b0131dc'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-25 02:30:57.641771 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-25 02:31:06.169964 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-25 02:31:06.170104 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__b60b1eccf422e1d2312a6af348dd8d9f1b0131dc', 
'__omit_place_holder__b60b1eccf422e1d2312a6af348dd8d9f1b0131dc'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-25 02:31:06.170114 | orchestrator | 2026-03-25 02:31:06.170122 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2026-03-25 02:31:06.170128 | orchestrator | Wednesday 25 March 2026 02:30:57 +0000 (0:00:02.878) 0:00:18.457 ******* 2026-03-25 02:31:06.170135 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-25 02:31:06.170141 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-25 02:31:06.170147 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': 
{'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-25 02:31:06.170171 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-25 02:31:06.170200 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-25 02:31:06.170207 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-25 02:31:06.170213 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-25 02:31:06.170219 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-25 02:31:06.170224 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 
'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-25 02:31:06.170230 | orchestrator | 2026-03-25 02:31:06.170235 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2026-03-25 02:31:06.170241 | orchestrator | Wednesday 25 March 2026 02:31:00 +0000 (0:00:03.062) 0:00:21.520 ******* 2026-03-25 02:31:06.170254 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-03-25 02:31:06.170260 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-03-25 02:31:06.170266 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-03-25 02:31:06.170271 | orchestrator | 2026-03-25 02:31:06.170277 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2026-03-25 02:31:06.170282 | orchestrator | Wednesday 25 March 2026 02:31:02 +0000 (0:00:01.892) 0:00:23.412 ******* 2026-03-25 02:31:06.170288 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-03-25 02:31:06.170293 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-03-25 02:31:06.170299 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-03-25 02:31:06.170304 | orchestrator | 2026-03-25 02:31:06.170309 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2026-03-25 02:31:06.170315 | orchestrator | Wednesday 25 March 2026 02:31:05 +0000 
(0:00:02.973) 0:00:26.385 ******* 2026-03-25 02:31:06.170320 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:31:06.170327 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:31:06.170333 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:31:06.170338 | orchestrator | 2026-03-25 02:31:06.170348 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2026-03-25 02:31:17.827155 | orchestrator | Wednesday 25 March 2026 02:31:06 +0000 (0:00:00.604) 0:00:26.990 ******* 2026-03-25 02:31:17.827261 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-03-25 02:31:17.827287 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-03-25 02:31:17.827296 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-03-25 02:31:17.827306 | orchestrator | 2026-03-25 02:31:17.827315 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2026-03-25 02:31:17.827325 | orchestrator | Wednesday 25 March 2026 02:31:08 +0000 (0:00:02.147) 0:00:29.137 ******* 2026-03-25 02:31:17.827334 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-03-25 02:31:17.827343 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-03-25 02:31:17.827352 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-03-25 02:31:17.827361 | orchestrator | 2026-03-25 02:31:17.827369 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2026-03-25 02:31:17.827378 | orchestrator | Wednesday 25 March 2026 
02:31:10 +0000 (0:00:02.163) 0:00:31.301 ******* 2026-03-25 02:31:17.827388 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2026-03-25 02:31:17.827397 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2026-03-25 02:31:17.827405 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2026-03-25 02:31:17.827414 | orchestrator | 2026-03-25 02:31:17.827434 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2026-03-25 02:31:17.827497 | orchestrator | Wednesday 25 March 2026 02:31:11 +0000 (0:00:01.447) 0:00:32.749 ******* 2026-03-25 02:31:17.827514 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2026-03-25 02:31:17.827528 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2026-03-25 02:31:17.827542 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2026-03-25 02:31:17.827556 | orchestrator | 2026-03-25 02:31:17.827597 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2026-03-25 02:31:17.827613 | orchestrator | Wednesday 25 March 2026 02:31:13 +0000 (0:00:01.418) 0:00:34.167 ******* 2026-03-25 02:31:17.827628 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-25 02:31:17.827642 | orchestrator | 2026-03-25 02:31:17.827656 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] *** 2026-03-25 02:31:17.827671 | orchestrator | Wednesday 25 March 2026 02:31:13 +0000 (0:00:00.605) 0:00:34.772 ******* 2026-03-25 02:31:17.827687 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-25 02:31:17.827707 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-25 02:31:17.827731 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-25 02:31:17.827771 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-25 02:31:17.827790 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-25 02:31:17.827806 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-25 02:31:17.827835 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-25 02:31:17.827852 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-25 02:31:17.827867 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-25 02:31:17.827882 | orchestrator | 2026-03-25 02:31:17.827897 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2026-03-25 02:31:17.827912 | orchestrator | Wednesday 25 March 2026 02:31:17 +0000 (0:00:03.234) 0:00:38.006 ******* 2026-03-25 02:31:17.827945 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-25 02:31:18.654185 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-25 02:31:18.654313 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-25 02:31:18.654371 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:31:18.654392 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-25 02:31:18.654408 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-25 02:31:18.654423 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-25 02:31:18.654437 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:31:18.654513 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-25 02:31:18.654574 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-25 02:31:18.654594 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-25 02:31:18.654623 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:31:18.654638 | orchestrator | 2026-03-25 02:31:18.654654 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] *** 2026-03-25 
02:31:18.654670 | orchestrator | Wednesday 25 March 2026 02:31:17 +0000 (0:00:00.642) 0:00:38.649 ******* 2026-03-25 02:31:18.654685 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-25 02:31:18.654701 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-25 02:31:18.654716 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-25 02:31:18.654731 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:31:18.654745 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-25 02:31:18.654778 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-25 02:31:19.596153 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-25 02:31:19.596303 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:31:19.596346 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-25 02:31:19.596369 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-25 02:31:19.596390 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-25 02:31:19.596407 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:31:19.596426 | orchestrator | 2026-03-25 02:31:19.596514 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-03-25 02:31:19.596538 | orchestrator | Wednesday 25 March 2026 02:31:18 +0000 (0:00:00.828) 0:00:39.478 ******* 2026-03-25 02:31:19.596558 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-25 02:31:19.596579 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-25 02:31:19.596624 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-25 02:31:19.596649 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:31:19.596663 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-25 02:31:19.596676 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-25 02:31:19.596689 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-25 02:31:19.596702 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:31:19.596714 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-25 02:31:19.596746 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-25 02:31:19.596765 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-25 02:31:19.596793 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:31:21.093708 | orchestrator | 2026-03-25 02:31:21.093823 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-03-25 02:31:21.093847 | orchestrator | Wednesday 25 March 2026 02:31:19 +0000 (0:00:00.930) 0:00:40.408 ******* 2026-03-25 02:31:21.093866 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-25 02:31:21.093884 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-25 02:31:21.093900 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-25 02:31:21.093916 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:31:21.093931 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-25 02:31:21.093947 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-25 02:31:21.093990 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-25 02:31:21.094106 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:31:21.094154 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-25 02:31:21.094171 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-25 02:31:21.094186 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-25 02:31:21.094200 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:31:21.094214 | orchestrator | 2026-03-25 02:31:21.094229 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-03-25 02:31:21.094244 | orchestrator | Wednesday 25 March 2026 02:31:20 +0000 (0:00:00.630) 0:00:41.038 ******* 2026-03-25 02:31:21.094261 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-25 02:31:21.094278 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-25 02:31:21.094311 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-25 02:31:21.094328 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:31:21.094366 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-25 02:31:22.215713 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-25 02:31:22.215845 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-25 02:31:22.215874 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:31:22.215896 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-25 02:31:22.215915 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-25 02:31:22.215933 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-25 02:31:22.215986 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:31:22.216007 | orchestrator | 2026-03-25 02:31:22.216026 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] ******* 2026-03-25 02:31:22.216044 | orchestrator | Wednesday 25 March 2026 02:31:21 +0000 (0:00:00.878) 0:00:41.917 ******* 2026-03-25 02:31:22.216078 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 
 2026-03-25 02:31:22.216124 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-25 02:31:22.216144 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-25 02:31:22.216160 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:31:22.216178 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  
2026-03-25 02:31:22.216196 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-25 02:31:22.216229 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-25 02:31:22.216271 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:31:22.216313 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  
2026-03-25 02:31:22.216348 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-25 02:31:23.635224 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-25 02:31:23.635356 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:31:23.635375 | orchestrator | 2026-03-25 02:31:23.635389 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS certificate] *** 2026-03-25 02:31:23.635402 | orchestrator | Wednesday 25 March 2026 02:31:22 +0000 (0:00:01.116) 0:00:43.034 ******* 2026-03-25 02:31:23.635415 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-25 02:31:23.635429 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-25 02:31:23.635502 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-25 02:31:23.635516 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:31:23.635529 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-25 02:31:23.635604 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-25 02:31:23.635642 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-25 02:31:23.635656 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:31:23.635669 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-25 02:31:23.635682 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-25 02:31:23.635705 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-25 02:31:23.635726 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:31:23.635746 | orchestrator | 2026-03-25 02:31:23.635766 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS key] **** 2026-03-25 02:31:23.635785 | orchestrator | Wednesday 25 March 2026 02:31:22 +0000 (0:00:00.620) 0:00:43.654 ******* 2026-03-25 02:31:23.635807 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-25 02:31:23.635830 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-25 02:31:23.635871 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-25 02:31:30.248614 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:31:30.248747 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-25 02:31:30.248772 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-25 02:31:30.248814 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-25 02:31:30.248828 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:31:30.248842 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-25 02:31:30.248854 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-25 02:31:30.248884 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-25 02:31:30.248898 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:31:30.248911 | orchestrator | 2026-03-25 02:31:30.248924 | orchestrator | 
TASK [loadbalancer : Copying over haproxy start script] ************************ 2026-03-25 02:31:30.248937 | orchestrator | Wednesday 25 March 2026 02:31:23 +0000 (0:00:00.802) 0:00:44.456 ******* 2026-03-25 02:31:30.248949 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-03-25 02:31:30.248984 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-03-25 02:31:30.248997 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-03-25 02:31:30.249008 | orchestrator | 2026-03-25 02:31:30.249019 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2026-03-25 02:31:30.249030 | orchestrator | Wednesday 25 March 2026 02:31:25 +0000 (0:00:01.742) 0:00:46.199 ******* 2026-03-25 02:31:30.249043 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-03-25 02:31:30.249055 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-03-25 02:31:30.249066 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-03-25 02:31:30.249077 | orchestrator | 2026-03-25 02:31:30.249102 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2026-03-25 02:31:30.249114 | orchestrator | Wednesday 25 March 2026 02:31:27 +0000 (0:00:01.701) 0:00:47.901 ******* 2026-03-25 02:31:30.249126 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-03-25 02:31:30.249138 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-03-25 02:31:30.249150 | orchestrator | skipping: [testbed-node-2] => 
(item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-03-25 02:31:30.249162 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-25 02:31:30.249174 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:31:30.249187 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-25 02:31:30.249199 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:31:30.249212 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-25 02:31:30.249223 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:31:30.249236 | orchestrator | 2026-03-25 02:31:30.249247 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] **************************** 2026-03-25 02:31:30.249259 | orchestrator | Wednesday 25 March 2026 02:31:27 +0000 (0:00:00.862) 0:00:48.764 ******* 2026-03-25 02:31:30.249273 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-25 02:31:30.249287 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-25 02:31:30.249307 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-25 02:31:30.249332 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-25 02:31:34.700081 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-25 02:31:34.700182 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-25 02:31:34.700196 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-25 02:31:34.700207 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-25 02:31:34.700216 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-25 02:31:34.700226 | orchestrator | 2026-03-25 02:31:34.700236 | orchestrator | TASK [include_role : aodh] ***************************************************** 2026-03-25 02:31:34.700261 | orchestrator | Wednesday 25 March 2026 02:31:30 +0000 (0:00:02.305) 0:00:51.069 ******* 2026-03-25 02:31:34.700271 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-25 02:31:34.700280 | orchestrator | 2026-03-25 02:31:34.700289 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2026-03-25 02:31:34.700297 | orchestrator | Wednesday 25 March 2026 02:31:31 +0000 (0:00:00.905) 0:00:51.974 ******* 2026-03-25 02:31:34.700324 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-25 02:31:34.700356 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-25 02:31:34.700370 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-25 02:31:34.700385 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-25 02:31:34.700400 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-25 02:31:34.700421 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-25 02:31:34.700437 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-25 02:31:34.700543 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-25 02:31:35.345358 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-25 02:31:35.345501 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-25 02:31:35.345515 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-25 02:31:35.345537 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-25 02:31:35.345545 | orchestrator | 2026-03-25 02:31:35.345553 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 
2026-03-25 02:31:35.345561 | orchestrator | Wednesday 25 March 2026 02:31:34 +0000 (0:00:03.539) 0:00:55.514 ******* 2026-03-25 02:31:35.345569 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-03-25 02:31:35.345633 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-25 02:31:35.345648 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-25 02:31:35.345659 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-25 02:31:35.345670 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:31:35.345682 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-03-25 02:31:35.345699 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': 
{'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-25 02:31:35.345717 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-25 02:31:35.345729 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-25 02:31:35.345736 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:31:35.345751 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-03-25 02:31:44.495852 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-25 02:31:44.495957 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  
2026-03-25 02:31:44.495968 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-25 02:31:44.496000 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:31:44.496009 | orchestrator | 2026-03-25 02:31:44.496018 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2026-03-25 02:31:44.496027 | orchestrator | Wednesday 25 March 2026 02:31:35 +0000 (0:00:00.650) 0:00:56.164 ******* 2026-03-25 02:31:44.496035 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-03-25 02:31:44.496045 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-03-25 02:31:44.496052 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:31:44.496076 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-03-25 02:31:44.496089 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-03-25 02:31:44.496098 | 
orchestrator | skipping: [testbed-node-1] 2026-03-25 02:31:44.496107 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-03-25 02:31:44.496116 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-03-25 02:31:44.496124 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:31:44.496132 | orchestrator | 2026-03-25 02:31:44.496139 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2026-03-25 02:31:44.496146 | orchestrator | Wednesday 25 March 2026 02:31:36 +0000 (0:00:01.158) 0:00:57.323 ******* 2026-03-25 02:31:44.496153 | orchestrator | changed: [testbed-node-0] 2026-03-25 02:31:44.496159 | orchestrator | changed: [testbed-node-1] 2026-03-25 02:31:44.496165 | orchestrator | changed: [testbed-node-2] 2026-03-25 02:31:44.496172 | orchestrator | 2026-03-25 02:31:44.496179 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2026-03-25 02:31:44.496186 | orchestrator | Wednesday 25 March 2026 02:31:37 +0000 (0:00:01.320) 0:00:58.644 ******* 2026-03-25 02:31:44.496192 | orchestrator | changed: [testbed-node-0] 2026-03-25 02:31:44.496199 | orchestrator | changed: [testbed-node-1] 2026-03-25 02:31:44.496205 | orchestrator | changed: [testbed-node-2] 2026-03-25 02:31:44.496212 | orchestrator | 2026-03-25 02:31:44.496218 | orchestrator | TASK [include_role : barbican] ************************************************* 2026-03-25 02:31:44.496224 | orchestrator | Wednesday 25 March 2026 02:31:39 +0000 (0:00:02.170) 0:01:00.815 ******* 2026-03-25 02:31:44.496230 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-25 02:31:44.496235 | 
orchestrator | 2026-03-25 02:31:44.496260 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2026-03-25 02:31:44.496266 | orchestrator | Wednesday 25 March 2026 02:31:40 +0000 (0:00:00.729) 0:01:01.544 ******* 2026-03-25 02:31:44.496276 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-25 02:31:44.496294 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-25 02:31:44.496307 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-25 02:31:44.496314 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-25 02:31:44.496320 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-25 02:31:44.496334 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-25 02:31:45.204253 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-25 02:31:45.204386 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-25 02:31:45.204405 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-25 02:31:45.204415 | orchestrator | 2026-03-25 02:31:45.204425 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2026-03-25 02:31:45.204434 | orchestrator | Wednesday 25 March 2026 02:31:44 +0000 (0:00:03.769) 0:01:05.313 ******* 2026-03-25 02:31:45.204480 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-25 02:31:45.204489 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-25 02:31:45.204539 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-25 02:31:45.204548 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:31:45.204562 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-25 02:31:45.204574 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-25 02:31:45.204587 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-25 02:31:45.204601 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:31:45.204621 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-25 02:31:45.204671 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-25 02:31:55.196044 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-25 02:31:55.196144 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:31:55.196156 | orchestrator | 2026-03-25 02:31:55.196165 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2026-03-25 02:31:55.196173 | orchestrator | Wednesday 25 March 2026 02:31:45 +0000 (0:00:00.712) 0:01:06.026 ******* 2026-03-25 02:31:55.196194 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-03-25 02:31:55.196204 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-03-25 02:31:55.196212 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:31:55.196219 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-03-25 02:31:55.196226 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-03-25 02:31:55.196233 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:31:55.196240 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-03-25 02:31:55.196247 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-03-25 02:31:55.196254 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:31:55.196260 | orchestrator | 2026-03-25 02:31:55.196267 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2026-03-25 02:31:55.196274 | orchestrator | Wednesday 25 March 2026 02:31:46 +0000 (0:00:00.875) 0:01:06.901 ******* 2026-03-25 02:31:55.196281 | orchestrator | changed: [testbed-node-0] 2026-03-25 02:31:55.196288 | orchestrator | changed: [testbed-node-1] 2026-03-25 02:31:55.196295 | orchestrator | changed: [testbed-node-2] 2026-03-25 02:31:55.196301 | orchestrator | 2026-03-25 02:31:55.196308 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2026-03-25 02:31:55.196315 | orchestrator | Wednesday 25 March 2026 02:31:47 +0000 (0:00:01.568) 0:01:08.470 ******* 2026-03-25 02:31:55.196342 | orchestrator | changed: [testbed-node-0] 2026-03-25 02:31:55.196349 | orchestrator | changed: [testbed-node-1] 2026-03-25 02:31:55.196356 | orchestrator | changed: [testbed-node-2] 2026-03-25 02:31:55.196362 | orchestrator | 2026-03-25 02:31:55.196369 | orchestrator | TASK [include_role : blazar] *************************************************** 2026-03-25 02:31:55.196376 | orchestrator | 
Wednesday 25 March 2026 02:31:49 +0000 (0:00:02.067) 0:01:10.537 ******* 2026-03-25 02:31:55.196382 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:31:55.196389 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:31:55.196395 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:31:55.196402 | orchestrator | 2026-03-25 02:31:55.196408 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2026-03-25 02:31:55.196415 | orchestrator | Wednesday 25 March 2026 02:31:50 +0000 (0:00:00.342) 0:01:10.880 ******* 2026-03-25 02:31:55.196421 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-25 02:31:55.196428 | orchestrator | 2026-03-25 02:31:55.196435 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2026-03-25 02:31:55.196481 | orchestrator | Wednesday 25 March 2026 02:31:50 +0000 (0:00:00.767) 0:01:11.647 ******* 2026-03-25 02:31:55.196514 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-03-25 02:31:55.196535 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': 
{'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-03-25 02:31:55.196543 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-03-25 02:31:55.196550 | orchestrator | 2026-03-25 02:31:55.196557 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2026-03-25 02:31:55.196566 | orchestrator | Wednesday 25 March 2026 02:31:53 +0000 (0:00:02.912) 0:01:14.560 ******* 2026-03-25 02:31:55.196581 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 
'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-03-25 02:31:55.196589 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:31:55.196598 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-03-25 02:31:55.196606 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:31:55.196619 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 
'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-03-25 02:32:03.776186 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:32:03.776306 | orchestrator | 2026-03-25 02:32:03.776328 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2026-03-25 02:32:03.776345 | orchestrator | Wednesday 25 March 2026 02:31:55 +0000 (0:00:01.454) 0:01:16.015 ******* 2026-03-25 02:32:03.776379 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-03-25 02:32:03.776397 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-03-25 02:32:03.776412 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:32:03.776546 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-03-25 02:32:03.776568 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-03-25 02:32:03.776581 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:32:03.776595 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-03-25 02:32:03.776609 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-03-25 02:32:03.776623 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:32:03.776638 | orchestrator | 2026-03-25 02:32:03.776652 
| orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2026-03-25 02:32:03.776665 | orchestrator | Wednesday 25 March 2026 02:31:57 +0000 (0:00:02.167) 0:01:18.182 ******* 2026-03-25 02:32:03.776679 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:32:03.776693 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:32:03.776707 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:32:03.776721 | orchestrator | 2026-03-25 02:32:03.776742 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2026-03-25 02:32:03.776758 | orchestrator | Wednesday 25 March 2026 02:31:57 +0000 (0:00:00.477) 0:01:18.659 ******* 2026-03-25 02:32:03.776772 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:32:03.776786 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:32:03.776800 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:32:03.776813 | orchestrator | 2026-03-25 02:32:03.776827 | orchestrator | TASK [include_role : cinder] *************************************************** 2026-03-25 02:32:03.776841 | orchestrator | Wednesday 25 March 2026 02:31:59 +0000 (0:00:01.429) 0:01:20.089 ******* 2026-03-25 02:32:03.776855 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-25 02:32:03.776869 | orchestrator | 2026-03-25 02:32:03.776883 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2026-03-25 02:32:03.776896 | orchestrator | Wednesday 25 March 2026 02:32:00 +0000 (0:00:01.011) 0:01:21.100 ******* 2026-03-25 02:32:03.776948 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-25 02:32:03.776987 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-25 02:32:03.777002 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-25 
02:32:03.777016 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-25 02:32:03.777029 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-25 02:32:03.777052 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-25 02:32:04.532768 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-25 02:32:04.532888 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-25 02:32:04.532904 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-25 02:32:04.532915 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-25 02:32:04.532925 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 
'', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-25 02:32:04.532955 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-25 02:32:04.532974 | orchestrator | 2026-03-25 02:32:04.532994 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2026-03-25 02:32:04.533017 | orchestrator | Wednesday 25 March 2026 02:32:03 +0000 (0:00:03.593) 0:01:24.694 ******* 2026-03-25 02:32:04.533043 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 
'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-25 02:32:04.533060 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-25 02:32:04.533076 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-25 02:32:04.533093 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-25 02:32:04.533123 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-25 02:32:11.042775 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:32:11.042926 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 
'timeout': '30'}}})  2026-03-25 02:32:11.042953 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-25 02:32:11.042967 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-25 02:32:11.042978 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:32:11.042991 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-25 02:32:11.043003 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-25 02:32:11.043072 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-25 02:32:11.043086 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-25 02:32:11.043097 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:32:11.043109 | orchestrator | 2026-03-25 02:32:11.043120 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2026-03-25 02:32:11.043132 | orchestrator | Wednesday 25 March 2026 02:32:04 +0000 (0:00:00.774) 0:01:25.469 ******* 2026-03-25 02:32:11.043145 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-25 02:32:11.043158 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-25 02:32:11.043169 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:32:11.043185 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-25 02:32:11.043201 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-25 02:32:11.043212 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:32:11.043223 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-25 02:32:11.043234 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-25 02:32:11.043245 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:32:11.043256 | orchestrator | 2026-03-25 02:32:11.043266 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2026-03-25 02:32:11.043277 | orchestrator | Wednesday 25 March 2026 02:32:05 +0000 (0:00:01.227) 0:01:26.696 ******* 2026-03-25 02:32:11.043288 | orchestrator | changed: [testbed-node-0] 2026-03-25 02:32:11.043308 | orchestrator | changed: [testbed-node-1] 2026-03-25 02:32:11.043319 | orchestrator | changed: [testbed-node-2] 2026-03-25 02:32:11.043329 | orchestrator | 2026-03-25 02:32:11.043340 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2026-03-25 02:32:11.043351 | orchestrator | Wednesday 25 March 2026 02:32:07 +0000 (0:00:01.346) 0:01:28.043 ******* 2026-03-25 02:32:11.043362 | orchestrator | changed: [testbed-node-0] 2026-03-25 02:32:11.043373 | orchestrator | changed: [testbed-node-1] 2026-03-25 02:32:11.043384 | orchestrator | changed: [testbed-node-2] 2026-03-25 02:32:11.043394 | orchestrator | 2026-03-25 02:32:11.043405 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2026-03-25 02:32:11.043416 | orchestrator | 
Wednesday 25 March 2026 02:32:09 +0000 (0:00:02.090) 0:01:30.133 ******* 2026-03-25 02:32:11.043426 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:32:11.043530 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:32:11.043546 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:32:11.043557 | orchestrator | 2026-03-25 02:32:11.043568 | orchestrator | TASK [include_role : cyborg] *************************************************** 2026-03-25 02:32:11.043579 | orchestrator | Wednesday 25 March 2026 02:32:09 +0000 (0:00:00.323) 0:01:30.457 ******* 2026-03-25 02:32:11.043589 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:32:11.043600 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:32:11.043610 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:32:11.043621 | orchestrator | 2026-03-25 02:32:11.043631 | orchestrator | TASK [include_role : designate] ************************************************ 2026-03-25 02:32:11.043642 | orchestrator | Wednesday 25 March 2026 02:32:09 +0000 (0:00:00.348) 0:01:30.806 ******* 2026-03-25 02:32:11.043653 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-25 02:32:11.043663 | orchestrator | 2026-03-25 02:32:11.043674 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2026-03-25 02:32:11.043694 | orchestrator | Wednesday 25 March 2026 02:32:11 +0000 (0:00:01.057) 0:01:31.863 ******* 2026-03-25 02:32:14.525538 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-25 02:32:14.525648 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-25 02:32:14.525665 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-25 02:32:14.525701 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-25 02:32:14.525711 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-25 02:32:14.525747 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  
2026-03-25 02:32:14.525769 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-25 02:32:14.525779 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-25 02:32:14.525789 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  
2026-03-25 02:32:14.525807 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-25 02:32:14.525818 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-25 02:32:14.525829 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-25 02:32:14.525852 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-25 02:32:15.688339 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-25 02:32:15.688488 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-25 02:32:15.688531 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-25 02:32:15.688543 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-25 02:32:15.688554 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-25 02:32:15.688580 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-25 02:32:15.688612 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-25 02:32:15.688623 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': 
'30'}}})  2026-03-25 02:32:15.688642 | orchestrator | 2026-03-25 02:32:15.688654 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2026-03-25 02:32:15.688664 | orchestrator | Wednesday 25 March 2026 02:32:14 +0000 (0:00:03.914) 0:01:35.777 ******* 2026-03-25 02:32:15.688674 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-25 02:32:15.688685 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-25 02:32:15.688695 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-25 02:32:15.688706 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-25 02:32:15.688723 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-25 02:32:16.268049 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': 
{'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-25 02:32:16.268160 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-25 02:32:16.268173 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:32:16.268186 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 
'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-25 02:32:16.268197 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-25 02:32:16.268649 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-25 02:32:16.268673 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-25 02:32:16.268696 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-25 02:32:16.268712 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-25 02:32:16.268720 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-25 02:32:16.268726 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:32:16.268733 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-25 02:32:16.268739 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-25 02:32:16.268745 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-25 02:32:16.268760 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-25 02:32:26.732631 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-25 02:32:26.732754 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 
'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-25 02:32:26.732774 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-25 02:32:26.732790 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:32:26.732806 | orchestrator | 2026-03-25 02:32:26.732822 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2026-03-25 02:32:26.732838 | orchestrator | Wednesday 25 March 2026 02:32:16 +0000 (0:00:01.311) 0:01:37.088 ******* 2026-03-25 02:32:26.732853 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-03-25 02:32:26.732870 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-03-25 02:32:26.732884 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:32:26.732899 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 
'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-03-25 02:32:26.732914 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-03-25 02:32:26.732928 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:32:26.732942 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-03-25 02:32:26.732987 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-03-25 02:32:26.733001 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:32:26.733010 | orchestrator | 2026-03-25 02:32:26.733018 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2026-03-25 02:32:26.733026 | orchestrator | Wednesday 25 March 2026 02:32:17 +0000 (0:00:01.404) 0:01:38.493 ******* 2026-03-25 02:32:26.733035 | orchestrator | changed: [testbed-node-0] 2026-03-25 02:32:26.733043 | orchestrator | changed: [testbed-node-1] 2026-03-25 02:32:26.733051 | orchestrator | changed: [testbed-node-2] 2026-03-25 02:32:26.733058 | orchestrator | 2026-03-25 02:32:26.733066 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2026-03-25 02:32:26.733074 | orchestrator | Wednesday 25 March 2026 02:32:18 +0000 (0:00:01.291) 0:01:39.784 ******* 2026-03-25 02:32:26.733081 | orchestrator | changed: [testbed-node-0] 2026-03-25 02:32:26.733089 | orchestrator | changed: [testbed-node-1] 2026-03-25 02:32:26.733096 | orchestrator | changed: [testbed-node-2] 2026-03-25 02:32:26.733104 | 
orchestrator | 2026-03-25 02:32:26.733112 | orchestrator | TASK [include_role : etcd] ***************************************************** 2026-03-25 02:32:26.733120 | orchestrator | Wednesday 25 March 2026 02:32:21 +0000 (0:00:02.063) 0:01:41.847 ******* 2026-03-25 02:32:26.733146 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:32:26.733159 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:32:26.733172 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:32:26.733185 | orchestrator | 2026-03-25 02:32:26.733199 | orchestrator | TASK [include_role : glance] *************************************************** 2026-03-25 02:32:26.733213 | orchestrator | Wednesday 25 March 2026 02:32:21 +0000 (0:00:00.352) 0:01:42.200 ******* 2026-03-25 02:32:26.733227 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-25 02:32:26.733241 | orchestrator | 2026-03-25 02:32:26.733254 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2026-03-25 02:32:26.733267 | orchestrator | Wednesday 25 March 2026 02:32:22 +0000 (0:00:01.200) 0:01:43.400 ******* 2026-03-25 02:32:26.733291 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-25 02:32:26.733310 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required 
ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-25 02:32:26.733352 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 
fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-25 02:32:30.116676 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 
'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-25 02:32:30.116869 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 
'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-25 02:32:30.116920 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl 
verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-25 02:32:30.116945 | orchestrator | 2026-03-25 02:32:30.116958 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2026-03-25 02:32:30.116971 | orchestrator | Wednesday 25 March 2026 02:32:26 +0000 (0:00:04.275) 0:01:47.676 ******* 2026-03-25 02:32:30.116990 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': 
['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-25 02:32:30.117013 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file 
ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-25 02:32:34.264728 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:32:34.264812 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-25 
02:32:34.264833 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-25 02:32:34.264855 | orchestrator | skipping: [testbed-node-1] 
2026-03-25 02:32:34.264872 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-25 02:32:34.264881 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-25 02:32:34.264895 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:32:34.264899 | orchestrator | 2026-03-25 02:32:34.264904 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2026-03-25 02:32:34.264909 | orchestrator | 
Wednesday 25 March 2026 02:32:30 +0000 (0:00:03.382) 0:01:51.058 ******* 2026-03-25 02:32:34.264914 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-25 02:32:34.264924 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-25 02:32:43.310069 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:32:43.310184 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-25 02:32:43.310204 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-25 02:32:43.310218 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:32:43.310230 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-25 02:32:43.310258 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-25 02:32:43.310270 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:32:43.310282 | orchestrator | 2026-03-25 02:32:43.310294 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2026-03-25 02:32:43.310306 | orchestrator | Wednesday 25 March 2026 02:32:34 +0000 (0:00:04.028) 0:01:55.087 ******* 2026-03-25 02:32:43.310346 | orchestrator | changed: [testbed-node-0] 2026-03-25 02:32:43.310358 | orchestrator 
| changed: [testbed-node-1] 2026-03-25 02:32:43.310368 | orchestrator | changed: [testbed-node-2] 2026-03-25 02:32:43.310379 | orchestrator | 2026-03-25 02:32:43.310390 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2026-03-25 02:32:43.310401 | orchestrator | Wednesday 25 March 2026 02:32:35 +0000 (0:00:01.355) 0:01:56.442 ******* 2026-03-25 02:32:43.310411 | orchestrator | changed: [testbed-node-0] 2026-03-25 02:32:43.310422 | orchestrator | changed: [testbed-node-1] 2026-03-25 02:32:43.310432 | orchestrator | changed: [testbed-node-2] 2026-03-25 02:32:43.310469 | orchestrator | 2026-03-25 02:32:43.310480 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2026-03-25 02:32:43.310491 | orchestrator | Wednesday 25 March 2026 02:32:37 +0000 (0:00:02.096) 0:01:58.539 ******* 2026-03-25 02:32:43.310501 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:32:43.310512 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:32:43.310522 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:32:43.310533 | orchestrator | 2026-03-25 02:32:43.310544 | orchestrator | TASK [include_role : grafana] ************************************************** 2026-03-25 02:32:43.310557 | orchestrator | Wednesday 25 March 2026 02:32:38 +0000 (0:00:00.338) 0:01:58.877 ******* 2026-03-25 02:32:43.310569 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-25 02:32:43.310581 | orchestrator | 2026-03-25 02:32:43.310593 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2026-03-25 02:32:43.310605 | orchestrator | Wednesday 25 March 2026 02:32:39 +0000 (0:00:01.295) 0:02:00.173 ******* 2026-03-25 02:32:43.310641 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-25 02:32:43.310668 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-25 02:32:43.310688 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-25 02:32:43.310707 | 
orchestrator | 2026-03-25 02:32:43.310726 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2026-03-25 02:32:43.310760 | orchestrator | Wednesday 25 March 2026 02:32:42 +0000 (0:00:03.304) 0:02:03.477 ******* 2026-03-25 02:32:43.310782 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-25 02:32:43.310801 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:32:43.310823 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-25 02:32:43.310842 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:32:43.310862 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 
'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-25 02:32:43.310967 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:32:43.310989 | orchestrator | 2026-03-25 02:32:43.311000 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2026-03-25 02:32:43.311010 | orchestrator | Wednesday 25 March 2026 02:32:43 +0000 (0:00:00.434) 0:02:03.912 ******* 2026-03-25 02:32:43.311022 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-03-25 02:32:43.311047 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-03-25 02:32:52.560963 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:32:52.561065 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-03-25 02:32:52.561077 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-03-25 02:32:52.561101 | orchestrator | skipping: 
[testbed-node-1] 2026-03-25 02:32:52.561108 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-03-25 02:32:52.561123 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-03-25 02:32:52.561149 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:32:52.561156 | orchestrator | 2026-03-25 02:32:52.561164 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2026-03-25 02:32:52.561172 | orchestrator | Wednesday 25 March 2026 02:32:44 +0000 (0:00:00.975) 0:02:04.888 ******* 2026-03-25 02:32:52.561178 | orchestrator | changed: [testbed-node-0] 2026-03-25 02:32:52.561184 | orchestrator | changed: [testbed-node-1] 2026-03-25 02:32:52.561190 | orchestrator | changed: [testbed-node-2] 2026-03-25 02:32:52.561196 | orchestrator | 2026-03-25 02:32:52.561203 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2026-03-25 02:32:52.561211 | orchestrator | Wednesday 25 March 2026 02:32:45 +0000 (0:00:01.319) 0:02:06.207 ******* 2026-03-25 02:32:52.561222 | orchestrator | changed: [testbed-node-0] 2026-03-25 02:32:52.561238 | orchestrator | changed: [testbed-node-1] 2026-03-25 02:32:52.561251 | orchestrator | changed: [testbed-node-2] 2026-03-25 02:32:52.561261 | orchestrator | 2026-03-25 02:32:52.561272 | orchestrator | TASK [include_role : heat] ***************************************************** 2026-03-25 02:32:52.561298 | orchestrator | Wednesday 25 March 2026 02:32:47 +0000 (0:00:02.094) 0:02:08.302 ******* 2026-03-25 02:32:52.561310 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:32:52.561322 | orchestrator | skipping: [testbed-node-1] 2026-03-25 
02:32:52.561334 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:32:52.561344 | orchestrator | 2026-03-25 02:32:52.561355 | orchestrator | TASK [include_role : horizon] ************************************************** 2026-03-25 02:32:52.561361 | orchestrator | Wednesday 25 March 2026 02:32:47 +0000 (0:00:00.352) 0:02:08.654 ******* 2026-03-25 02:32:52.561368 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-25 02:32:52.561374 | orchestrator | 2026-03-25 02:32:52.561380 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2026-03-25 02:32:52.561386 | orchestrator | Wednesday 25 March 2026 02:32:49 +0000 (0:00:01.225) 0:02:09.880 ******* 2026-03-25 02:32:52.561417 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': 
['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-25 02:32:52.561469 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-25 02:32:52.561495 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-25 02:32:54.244036 | orchestrator | 2026-03-25 02:32:54.244146 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2026-03-25 02:32:54.244159 | orchestrator | Wednesday 25 March 2026 02:32:52 +0000 (0:00:03.499) 0:02:13.380 ******* 2026-03-25 02:32:54.244196 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 
'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-25 02:32:54.244209 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:32:54.244236 
| orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 
'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-25 02:32:54.244266 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:32:54.244288 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if 
{ path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-25 02:32:54.244296 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:32:54.244303 | orchestrator | 2026-03-25 02:32:54.244311 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2026-03-25 02:32:54.244318 | orchestrator | Wednesday 25 March 2026 02:32:53 +0000 (0:00:00.688) 0:02:14.068 ******* 2026-03-25 02:32:54.244327 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-25 02:32:54.244345 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-25 02:32:54.244355 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-25 02:32:54.244370 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-25 02:33:03.268655 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-03-25 02:33:03.268746 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:33:03.268758 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-25 02:33:03.268767 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-25 02:33:03.268791 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-25 02:33:03.268802 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-25 02:33:03.268817 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-03-25 02:33:03.268830 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:33:03.268840 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-25 02:33:03.268850 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-25 02:33:03.268860 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-25 02:33:03.268893 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-25 02:33:03.268902 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  
2026-03-25 02:33:03.268912 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:33:03.268921 | orchestrator | 2026-03-25 02:33:03.268931 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2026-03-25 02:33:03.268942 | orchestrator | Wednesday 25 March 2026 02:32:54 +0000 (0:00:00.997) 0:02:15.065 ******* 2026-03-25 02:33:03.268951 | orchestrator | changed: [testbed-node-0] 2026-03-25 02:33:03.268960 | orchestrator | changed: [testbed-node-1] 2026-03-25 02:33:03.268968 | orchestrator | changed: [testbed-node-2] 2026-03-25 02:33:03.268977 | orchestrator | 2026-03-25 02:33:03.268986 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2026-03-25 02:33:03.268996 | orchestrator | Wednesday 25 March 2026 02:32:55 +0000 (0:00:01.663) 0:02:16.729 ******* 2026-03-25 02:33:03.269006 | orchestrator | changed: [testbed-node-0] 2026-03-25 02:33:03.269015 | orchestrator | changed: [testbed-node-1] 2026-03-25 02:33:03.269023 | orchestrator | changed: [testbed-node-2] 2026-03-25 02:33:03.269032 | orchestrator | 2026-03-25 02:33:03.269041 | orchestrator | TASK [include_role : influxdb] ************************************************* 2026-03-25 02:33:03.269051 | orchestrator | Wednesday 25 March 2026 02:32:57 +0000 (0:00:02.071) 0:02:18.800 ******* 2026-03-25 02:33:03.269060 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:33:03.269069 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:33:03.269098 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:33:03.269108 | orchestrator | 2026-03-25 02:33:03.269118 | orchestrator | TASK [include_role : ironic] *************************************************** 2026-03-25 02:33:03.269127 | orchestrator | Wednesday 25 March 2026 02:32:58 +0000 (0:00:00.338) 0:02:19.139 ******* 2026-03-25 02:33:03.269137 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:33:03.269146 | orchestrator | skipping: [testbed-node-1] 
2026-03-25 02:33:03.269155 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:33:03.269165 | orchestrator | 2026-03-25 02:33:03.269174 | orchestrator | TASK [include_role : keystone] ************************************************* 2026-03-25 02:33:03.269183 | orchestrator | Wednesday 25 March 2026 02:32:58 +0000 (0:00:00.325) 0:02:19.464 ******* 2026-03-25 02:33:03.269192 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-25 02:33:03.269200 | orchestrator | 2026-03-25 02:33:03.269209 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2026-03-25 02:33:03.269218 | orchestrator | Wednesday 25 March 2026 02:32:59 +0000 (0:00:01.304) 0:02:20.769 ******* 2026-03-25 02:33:03.269241 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-25 02:33:03.269268 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-25 02:33:03.269280 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-25 02:33:03.269292 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 
'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-25 02:33:03.269313 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-25 02:33:03.985800 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-25 02:33:03.985911 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-25 02:33:03.985956 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-25 02:33:03.985969 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-25 02:33:03.985981 | 
orchestrator | 2026-03-25 02:33:03.985994 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2026-03-25 02:33:03.986006 | orchestrator | Wednesday 25 March 2026 02:33:03 +0000 (0:00:03.322) 0:02:24.091 ******* 2026-03-25 02:33:03.986111 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-25 02:33:03.986137 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  
2026-03-25 02:33:03.986150 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-25 02:33:03.986171 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:33:03.986185 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-25 02:33:03.986198 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-25 02:33:03.986210 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-25 02:33:03.986221 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:33:03.986247 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': 
'5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-25 02:33:13.726601 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-25 02:33:13.726693 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-25 02:33:13.726704 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:33:13.726712 | orchestrator | 2026-03-25 02:33:13.726719 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2026-03-25 02:33:13.726727 | orchestrator | Wednesday 25 March 2026 02:33:03 +0000 (0:00:00.713) 0:02:24.804 ******* 2026-03-25 02:33:13.726734 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-25 02:33:13.726743 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-25 02:33:13.726750 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:33:13.726757 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-25 02:33:13.726764 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-25 02:33:13.726770 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:33:13.726777 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-25 02:33:13.726787 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-25 02:33:13.726798 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:33:13.726809 
| orchestrator | 2026-03-25 02:33:13.726819 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2026-03-25 02:33:13.726829 | orchestrator | Wednesday 25 March 2026 02:33:05 +0000 (0:00:01.153) 0:02:25.958 ******* 2026-03-25 02:33:13.726839 | orchestrator | changed: [testbed-node-0] 2026-03-25 02:33:13.726849 | orchestrator | changed: [testbed-node-1] 2026-03-25 02:33:13.726886 | orchestrator | changed: [testbed-node-2] 2026-03-25 02:33:13.726897 | orchestrator | 2026-03-25 02:33:13.726907 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2026-03-25 02:33:13.726916 | orchestrator | Wednesday 25 March 2026 02:33:06 +0000 (0:00:01.326) 0:02:27.285 ******* 2026-03-25 02:33:13.726926 | orchestrator | changed: [testbed-node-0] 2026-03-25 02:33:13.726935 | orchestrator | changed: [testbed-node-1] 2026-03-25 02:33:13.726945 | orchestrator | changed: [testbed-node-2] 2026-03-25 02:33:13.726955 | orchestrator | 2026-03-25 02:33:13.726965 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2026-03-25 02:33:13.726975 | orchestrator | Wednesday 25 March 2026 02:33:08 +0000 (0:00:02.081) 0:02:29.366 ******* 2026-03-25 02:33:13.726985 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:33:13.727010 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:33:13.727017 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:33:13.727023 | orchestrator | 2026-03-25 02:33:13.727029 | orchestrator | TASK [include_role : magnum] *************************************************** 2026-03-25 02:33:13.727052 | orchestrator | Wednesday 25 March 2026 02:33:08 +0000 (0:00:00.355) 0:02:29.722 ******* 2026-03-25 02:33:13.727059 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-25 02:33:13.727066 | orchestrator | 2026-03-25 02:33:13.727072 | orchestrator | TASK [haproxy-config : Copying over magnum 
haproxy config] ********************* 2026-03-25 02:33:13.727078 | orchestrator | Wednesday 25 March 2026 02:33:10 +0000 (0:00:01.313) 0:02:31.035 ******* 2026-03-25 02:33:13.727087 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-25 02:33:13.727098 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-25 02:33:13.727107 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-25 02:33:13.727122 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-25 02:33:13.727138 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-25 02:33:19.275399 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-25 02:33:19.275582 | orchestrator | 2026-03-25 02:33:19.275608 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2026-03-25 02:33:19.275624 | orchestrator | Wednesday 25 March 2026 02:33:13 +0000 (0:00:03.508) 0:02:34.544 ******* 2026-03-25 02:33:19.275641 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': 
{'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-25 02:33:19.275709 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-25 02:33:19.275755 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:33:19.275780 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-25 02:33:19.275815 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-25 02:33:19.275831 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:33:19.275846 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-25 02:33:19.275861 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-25 02:33:19.275887 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:33:19.275901 | orchestrator | 2026-03-25 02:33:19.275914 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2026-03-25 02:33:19.275928 | orchestrator | Wednesday 25 March 2026 02:33:14 +0000 (0:00:00.718) 0:02:35.263 ******* 2026-03-25 02:33:19.275943 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-03-25 02:33:19.275959 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-03-25 02:33:19.275974 | orchestrator | skipping: 
[testbed-node-0] 2026-03-25 02:33:19.275987 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-03-25 02:33:19.276000 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-03-25 02:33:19.276014 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:33:19.276028 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-03-25 02:33:19.276043 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-03-25 02:33:19.276057 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:33:19.276071 | orchestrator | 2026-03-25 02:33:19.276087 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2026-03-25 02:33:19.276096 | orchestrator | Wednesday 25 March 2026 02:33:15 +0000 (0:00:00.922) 0:02:36.186 ******* 2026-03-25 02:33:19.276105 | orchestrator | changed: [testbed-node-0] 2026-03-25 02:33:19.276114 | orchestrator | changed: [testbed-node-1] 2026-03-25 02:33:19.276123 | orchestrator | changed: [testbed-node-2] 2026-03-25 02:33:19.276132 | orchestrator | 2026-03-25 02:33:19.276140 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2026-03-25 02:33:19.276149 | orchestrator | Wednesday 25 March 2026 02:33:16 +0000 (0:00:01.643) 0:02:37.829 ******* 2026-03-25 02:33:19.276158 | orchestrator | changed: [testbed-node-0] 2026-03-25 02:33:19.276167 | orchestrator | changed: 
[testbed-node-1] 2026-03-25 02:33:19.276176 | orchestrator | changed: [testbed-node-2] 2026-03-25 02:33:19.276185 | orchestrator | 2026-03-25 02:33:19.276194 | orchestrator | TASK [include_role : manila] *************************************************** 2026-03-25 02:33:19.276213 | orchestrator | Wednesday 25 March 2026 02:33:19 +0000 (0:00:02.264) 0:02:40.094 ******* 2026-03-25 02:33:24.017150 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-25 02:33:24.017258 | orchestrator | 2026-03-25 02:33:24.017274 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2026-03-25 02:33:24.017286 | orchestrator | Wednesday 25 March 2026 02:33:20 +0000 (0:00:01.185) 0:02:41.280 ******* 2026-03-25 02:33:24.017301 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-25 02:33:24.017342 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-25 02:33:24.017356 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-25 02:33:24.017367 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-25 02:33:24.017393 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-25 02:33:24.017418 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-25 02:33:24.017425 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-25 02:33:24.017501 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-25 02:33:24.017520 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-25 02:33:24.017530 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-25 02:33:24.017547 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-25 02:33:24.017566 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-25 02:33:25.098559 | orchestrator | 2026-03-25 02:33:25.098639 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2026-03-25 02:33:25.098647 | orchestrator | Wednesday 25 March 2026 02:33:24 +0000 (0:00:03.659) 0:02:44.939 ******* 2026-03-25 02:33:25.098673 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 
'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-03-25 02:33:25.098681 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-25 02:33:25.098686 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-25 02:33:25.098691 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-25 02:33:25.098695 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:33:25.098711 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-03-25 02:33:25.098728 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-25 02:33:25.098737 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-25 02:33:25.098741 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-25 02:33:25.098745 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:33:25.098749 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 
'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-03-25 02:33:25.098753 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-25 02:33:25.098760 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-25 02:33:25.098769 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-25 02:33:36.686004 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:33:36.686152 | orchestrator | 2026-03-25 02:33:36.686175 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2026-03-25 02:33:36.686196 | orchestrator | Wednesday 25 March 2026 02:33:25 +0000 (0:00:01.078) 0:02:46.018 ******* 2026-03-25 02:33:36.686209 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-03-25 02:33:36.686224 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-03-25 02:33:36.686238 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:33:36.686252 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-03-25 02:33:36.686266 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-03-25 02:33:36.686281 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:33:36.686294 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-03-25 02:33:36.686308 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-03-25 02:33:36.686321 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:33:36.686335 | orchestrator | 2026-03-25 02:33:36.686348 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2026-03-25 02:33:36.686362 | orchestrator | Wednesday 25 March 2026 02:33:26 +0000 (0:00:00.900) 0:02:46.919 ******* 2026-03-25 02:33:36.686376 | orchestrator | changed: [testbed-node-0] 2026-03-25 02:33:36.686389 | orchestrator | changed: [testbed-node-1] 2026-03-25 02:33:36.686401 | orchestrator | changed: [testbed-node-2] 2026-03-25 02:33:36.686412 | orchestrator | 2026-03-25 02:33:36.686425 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2026-03-25 02:33:36.686482 | orchestrator | Wednesday 25 March 2026 02:33:27 +0000 (0:00:01.294) 0:02:48.214 ******* 2026-03-25 02:33:36.686498 | orchestrator | changed: [testbed-node-0] 2026-03-25 02:33:36.686512 | orchestrator | changed: [testbed-node-1] 2026-03-25 02:33:36.686525 | orchestrator | changed: [testbed-node-2] 2026-03-25 02:33:36.686537 | orchestrator | 2026-03-25 02:33:36.686557 | orchestrator | TASK [include_role : mariadb] ************************************************** 2026-03-25 02:33:36.686573 | orchestrator | Wednesday 25 March 2026 02:33:29 +0000 (0:00:02.165) 0:02:50.380 ******* 2026-03-25 02:33:36.686587 | 
orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-25 02:33:36.686601 | orchestrator | 2026-03-25 02:33:36.686613 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2026-03-25 02:33:36.686627 | orchestrator | Wednesday 25 March 2026 02:33:31 +0000 (0:00:01.539) 0:02:51.919 ******* 2026-03-25 02:33:36.686640 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-25 02:33:36.686654 | orchestrator | 2026-03-25 02:33:36.686667 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2026-03-25 02:33:36.686704 | orchestrator | Wednesday 25 March 2026 02:33:34 +0000 (0:00:03.179) 0:02:55.099 ******* 2026-03-25 02:33:36.686756 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server 
testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-25 02:33:36.686778 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-25 02:33:36.686792 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:33:36.686813 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 
'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-25 02:33:36.686838 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-25 02:33:36.686852 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:33:36.686876 | orchestrator 
| skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-25 02:33:39.005759 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 
'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-25 02:33:39.005822 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:33:39.005831 | orchestrator | 2026-03-25 02:33:39.005839 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2026-03-25 02:33:39.005847 | orchestrator | Wednesday 25 March 2026 02:33:36 +0000 (0:00:02.408) 0:02:57.507 ******* 2026-03-25 02:33:39.005880 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' 
server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-25 02:33:39.005889 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-25 02:33:39.005896 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:33:39.005915 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-25 02:33:39.005933 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  
2026-03-25 02:33:39.005941 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:33:39.005948 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-25 02:33:39.005959 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-25 02:33:48.972042 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:33:48.972141 | orchestrator | 2026-03-25 02:33:48.972152 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2026-03-25 02:33:48.972165 | orchestrator | Wednesday 25 March 2026 02:33:38 +0000 (0:00:02.319) 0:02:59.826 ******* 2026-03-25 02:33:48.972174 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-25 02:33:48.972204 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 
rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-25 02:33:48.972225 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:33:48.972233 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-25 02:33:48.972241 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-25 02:33:48.972249 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:33:48.972256 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 
3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-25 02:33:48.972264 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-25 02:33:48.972270 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:33:48.972277 | orchestrator | 2026-03-25 02:33:48.972284 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2026-03-25 02:33:48.972290 | orchestrator | Wednesday 25 March 2026 02:33:41 +0000 (0:00:02.824) 0:03:02.651 ******* 2026-03-25 02:33:48.972297 | orchestrator | changed: [testbed-node-0] 2026-03-25 02:33:48.972324 | orchestrator | changed: [testbed-node-1] 2026-03-25 02:33:48.972332 | orchestrator | changed: [testbed-node-2] 2026-03-25 02:33:48.972340 | orchestrator | 2026-03-25 02:33:48.972346 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2026-03-25 02:33:48.972353 | orchestrator | Wednesday 25 March 2026 02:33:43 +0000 (0:00:02.017) 0:03:04.668 ******* 2026-03-25 02:33:48.972359 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:33:48.972363 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:33:48.972367 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:33:48.972371 | orchestrator | 2026-03-25 02:33:48.972375 | 
orchestrator | TASK [include_role : masakari] ************************************************* 2026-03-25 02:33:48.972379 | orchestrator | Wednesday 25 March 2026 02:33:45 +0000 (0:00:01.592) 0:03:06.261 ******* 2026-03-25 02:33:48.972383 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:33:48.972387 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:33:48.972391 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:33:48.972395 | orchestrator | 2026-03-25 02:33:48.972399 | orchestrator | TASK [include_role : memcached] ************************************************ 2026-03-25 02:33:48.972403 | orchestrator | Wednesday 25 March 2026 02:33:45 +0000 (0:00:00.368) 0:03:06.629 ******* 2026-03-25 02:33:48.972410 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-25 02:33:48.972417 | orchestrator | 2026-03-25 02:33:48.972423 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2026-03-25 02:33:48.972429 | orchestrator | Wednesday 25 March 2026 02:33:47 +0000 (0:00:01.496) 0:03:08.126 ******* 2026-03-25 02:33:48.972463 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-03-25 02:33:48.972475 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-03-25 02:33:48.972482 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-03-25 02:33:48.972490 | orchestrator | 2026-03-25 02:33:48.972497 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2026-03-25 02:33:48.972511 | orchestrator | Wednesday 25 March 2026 02:33:48 +0000 (0:00:01.506) 0:03:09.632 ******* 2026-03-25 02:33:48.972525 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 
'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-03-25 02:33:58.208626 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:33:58.208754 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-03-25 02:33:58.208774 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:33:58.208785 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-03-25 02:33:58.208795 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:33:58.208804 | orchestrator | 2026-03-25 02:33:58.208814 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2026-03-25 02:33:58.208824 | orchestrator | Wednesday 25 March 2026 02:33:49 +0000 (0:00:00.462) 0:03:10.094 ******* 2026-03-25 02:33:58.208835 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-03-25 02:33:58.208845 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-03-25 02:33:58.208854 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:33:58.208863 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:33:58.208872 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-03-25 02:33:58.208904 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:33:58.208913 | orchestrator | 2026-03-25 02:33:58.208961 | 
orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2026-03-25 02:33:58.208971 | orchestrator | Wednesday 25 March 2026 02:33:50 +0000 (0:00:00.964) 0:03:11.059 ******* 2026-03-25 02:33:58.208980 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:33:58.208988 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:33:58.208997 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:33:58.209006 | orchestrator | 2026-03-25 02:33:58.209014 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2026-03-25 02:33:58.209023 | orchestrator | Wednesday 25 March 2026 02:33:50 +0000 (0:00:00.550) 0:03:11.609 ******* 2026-03-25 02:33:58.209032 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:33:58.209040 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:33:58.209049 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:33:58.209057 | orchestrator | 2026-03-25 02:33:58.209066 | orchestrator | TASK [include_role : mistral] ************************************************** 2026-03-25 02:33:58.209074 | orchestrator | Wednesday 25 March 2026 02:33:52 +0000 (0:00:01.401) 0:03:13.011 ******* 2026-03-25 02:33:58.209083 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:33:58.209094 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:33:58.209103 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:33:58.209113 | orchestrator | 2026-03-25 02:33:58.209123 | orchestrator | TASK [include_role : neutron] ************************************************** 2026-03-25 02:33:58.209133 | orchestrator | Wednesday 25 March 2026 02:33:52 +0000 (0:00:00.338) 0:03:13.349 ******* 2026-03-25 02:33:58.209142 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-25 02:33:58.209152 | orchestrator | 2026-03-25 02:33:58.209163 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 
2026-03-25 02:33:58.209173 | orchestrator | Wednesday 25 March 2026 02:33:54 +0000 (0:00:01.622) 0:03:14.971 ******* 2026-03-25 02:33:58.209202 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-25 02:33:58.209220 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-25 02:33:58.209233 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-25 02:33:58.209252 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-25 02:33:58.209263 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-25 02:33:58.209282 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-25 02:33:58.340283 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-25 02:33:58.340402 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-25 02:33:58.340420 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-25 02:33:58.340494 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-25 02:33:58.340509 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-25 02:33:58.340521 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-25 02:33:58.340549 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-03-25 02:33:58.340566 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 
'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-25 02:33:58.340584 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-25 02:33:58.340594 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': 
'30'}}})  2026-03-25 02:33:58.340606 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-25 02:33:58.340616 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-25 02:33:58.340640 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-25 02:33:58.547578 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-25 02:33:58.547680 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 
6640'], 'timeout': '30'}}})  2026-03-25 02:33:58.547692 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-25 02:33:58.547700 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-25 02:33:58.547707 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-25 02:33:58.547747 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-25 02:33:58.547768 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-25 02:33:58.547777 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-25 02:33:58.547787 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-25 02:33:58.547799 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-25 02:33:58.547810 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-25 02:33:58.547824 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-25 02:33:58.547849 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-25 02:33:58.749684 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 
'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-25 02:33:58.749797 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-25 02:33:58.749815 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-25 02:33:58.749826 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-25 02:33:58.749856 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-25 02:33:58.749886 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  
2026-03-25 02:33:58.749926 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-25 02:33:58.749943 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-25 02:33:58.749955 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-03-25 02:33:58.749967 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 
'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-25 02:33:58.749980 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-25 02:33:58.750001 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': 
False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-25 02:33:58.750120 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-25 02:33:59.978963 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-25 02:33:59.979061 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2026-03-25 02:33:59.979077 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2026-03-25 02:33:59.979083 | orchestrator |
2026-03-25 02:33:59.979088 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] ***
2026-03-25 02:33:59.979117 | orchestrator | Wednesday 25 March 2026 02:33:58 +0000 (0:00:04.601) 0:03:19.572 *******
2026-03-25 02:33:59.979139 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-25 02:33:59.979166 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-25 02:33:59.979176 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-25 02:33:59.979184 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-25 02:33:59.979192 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-25 02:33:59.979234 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 
'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-25 02:33:59.979245 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-25 02:33:59.979254 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-25 02:33:59.979268 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-25 02:34:00.086537 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-25 02:34:00.086619 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-25 02:34:00.086647 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 
'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-03-25 02:34:00.086667 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-25 02:34:00.086675 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-25 02:34:00.086699 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-25 02:34:00.086713 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-25 02:34:00.086728 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-25 02:34:00.086755 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-25 02:34:00.086769 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-25 02:34:00.086780 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-25 02:34:00.086805 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:34:00.086835 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-25 02:34:00.190937 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-25 02:34:00.191083 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-25 02:34:00.191123 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-25 02:34:00.191145 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 
'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-25 02:34:00.191161 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-25 02:34:00.191178 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-25 02:34:00.191215 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-25 02:34:00.191243 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-25 02:34:00.191260 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-25 02:34:00.191277 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-25 02:34:00.191293 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-25 02:34:00.191373 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-25 02:34:00.318635 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-03-25 02:34:00.318728 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-25 02:34:00.318742 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  
2026-03-25 02:34:00.318766 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-25 02:34:00.318774 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-25 02:34:00.318782 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-25 
02:34:00.318806 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-25 02:34:00.318836 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-25 02:34:00.318849 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 
'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-25 02:34:00.318854 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-25 02:34:00.318861 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:34:00.318871 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-03-25 02:34:00.318882 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 
'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-25 02:34:00.318895 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-25 02:34:11.750627 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': 
False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-25 02:34:11.750763 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-25 02:34:11.750783 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:34:11.750794 | orchestrator | 2026-03-25 02:34:11.750804 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2026-03-25 02:34:11.750814 | orchestrator | Wednesday 25 March 2026 02:34:00 +0000 (0:00:01.659) 0:03:21.232 ******* 2026-03-25 02:34:11.750824 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-03-25 02:34:11.750835 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-03-25 02:34:11.750844 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:34:11.750852 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-03-25 02:34:11.750860 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-03-25 02:34:11.750869 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:34:11.750878 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-03-25 02:34:11.750887 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-03-25 02:34:11.750918 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:34:11.750927 | orchestrator | 2026-03-25 02:34:11.750936 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2026-03-25 02:34:11.750944 | orchestrator | Wednesday 25 March 2026 02:34:02 +0000 (0:00:02.204) 0:03:23.436 ******* 2026-03-25 02:34:11.750952 | orchestrator | changed: [testbed-node-0] 2026-03-25 02:34:11.750961 | orchestrator | changed: [testbed-node-1] 2026-03-25 02:34:11.750968 | orchestrator | changed: [testbed-node-2] 2026-03-25 02:34:11.750973 | orchestrator | 2026-03-25 02:34:11.750978 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2026-03-25 02:34:11.750983 | orchestrator | Wednesday 25 March 2026 02:34:03 +0000 (0:00:01.335) 0:03:24.772 ******* 2026-03-25 02:34:11.750988 | orchestrator | changed: [testbed-node-0] 2026-03-25 02:34:11.750993 | orchestrator | changed: [testbed-node-1] 2026-03-25 02:34:11.750998 | orchestrator | changed: [testbed-node-2] 2026-03-25 02:34:11.751003 | orchestrator | 2026-03-25 02:34:11.751008 | orchestrator | TASK [include_role : placement] ************************************************ 2026-03-25 
02:34:11.751013 | orchestrator | Wednesday 25 March 2026 02:34:06 +0000 (0:00:02.194) 0:03:26.966 ******* 2026-03-25 02:34:11.751019 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-25 02:34:11.751024 | orchestrator | 2026-03-25 02:34:11.751029 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2026-03-25 02:34:11.751052 | orchestrator | Wednesday 25 March 2026 02:34:07 +0000 (0:00:01.407) 0:03:28.374 ******* 2026-03-25 02:34:11.751063 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-25 02:34:11.751079 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-25 02:34:11.751091 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-25 02:34:11.751106 | orchestrator | 2026-03-25 02:34:11.751115 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2026-03-25 02:34:11.751125 | orchestrator | Wednesday 25 March 2026 02:34:11 +0000 (0:00:03.610) 0:03:31.985 ******* 2026-03-25 02:34:11.751135 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-25 02:34:11.751145 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:34:11.751164 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-25 02:34:22.527543 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:34:22.527733 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-25 02:34:22.528483 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:34:22.528514 | orchestrator | 2026-03-25 02:34:22.528536 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2026-03-25 02:34:22.528556 | orchestrator | Wednesday 25 March 2026 02:34:11 +0000 (0:00:00.585) 0:03:32.571 ******* 2026-03-25 02:34:22.528577 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-03-25 02:34:22.528635 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-03-25 02:34:22.528658 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:34:22.528677 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-03-25 02:34:22.528696 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-03-25 02:34:22.528715 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:34:22.528734 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-03-25 02:34:22.528753 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-03-25 02:34:22.528771 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:34:22.528789 | orchestrator | 2026-03-25 02:34:22.528807 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2026-03-25 02:34:22.528825 | orchestrator | Wednesday 25 March 2026 02:34:12 +0000 (0:00:00.859) 0:03:33.430 ******* 2026-03-25 02:34:22.528843 | orchestrator | changed: [testbed-node-0] 2026-03-25 02:34:22.528861 | orchestrator | changed: [testbed-node-1] 2026-03-25 02:34:22.528879 | orchestrator | changed: [testbed-node-2] 2026-03-25 02:34:22.528897 | orchestrator | 2026-03-25 02:34:22.528916 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2026-03-25 02:34:22.528933 | orchestrator | Wednesday 25 March 2026 02:34:14 +0000 (0:00:01.918) 0:03:35.349 ******* 2026-03-25 02:34:22.528952 | orchestrator | changed: [testbed-node-0] 2026-03-25 02:34:22.528971 | orchestrator | changed: [testbed-node-1] 2026-03-25 02:34:22.528991 | orchestrator | changed: [testbed-node-2] 2026-03-25 02:34:22.529010 | orchestrator | 2026-03-25 02:34:22.529029 | orchestrator | TASK [include_role : nova] ***************************************************** 2026-03-25 02:34:22.529049 | orchestrator | Wednesday 25 March 
2026 02:34:16 +0000 (0:00:01.889) 0:03:37.238 ******* 2026-03-25 02:34:22.529069 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-25 02:34:22.529083 | orchestrator | 2026-03-25 02:34:22.529101 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2026-03-25 02:34:22.529113 | orchestrator | Wednesday 25 March 2026 02:34:18 +0000 (0:00:01.678) 0:03:38.916 ******* 2026-03-25 02:34:22.529158 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-25 02:34:22.529208 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': 
True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-25 02:34:22.529232 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-25 02:34:22.529252 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 
'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-25 02:34:22.529286 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-25 02:34:23.579696 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-25 02:34:23.579810 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-25 02:34:23.579824 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-25 02:34:23.579836 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-25 02:34:23.579847 | orchestrator | 2026-03-25 02:34:23.579859 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2026-03-25 02:34:23.579870 | orchestrator | Wednesday 25 March 2026 02:34:22 +0000 (0:00:04.431) 0:03:43.348 ******* 2026-03-25 02:34:23.579882 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-25 02:34:23.579935 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 
'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-25 02:34:23.579956 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-25 02:34:23.579966 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:34:23.579978 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-25 02:34:23.579989 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-25 02:34:23.579999 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-25 02:34:23.580009 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:34:23.580032 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 
'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-25 02:34:37.471634 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-25 02:34:37.471747 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-25 02:34:37.471760 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:34:37.471769 | orchestrator | 2026-03-25 02:34:37.471777 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2026-03-25 02:34:37.471785 | orchestrator | Wednesday 25 March 2026 02:34:23 +0000 (0:00:01.053) 0:03:44.401 ******* 2026-03-25 02:34:37.471794 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-25 02:34:37.471802 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-25 02:34:37.471811 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-25 02:34:37.471819 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-25 02:34:37.471827 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:34:37.471835 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-25 02:34:37.471841 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-25 02:34:37.471872 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-25 02:34:37.471879 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-25 02:34:37.471886 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:34:37.471892 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-25 02:34:37.471899 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-25 02:34:37.471920 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-25 02:34:37.471944 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-25 02:34:37.471951 | 
orchestrator | skipping: [testbed-node-2] 2026-03-25 02:34:37.471958 | orchestrator | 2026-03-25 02:34:37.471964 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2026-03-25 02:34:37.471969 | orchestrator | Wednesday 25 March 2026 02:34:25 +0000 (0:00:01.446) 0:03:45.848 ******* 2026-03-25 02:34:37.471981 | orchestrator | changed: [testbed-node-0] 2026-03-25 02:34:37.471988 | orchestrator | changed: [testbed-node-1] 2026-03-25 02:34:37.471997 | orchestrator | changed: [testbed-node-2] 2026-03-25 02:34:37.472005 | orchestrator | 2026-03-25 02:34:37.472012 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2026-03-25 02:34:37.472019 | orchestrator | Wednesday 25 March 2026 02:34:26 +0000 (0:00:01.441) 0:03:47.289 ******* 2026-03-25 02:34:37.472025 | orchestrator | changed: [testbed-node-0] 2026-03-25 02:34:37.472031 | orchestrator | changed: [testbed-node-1] 2026-03-25 02:34:37.472038 | orchestrator | changed: [testbed-node-2] 2026-03-25 02:34:37.472044 | orchestrator | 2026-03-25 02:34:37.472050 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2026-03-25 02:34:37.472057 | orchestrator | Wednesday 25 March 2026 02:34:28 +0000 (0:00:02.132) 0:03:49.421 ******* 2026-03-25 02:34:37.472063 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-25 02:34:37.472068 | orchestrator | 2026-03-25 02:34:37.472075 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2026-03-25 02:34:37.472081 | orchestrator | Wednesday 25 March 2026 02:34:30 +0000 (0:00:01.814) 0:03:51.236 ******* 2026-03-25 02:34:37.472088 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2026-03-25 02:34:37.472096 | orchestrator | 2026-03-25 02:34:37.472102 | orchestrator | 
TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2026-03-25 02:34:37.472108 | orchestrator | Wednesday 25 March 2026 02:34:31 +0000 (0:00:00.980) 0:03:52.217 ******* 2026-03-25 02:34:37.472116 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-03-25 02:34:37.472136 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-03-25 02:34:37.472141 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-03-25 02:34:37.472147 | orchestrator | 2026-03-25 02:34:37.472154 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single 
external frontend] *** 2026-03-25 02:34:37.472161 | orchestrator | Wednesday 25 March 2026 02:34:35 +0000 (0:00:04.360) 0:03:56.578 ******* 2026-03-25 02:34:37.472167 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-25 02:34:37.472174 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:34:37.472194 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-25 02:34:58.917824 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:34:58.917932 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-25 02:34:58.917944 | orchestrator | skipping: [testbed-node-1] 2026-03-25 
02:34:58.917951 | orchestrator | 2026-03-25 02:34:58.917958 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2026-03-25 02:34:58.917967 | orchestrator | Wednesday 25 March 2026 02:34:37 +0000 (0:00:01.711) 0:03:58.289 ******* 2026-03-25 02:34:58.917975 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-25 02:34:58.917983 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-25 02:34:58.918083 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:34:58.918093 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-25 02:34:58.918100 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-25 02:34:58.918107 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:34:58.918114 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-25 02:34:58.918121 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-25 02:34:58.918127 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:34:58.918133 | orchestrator | 2026-03-25 02:34:58.918139 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-03-25 02:34:58.918145 | orchestrator | Wednesday 25 March 2026 02:34:39 +0000 (0:00:01.780) 0:04:00.070 ******* 2026-03-25 02:34:58.918152 | orchestrator | changed: [testbed-node-1] 2026-03-25 02:34:58.918158 | orchestrator | changed: [testbed-node-0] 2026-03-25 02:34:58.918165 | orchestrator | changed: [testbed-node-2] 2026-03-25 02:34:58.918170 | orchestrator | 2026-03-25 02:34:58.918176 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-03-25 02:34:58.918182 | orchestrator | Wednesday 25 March 2026 02:34:42 +0000 (0:00:02.947) 0:04:03.018 ******* 2026-03-25 02:34:58.918188 | orchestrator | changed: [testbed-node-0] 2026-03-25 02:34:58.918194 | orchestrator | changed: [testbed-node-1] 2026-03-25 02:34:58.918200 | orchestrator | changed: [testbed-node-2] 2026-03-25 02:34:58.918206 | orchestrator | 2026-03-25 02:34:58.918212 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2026-03-25 02:34:58.918218 | orchestrator | Wednesday 25 March 2026 02:34:45 +0000 (0:00:03.460) 0:04:06.478 ******* 2026-03-25 02:34:58.918227 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2026-03-25 02:34:58.918234 | orchestrator | 2026-03-25 02:34:58.918241 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2026-03-25 02:34:58.918248 | orchestrator | Wednesday 25 March 2026 02:34:46 +0000 (0:00:01.262) 0:04:07.741 ******* 2026-03-25 02:34:58.918271 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-25 02:34:58.918278 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:34:58.918301 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-25 02:34:58.918317 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:34:58.918323 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-25 02:34:58.918330 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:34:58.918337 | orchestrator | 2026-03-25 02:34:58.918343 | orchestrator | TASK [haproxy-config : Add configuration for 
nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2026-03-25 02:34:58.918349 | orchestrator | Wednesday 25 March 2026 02:34:48 +0000 (0:00:01.228) 0:04:08.969 ******* 2026-03-25 02:34:58.918356 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-25 02:34:58.918363 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:34:58.918370 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-25 02:34:58.918376 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:34:58.918383 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': 
['timeout tunnel 1h']}}}})  2026-03-25 02:34:58.918390 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:34:58.918396 | orchestrator | 2026-03-25 02:34:58.918403 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2026-03-25 02:34:58.918409 | orchestrator | Wednesday 25 March 2026 02:34:49 +0000 (0:00:01.506) 0:04:10.476 ******* 2026-03-25 02:34:58.918416 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:34:58.918422 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:34:58.918429 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:34:58.918435 | orchestrator | 2026-03-25 02:34:58.918441 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-03-25 02:34:58.918447 | orchestrator | Wednesday 25 March 2026 02:34:51 +0000 (0:00:01.826) 0:04:12.303 ******* 2026-03-25 02:34:58.918453 | orchestrator | ok: [testbed-node-0] 2026-03-25 02:34:58.918480 | orchestrator | ok: [testbed-node-1] 2026-03-25 02:34:58.918487 | orchestrator | ok: [testbed-node-2] 2026-03-25 02:34:58.918493 | orchestrator | 2026-03-25 02:34:58.918500 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-03-25 02:34:58.918506 | orchestrator | Wednesday 25 March 2026 02:34:54 +0000 (0:00:03.020) 0:04:15.324 ******* 2026-03-25 02:34:58.918518 | orchestrator | ok: [testbed-node-0] 2026-03-25 02:34:58.918525 | orchestrator | ok: [testbed-node-1] 2026-03-25 02:34:58.918531 | orchestrator | ok: [testbed-node-2] 2026-03-25 02:34:58.918538 | orchestrator | 2026-03-25 02:34:58.918549 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2026-03-25 02:34:58.918556 | orchestrator | Wednesday 25 March 2026 02:34:57 +0000 (0:00:03.079) 0:04:18.403 ******* 2026-03-25 02:34:58.918563 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, 
testbed-node-2 => (item=nova-serialproxy) 2026-03-25 02:34:58.918570 | orchestrator | 2026-03-25 02:34:58.918581 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2026-03-25 02:35:15.268728 | orchestrator | Wednesday 25 March 2026 02:34:58 +0000 (0:00:01.332) 0:04:19.736 ******* 2026-03-25 02:35:15.268836 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-25 02:35:15.268851 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:35:15.268861 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-25 02:35:15.268868 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:35:15.268876 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': 
False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-25 02:35:15.268883 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:35:15.268890 | orchestrator | 2026-03-25 02:35:15.268899 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2026-03-25 02:35:15.268907 | orchestrator | Wednesday 25 March 2026 02:35:00 +0000 (0:00:01.392) 0:04:21.128 ******* 2026-03-25 02:35:15.268914 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-25 02:35:15.268920 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:35:15.268927 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-25 02:35:15.268955 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:35:15.268963 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': 
False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-25 02:35:15.268971 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:35:15.268977 | orchestrator | 2026-03-25 02:35:15.268998 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2026-03-25 02:35:15.269006 | orchestrator | Wednesday 25 March 2026 02:35:01 +0000 (0:00:01.515) 0:04:22.644 ******* 2026-03-25 02:35:15.269013 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:35:15.269019 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:35:15.269026 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:35:15.269032 | orchestrator | 2026-03-25 02:35:15.269039 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-03-25 02:35:15.269073 | orchestrator | Wednesday 25 March 2026 02:35:03 +0000 (0:00:02.068) 0:04:24.712 ******* 2026-03-25 02:35:15.269081 | orchestrator | ok: [testbed-node-0] 2026-03-25 02:35:15.269088 | orchestrator | ok: [testbed-node-1] 2026-03-25 02:35:15.269095 | orchestrator | ok: [testbed-node-2] 2026-03-25 02:35:15.269101 | orchestrator | 2026-03-25 02:35:15.269107 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-03-25 02:35:15.269113 | orchestrator | Wednesday 25 March 2026 02:35:06 +0000 (0:00:02.597) 0:04:27.310 ******* 2026-03-25 02:35:15.269119 | orchestrator | ok: [testbed-node-0] 2026-03-25 02:35:15.269125 | orchestrator | ok: [testbed-node-1] 2026-03-25 02:35:15.269131 | orchestrator | ok: [testbed-node-2] 2026-03-25 02:35:15.269138 | orchestrator | 2026-03-25 
02:35:15.269144 | orchestrator | TASK [include_role : octavia] ************************************************** 2026-03-25 02:35:15.269150 | orchestrator | Wednesday 25 March 2026 02:35:09 +0000 (0:00:03.465) 0:04:30.776 ******* 2026-03-25 02:35:15.269156 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-25 02:35:15.269163 | orchestrator | 2026-03-25 02:35:15.269169 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2026-03-25 02:35:15.269176 | orchestrator | Wednesday 25 March 2026 02:35:11 +0000 (0:00:01.485) 0:04:32.261 ******* 2026-03-25 02:35:15.269184 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-25 02:35:15.269192 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-25 02:35:15.269209 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-25 02:35:15.269218 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-25 02:35:15.269239 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-25 02:35:16.018432 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-25 02:35:16.018702 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-25 02:35:16.018737 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': 
['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-25 02:35:16.018807 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-25 02:35:16.018835 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 
'no'}}}}) 2026-03-25 02:35:16.018858 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-25 02:35:16.018908 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-25 02:35:16.018931 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-25 02:35:16.019009 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': 
{'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-25 02:35:16.019051 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-25 02:35:16.019071 | orchestrator | 2026-03-25 02:35:16.019092 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2026-03-25 02:35:16.019113 | orchestrator | Wednesday 25 March 2026 02:35:15 +0000 (0:00:03.973) 0:04:36.234 ******* 2026-03-25 02:35:16.019144 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-25 02:35:16.019164 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-25 02:35:16.019198 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-25 02:35:17.161386 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 
'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-25 02:35:17.161519 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-25 02:35:17.161556 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:35:17.161566 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-25 02:35:17.161576 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-25 02:35:17.161595 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-25 02:35:17.161599 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-25 02:35:17.161618 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-25 02:35:17.161634 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:35:17.161643 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-25 02:35:17.161649 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': 
['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-25 02:35:17.161656 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-25 02:35:17.161667 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-25 02:35:17.161673 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-25 02:35:17.161678 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:35:17.161684 | orchestrator | 2026-03-25 02:35:17.161690 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2026-03-25 02:35:17.161696 | orchestrator | Wednesday 25 March 2026 02:35:16 +0000 (0:00:00.753) 0:04:36.988 ******* 2026-03-25 02:35:17.161710 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-25 02:35:29.395948 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-25 02:35:29.396086 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:35:29.396112 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-25 02:35:29.396129 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-25 02:35:29.396146 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:35:29.396162 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-25 02:35:29.396179 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-25 02:35:29.396194 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:35:29.396208 | orchestrator | 2026-03-25 02:35:29.396224 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2026-03-25 02:35:29.396240 | orchestrator | Wednesday 25 March 2026 02:35:17 +0000 (0:00:00.995) 0:04:37.983 ******* 2026-03-25 02:35:29.396254 | orchestrator | changed: [testbed-node-0] 2026-03-25 02:35:29.396269 | orchestrator | changed: [testbed-node-1] 2026-03-25 02:35:29.396283 | orchestrator | changed: [testbed-node-2] 2026-03-25 02:35:29.396297 | orchestrator | 2026-03-25 02:35:29.396311 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2026-03-25 02:35:29.396326 | orchestrator | Wednesday 25 March 2026 02:35:18 +0000 (0:00:01.783) 0:04:39.766 ******* 2026-03-25 02:35:29.396340 | orchestrator | changed: [testbed-node-0] 2026-03-25 02:35:29.396354 | orchestrator | changed: [testbed-node-1] 2026-03-25 02:35:29.396368 | orchestrator | changed: [testbed-node-2] 2026-03-25 02:35:29.396382 | orchestrator | 2026-03-25 02:35:29.396396 | orchestrator | TASK [include_role : opensearch] *********************************************** 2026-03-25 02:35:29.396410 | orchestrator | Wednesday 25 March 2026 02:35:21 +0000 (0:00:02.250) 0:04:42.017 ******* 2026-03-25 02:35:29.396426 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-25 02:35:29.396442 | orchestrator | 2026-03-25 02:35:29.396457 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2026-03-25 02:35:29.396505 | orchestrator | Wednesday 25 March 2026 02:35:22 +0000 (0:00:01.505) 0:04:43.522 ******* 2026-03-25 
02:35:29.396546 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-25 02:35:29.396584 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-25 02:35:29.396659 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-25 02:35:29.396682 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-25 02:35:29.396710 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 
'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-25 02:35:29.396728 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 
'password'}}}}) 2026-03-25 02:35:29.396755 | orchestrator | 2026-03-25 02:35:29.396770 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2026-03-25 02:35:29.396788 | orchestrator | Wednesday 25 March 2026 02:35:28 +0000 (0:00:05.525) 0:04:49.047 ******* 2026-03-25 02:35:29.396816 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-25 02:35:34.668767 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-25 02:35:34.668865 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:35:34.668888 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-25 02:35:34.668898 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 
'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-25 02:35:34.668925 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:35:34.668932 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-25 02:35:34.668952 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-25 02:35:34.668959 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:35:34.668965 | orchestrator | 2026-03-25 02:35:34.668971 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2026-03-25 02:35:34.668979 | orchestrator | Wednesday 25 March 2026 02:35:29 +0000 (0:00:01.167) 0:04:50.215 ******* 2026-03-25 02:35:34.668986 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-03-25 02:35:34.668993 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-25 02:35:34.669002 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-25 02:35:34.669015 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:35:34.669025 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-03-25 02:35:34.669041 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-25 02:35:34.669047 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-25 02:35:34.669061 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:35:34.669067 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-03-25 02:35:34.669073 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-25 02:35:34.669078 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-25 02:35:34.669084 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:35:34.669090 | orchestrator | 2026-03-25 02:35:34.669096 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2026-03-25 02:35:34.669101 | orchestrator | Wednesday 25 March 2026 02:35:30 +0000 (0:00:00.986) 0:04:51.202 ******* 2026-03-25 02:35:34.669107 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:35:34.669113 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:35:34.669118 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:35:34.669124 | orchestrator | 2026-03-25 02:35:34.669129 | 
orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2026-03-25 02:35:34.669135 | orchestrator | Wednesday 25 March 2026 02:35:30 +0000 (0:00:00.492) 0:04:51.694 ******* 2026-03-25 02:35:34.669141 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:35:34.669146 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:35:34.669152 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:35:34.669157 | orchestrator | 2026-03-25 02:35:34.669163 | orchestrator | TASK [include_role : prometheus] *********************************************** 2026-03-25 02:35:34.669169 | orchestrator | Wednesday 25 March 2026 02:35:32 +0000 (0:00:01.904) 0:04:53.599 ******* 2026-03-25 02:35:34.669179 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-25 02:35:37.175315 | orchestrator | 2026-03-25 02:35:37.175390 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2026-03-25 02:35:37.175398 | orchestrator | Wednesday 25 March 2026 02:35:34 +0000 (0:00:01.892) 0:04:55.491 ******* 2026-03-25 02:35:37.175404 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-25 02:35:37.175431 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-25 02:35:37.175447 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 02:35:37.175452 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 02:35:37.175457 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-25 02:35:37.175461 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-25 02:35:37.175560 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-25 02:35:37.175569 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': 
['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 02:35:37.175583 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 02:35:37.175589 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-25 02:35:37.175599 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 
'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-25 02:35:37.175605 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-25 02:35:37.175611 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 02:35:37.175623 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 02:35:38.718758 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 
'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-25 02:35:38.718877 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-25 02:35:38.718912 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-03-25 02:35:38.718930 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 02:35:38.718946 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 02:35:38.718961 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-25 02:35:38.718998 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-25 02:35:38.719024 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 
45s']}}}})  2026-03-25 02:35:38.719046 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-25 02:35:38.719062 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 02:35:38.719078 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': 
['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 02:35:38.719102 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-03-25 02:35:39.504616 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 02:35:39.504706 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-25 02:35:39.504735 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 02:35:39.504745 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-25 02:35:39.504752 | orchestrator | 2026-03-25 02:35:39.504761 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2026-03-25 02:35:39.504769 | orchestrator | Wednesday 25 March 2026 02:35:38 +0000 (0:00:04.229) 0:04:59.721 ******* 2026-03-25 02:35:39.504777 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': 
['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-03-25 02:35:39.504785 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-25 02:35:39.504821 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 02:35:39.504846 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 02:35:39.504855 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-25 02:35:39.504869 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-03-25 02:35:39.504878 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 
'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-03-25 02:35:39.504886 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-03-25 02:35:39.504904 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-25 02:35:39.718157 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 02:35:39.718316 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 02:35:39.718336 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 02:35:39.718368 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 
'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 02:35:39.718456 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-25 02:35:39.718504 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-25 02:35:39.718543 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:35:39.718583 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': 
['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-03-25 02:35:39.718601 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-03-25 02:35:39.718622 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 02:35:39.718635 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 02:35:39.718648 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-25 02:35:39.718661 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:35:39.718675 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 
'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-03-25 02:35:39.718697 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-25 02:35:39.718720 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 02:35:42.406830 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 02:35:42.406970 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 
'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-25 02:35:42.406992 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-03-25 02:35:42.407009 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-03-25 02:35:42.407050 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 02:35:42.407063 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 02:35:42.407094 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-25 02:35:42.407128 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:35:42.407153 | orchestrator | 2026-03-25 02:35:42.407166 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2026-03-25 02:35:42.407178 | orchestrator | Wednesday 25 March 2026 02:35:39 +0000 (0:00:00.974) 0:05:00.695 ******* 2026-03-25 02:35:42.407196 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-03-25 02:35:42.407211 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-03-25 02:35:42.407226 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-25 02:35:42.407240 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-25 02:35:42.407253 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:35:42.407264 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-03-25 02:35:42.407285 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': 
{'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-03-25 02:35:42.407304 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-25 02:35:42.407325 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-25 02:35:42.407344 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:35:42.407367 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-03-25 02:35:42.407389 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-03-25 02:35:42.407410 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-25 02:35:42.407429 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 
'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-25 02:35:42.407440 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:35:42.407452 | orchestrator | 2026-03-25 02:35:42.407463 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2026-03-25 02:35:42.407503 | orchestrator | Wednesday 25 March 2026 02:35:41 +0000 (0:00:01.995) 0:05:02.691 ******* 2026-03-25 02:35:42.407516 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:35:42.407535 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:35:51.860142 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:35:51.860274 | orchestrator | 2026-03-25 02:35:51.860290 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2026-03-25 02:35:51.860302 | orchestrator | Wednesday 25 March 2026 02:35:42 +0000 (0:00:00.543) 0:05:03.234 ******* 2026-03-25 02:35:51.860313 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:35:51.860323 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:35:51.860333 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:35:51.860343 | orchestrator | 2026-03-25 02:35:51.860353 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2026-03-25 02:35:51.860362 | orchestrator | Wednesday 25 March 2026 02:35:43 +0000 (0:00:01.466) 0:05:04.701 ******* 2026-03-25 02:35:51.860372 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-25 02:35:51.860382 | orchestrator | 2026-03-25 02:35:51.860391 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2026-03-25 02:35:51.860401 | orchestrator | Wednesday 25 March 2026 02:35:45 +0000 (0:00:01.998) 0:05:06.700 ******* 2026-03-25 02:35:51.860414 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-25 02:35:51.860457 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-25 02:35:51.860553 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': 
{'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-25 02:35:51.860569 | orchestrator | 2026-03-25 02:35:51.860579 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2026-03-25 02:35:51.860590 | orchestrator | Wednesday 25 March 2026 02:35:48 +0000 (0:00:02.315) 0:05:09.015 ******* 2026-03-25 02:35:51.860629 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-25 02:35:51.860651 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-25 02:35:51.860664 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:35:51.860676 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:35:51.860688 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': 
['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-25 02:35:51.860700 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:35:51.860711 | orchestrator | 2026-03-25 02:35:51.860723 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2026-03-25 02:35:51.860732 | orchestrator | Wednesday 25 March 2026 02:35:48 +0000 (0:00:00.438) 0:05:09.454 ******* 2026-03-25 02:35:51.860744 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-03-25 02:35:51.860756 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-03-25 02:35:51.860765 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:35:51.860775 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:35:51.860785 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-03-25 02:35:51.860794 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:35:51.860804 | orchestrator | 2026-03-25 02:35:51.860814 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2026-03-25 02:35:51.860823 | orchestrator | Wednesday 25 March 2026 02:35:49 +0000 (0:00:00.749) 0:05:10.203 ******* 2026-03-25 02:35:51.860833 | orchestrator | skipping: [testbed-node-0] 
2026-03-25 02:35:51.860843 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:35:51.860857 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:35:51.860874 | orchestrator | 2026-03-25 02:35:51.860900 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2026-03-25 02:35:51.860920 | orchestrator | Wednesday 25 March 2026 02:35:50 +0000 (0:00:00.909) 0:05:11.113 ******* 2026-03-25 02:35:51.860948 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:36:01.471670 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:36:01.471776 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:36:01.471788 | orchestrator | 2026-03-25 02:36:01.471797 | orchestrator | TASK [include_role : skyline] ************************************************** 2026-03-25 02:36:01.471804 | orchestrator | Wednesday 25 March 2026 02:35:51 +0000 (0:00:01.570) 0:05:12.684 ******* 2026-03-25 02:36:01.471812 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-25 02:36:01.471820 | orchestrator | 2026-03-25 02:36:01.471827 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2026-03-25 02:36:01.471833 | orchestrator | Wednesday 25 March 2026 02:35:53 +0000 (0:00:01.688) 0:05:14.372 ******* 2026-03-25 02:36:01.471857 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': 
{'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-25 02:36:01.471870 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-25 02:36:01.471877 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-25 02:36:01.471902 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-25 02:36:01.471936 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': 
'9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-25 02:36:01.471944 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-25 02:36:01.471950 | orchestrator | 2026-03-25 02:36:01.471957 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2026-03-25 02:36:01.471965 | orchestrator | Wednesday 25 March 2026 02:36:00 +0000 (0:00:06.769) 0:05:21.141 ******* 2026-03-25 02:36:01.471972 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-03-25 02:36:01.471979 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-03-25 02:36:01.471996 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:36:07.600162 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-03-25 02:36:07.600272 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-03-25 02:36:07.600294 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:36:07.600313 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-03-25 02:36:07.600332 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-03-25 02:36:07.600378 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:36:07.600393 | orchestrator | 2026-03-25 02:36:07.600403 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2026-03-25 02:36:07.600413 | orchestrator | Wednesday 25 March 2026 02:36:01 +0000 (0:00:01.153) 0:05:22.295 ******* 2026-03-25 02:36:07.600439 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': 
False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-25 02:36:07.600451 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-25 02:36:07.600462 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-25 02:36:07.600533 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-25 02:36:07.600545 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:36:07.600554 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-25 02:36:07.600563 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-25 02:36:07.600571 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-25 02:36:07.600580 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-25 
02:36:07.600589 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:36:07.600598 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-25 02:36:07.600606 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-25 02:36:07.600615 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-25 02:36:07.600624 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-25 02:36:07.600634 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:36:07.600653 | orchestrator | 2026-03-25 02:36:07.600685 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2026-03-25 02:36:07.600703 | orchestrator | Wednesday 25 March 2026 02:36:02 +0000 (0:00:01.018) 0:05:23.314 ******* 2026-03-25 02:36:07.600722 | orchestrator | changed: [testbed-node-0] 2026-03-25 02:36:07.600738 | orchestrator | changed: [testbed-node-1] 2026-03-25 02:36:07.600753 | orchestrator | changed: [testbed-node-2] 2026-03-25 02:36:07.600768 | orchestrator | 2026-03-25 02:36:07.600783 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2026-03-25 02:36:07.600797 | orchestrator | Wednesday 25 March 2026 02:36:03 +0000 (0:00:01.289) 0:05:24.603 ******* 2026-03-25 02:36:07.600811 | orchestrator | 
changed: [testbed-node-0] 2026-03-25 02:36:07.600825 | orchestrator | changed: [testbed-node-1] 2026-03-25 02:36:07.600840 | orchestrator | changed: [testbed-node-2] 2026-03-25 02:36:07.600853 | orchestrator | 2026-03-25 02:36:07.600867 | orchestrator | TASK [include_role : swift] **************************************************** 2026-03-25 02:36:07.600881 | orchestrator | Wednesday 25 March 2026 02:36:06 +0000 (0:00:02.376) 0:05:26.980 ******* 2026-03-25 02:36:07.600895 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:36:07.600910 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:36:07.600926 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:36:07.600938 | orchestrator | 2026-03-25 02:36:07.600948 | orchestrator | TASK [include_role : tacker] *************************************************** 2026-03-25 02:36:07.600958 | orchestrator | Wednesday 25 March 2026 02:36:06 +0000 (0:00:00.727) 0:05:27.707 ******* 2026-03-25 02:36:07.600968 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:36:07.600978 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:36:07.600988 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:36:07.600998 | orchestrator | 2026-03-25 02:36:07.601008 | orchestrator | TASK [include_role : trove] **************************************************** 2026-03-25 02:36:07.601018 | orchestrator | Wednesday 25 March 2026 02:36:07 +0000 (0:00:00.347) 0:05:28.055 ******* 2026-03-25 02:36:07.601026 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:36:07.601036 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:36:07.601061 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:36:54.951905 | orchestrator | 2026-03-25 02:36:54.952025 | orchestrator | TASK [include_role : venus] **************************************************** 2026-03-25 02:36:54.952043 | orchestrator | Wednesday 25 March 2026 02:36:07 +0000 (0:00:00.372) 0:05:28.427 ******* 2026-03-25 02:36:54.952055 | orchestrator | 
skipping: [testbed-node-0] 2026-03-25 02:36:54.952067 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:36:54.952079 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:36:54.952090 | orchestrator | 2026-03-25 02:36:54.952101 | orchestrator | TASK [include_role : watcher] ************************************************** 2026-03-25 02:36:54.952112 | orchestrator | Wednesday 25 March 2026 02:36:07 +0000 (0:00:00.372) 0:05:28.799 ******* 2026-03-25 02:36:54.952123 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:36:54.952134 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:36:54.952145 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:36:54.952155 | orchestrator | 2026-03-25 02:36:54.952167 | orchestrator | TASK [include_role : zun] ****************************************************** 2026-03-25 02:36:54.952196 | orchestrator | Wednesday 25 March 2026 02:36:08 +0000 (0:00:00.802) 0:05:29.602 ******* 2026-03-25 02:36:54.952208 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:36:54.952219 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:36:54.952230 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:36:54.952241 | orchestrator | 2026-03-25 02:36:54.952251 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2026-03-25 02:36:54.952262 | orchestrator | Wednesday 25 March 2026 02:36:09 +0000 (0:00:00.620) 0:05:30.222 ******* 2026-03-25 02:36:54.952273 | orchestrator | ok: [testbed-node-0] 2026-03-25 02:36:54.952285 | orchestrator | ok: [testbed-node-1] 2026-03-25 02:36:54.952295 | orchestrator | ok: [testbed-node-2] 2026-03-25 02:36:54.952306 | orchestrator | 2026-03-25 02:36:54.952317 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2026-03-25 02:36:54.952350 | orchestrator | Wednesday 25 March 2026 02:36:10 +0000 (0:00:00.726) 0:05:30.949 ******* 2026-03-25 02:36:54.952361 | orchestrator | ok: [testbed-node-0] 
2026-03-25 02:36:54.952372 | orchestrator | ok: [testbed-node-1] 2026-03-25 02:36:54.952383 | orchestrator | ok: [testbed-node-2] 2026-03-25 02:36:54.952393 | orchestrator | 2026-03-25 02:36:54.952404 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2026-03-25 02:36:54.952415 | orchestrator | Wednesday 25 March 2026 02:36:10 +0000 (0:00:00.773) 0:05:31.722 ******* 2026-03-25 02:36:54.952428 | orchestrator | ok: [testbed-node-0] 2026-03-25 02:36:54.952440 | orchestrator | ok: [testbed-node-1] 2026-03-25 02:36:54.952452 | orchestrator | ok: [testbed-node-2] 2026-03-25 02:36:54.952548 | orchestrator | 2026-03-25 02:36:54.952573 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2026-03-25 02:36:54.952595 | orchestrator | Wednesday 25 March 2026 02:36:11 +0000 (0:00:00.989) 0:05:32.712 ******* 2026-03-25 02:36:54.952616 | orchestrator | ok: [testbed-node-0] 2026-03-25 02:36:54.952629 | orchestrator | ok: [testbed-node-1] 2026-03-25 02:36:54.952641 | orchestrator | ok: [testbed-node-2] 2026-03-25 02:36:54.952653 | orchestrator | 2026-03-25 02:36:54.952666 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2026-03-25 02:36:54.952681 | orchestrator | Wednesday 25 March 2026 02:36:12 +0000 (0:00:00.939) 0:05:33.651 ******* 2026-03-25 02:36:54.952700 | orchestrator | ok: [testbed-node-0] 2026-03-25 02:36:54.952716 | orchestrator | ok: [testbed-node-1] 2026-03-25 02:36:54.952731 | orchestrator | ok: [testbed-node-2] 2026-03-25 02:36:54.952748 | orchestrator | 2026-03-25 02:36:54.952764 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] **************** 2026-03-25 02:36:54.952779 | orchestrator | Wednesday 25 March 2026 02:36:13 +0000 (0:00:00.847) 0:05:34.498 ******* 2026-03-25 02:36:54.952798 | orchestrator | changed: [testbed-node-0] 2026-03-25 02:36:54.952814 | orchestrator | changed: [testbed-node-1] 
2026-03-25 02:36:54.952830 | orchestrator | changed: [testbed-node-2]
2026-03-25 02:36:54.952846 | orchestrator |
2026-03-25 02:36:54.952862 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] **************
2026-03-25 02:36:54.952878 | orchestrator | Wednesday 25 March 2026 02:36:23 +0000 (0:00:09.803) 0:05:44.302 *******
2026-03-25 02:36:54.952893 | orchestrator | ok: [testbed-node-1]
2026-03-25 02:36:54.952910 | orchestrator | ok: [testbed-node-0]
2026-03-25 02:36:54.952927 | orchestrator | ok: [testbed-node-2]
2026-03-25 02:36:54.952942 | orchestrator |
2026-03-25 02:36:54.952958 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] ***************
2026-03-25 02:36:54.952975 | orchestrator | Wednesday 25 March 2026 02:36:24 +0000 (0:00:01.283) 0:05:45.585 *******
2026-03-25 02:36:54.952991 | orchestrator | changed: [testbed-node-0]
2026-03-25 02:36:54.953010 | orchestrator | changed: [testbed-node-2]
2026-03-25 02:36:54.953028 | orchestrator | changed: [testbed-node-1]
2026-03-25 02:36:54.953046 | orchestrator |
2026-03-25 02:36:54.953066 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] *************
2026-03-25 02:36:54.953085 | orchestrator | Wednesday 25 March 2026 02:36:35 +0000 (0:00:11.227) 0:05:56.813 *******
2026-03-25 02:36:54.953102 | orchestrator | ok: [testbed-node-0]
2026-03-25 02:36:54.953119 | orchestrator | ok: [testbed-node-2]
2026-03-25 02:36:54.953131 | orchestrator | ok: [testbed-node-1]
2026-03-25 02:36:54.953140 | orchestrator |
2026-03-25 02:36:54.953149 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] *************
2026-03-25 02:36:54.953159 | orchestrator | Wednesday 25 March 2026 02:36:40 +0000 (0:00:04.685) 0:06:01.498 *******
2026-03-25 02:36:54.953169 | orchestrator | changed: [testbed-node-0]
2026-03-25 02:36:54.953178 | orchestrator | changed: [testbed-node-1]
2026-03-25 02:36:54.953187 | orchestrator | changed: [testbed-node-2]
2026-03-25 02:36:54.953204 | orchestrator |
2026-03-25 02:36:54.953220 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] *****************
2026-03-25 02:36:54.953236 | orchestrator | Wednesday 25 March 2026 02:36:45 +0000 (0:00:04.555) 0:06:06.053 *******
2026-03-25 02:36:54.953271 | orchestrator | skipping: [testbed-node-0]
2026-03-25 02:36:54.953288 | orchestrator | skipping: [testbed-node-1]
2026-03-25 02:36:54.953305 | orchestrator | skipping: [testbed-node-2]
2026-03-25 02:36:54.953319 | orchestrator |
2026-03-25 02:36:54.953335 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] ****************
2026-03-25 02:36:54.953352 | orchestrator | Wednesday 25 March 2026 02:36:45 +0000 (0:00:00.745) 0:06:06.799 *******
2026-03-25 02:36:54.953369 | orchestrator | skipping: [testbed-node-0]
2026-03-25 02:36:54.953386 | orchestrator | skipping: [testbed-node-1]
2026-03-25 02:36:54.953401 | orchestrator | skipping: [testbed-node-2]
2026-03-25 02:36:54.953411 | orchestrator |
2026-03-25 02:36:54.953445 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] **************
2026-03-25 02:36:54.953456 | orchestrator | Wednesday 25 March 2026 02:36:46 +0000 (0:00:00.380) 0:06:07.179 *******
2026-03-25 02:36:54.953466 | orchestrator | skipping: [testbed-node-0]
2026-03-25 02:36:54.953507 | orchestrator | skipping: [testbed-node-1]
2026-03-25 02:36:54.953517 | orchestrator | skipping: [testbed-node-2]
2026-03-25 02:36:54.953527 | orchestrator |
2026-03-25 02:36:54.953536 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] ****************
2026-03-25 02:36:54.953546 | orchestrator | Wednesday 25 March 2026 02:36:46 +0000 (0:00:00.408) 0:06:07.588 *******
2026-03-25 02:36:54.953556 | orchestrator | skipping: [testbed-node-0]
2026-03-25 02:36:54.953565 | orchestrator | skipping: [testbed-node-1]
2026-03-25 02:36:54.953575 | orchestrator | skipping: [testbed-node-2]
2026-03-25 02:36:54.953584 | orchestrator |
2026-03-25 02:36:54.953594 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] ***************
2026-03-25 02:36:54.953603 | orchestrator | Wednesday 25 March 2026 02:36:47 +0000 (0:00:00.409) 0:06:07.998 *******
2026-03-25 02:36:54.953613 | orchestrator | skipping: [testbed-node-0]
2026-03-25 02:36:54.953635 | orchestrator | skipping: [testbed-node-1]
2026-03-25 02:36:54.953645 | orchestrator | skipping: [testbed-node-2]
2026-03-25 02:36:54.953654 | orchestrator |
2026-03-25 02:36:54.953667 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] *************
2026-03-25 02:36:54.953684 | orchestrator | Wednesday 25 March 2026 02:36:47 +0000 (0:00:00.732) 0:06:08.731 *******
2026-03-25 02:36:54.953700 | orchestrator | skipping: [testbed-node-0]
2026-03-25 02:36:54.953716 | orchestrator | skipping: [testbed-node-1]
2026-03-25 02:36:54.953731 | orchestrator | skipping: [testbed-node-2]
2026-03-25 02:36:54.953748 | orchestrator |
2026-03-25 02:36:54.953765 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] *************
2026-03-25 02:36:54.953782 | orchestrator | Wednesday 25 March 2026 02:36:48 +0000 (0:00:00.402) 0:06:09.133 *******
2026-03-25 02:36:54.953799 | orchestrator | ok: [testbed-node-1]
2026-03-25 02:36:54.953834 | orchestrator | ok: [testbed-node-2]
2026-03-25 02:36:54.953852 | orchestrator | ok: [testbed-node-0]
2026-03-25 02:36:54.953868 | orchestrator |
2026-03-25 02:36:54.953887 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************
2026-03-25 02:36:54.953905 | orchestrator | Wednesday 25 March 2026 02:36:53 +0000 (0:00:04.796) 0:06:13.930 *******
2026-03-25 02:36:54.953917 | orchestrator | ok: [testbed-node-0]
2026-03-25 02:36:54.953927 | orchestrator | ok: [testbed-node-1]
2026-03-25 02:36:54.953936 | orchestrator | ok: [testbed-node-2]
2026-03-25 02:36:54.953946 | orchestrator |
2026-03-25 02:36:54.953961 | orchestrator | PLAY RECAP *********************************************************************
2026-03-25 02:36:54.953979 | orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2026-03-25 02:36:54.953997 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2026-03-25 02:36:54.954105 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2026-03-25 02:36:54.954126 | orchestrator |
2026-03-25 02:36:54.954148 | orchestrator |
2026-03-25 02:36:54.954164 | orchestrator | TASKS RECAP ********************************************************************
2026-03-25 02:36:54.954180 | orchestrator | Wednesday 25 March 2026 02:36:53 +0000 (0:00:00.843) 0:06:14.774 *******
2026-03-25 02:36:54.954196 | orchestrator | ===============================================================================
2026-03-25 02:36:54.954212 | orchestrator | loadbalancer : Start backup proxysql container ------------------------- 11.23s
2026-03-25 02:36:54.954227 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 9.80s
2026-03-25 02:36:54.954244 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 6.77s
2026-03-25 02:36:54.954260 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 5.53s
2026-03-25 02:36:54.954277 | orchestrator | loadbalancer : Wait for haproxy to listen on VIP ------------------------ 4.80s
2026-03-25 02:36:54.954292 | orchestrator | loadbalancer : Wait for backup proxysql to start ------------------------ 4.69s
2026-03-25 02:36:54.954309 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 4.60s
2026-03-25 02:36:54.954325 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 4.56s
2026-03-25 02:36:54.954341 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 4.43s
2026-03-25 02:36:54.954355 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 4.36s
2026-03-25 02:36:54.954364 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 4.28s
2026-03-25 02:36:54.954374 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 4.23s
2026-03-25 02:36:54.954383 | orchestrator | haproxy-config : Configuring firewall for glance ------------------------ 4.03s
2026-03-25 02:36:54.954393 | orchestrator | haproxy-config : Copying over octavia haproxy config -------------------- 3.97s
2026-03-25 02:36:54.954402 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 3.91s
2026-03-25 02:36:54.954411 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 3.77s
2026-03-25 02:36:54.954421 | orchestrator | haproxy-config : Copying over manila haproxy config --------------------- 3.66s
2026-03-25 02:36:54.954431 | orchestrator | haproxy-config : Copying over placement haproxy config ------------------ 3.61s
2026-03-25 02:36:54.954440 | orchestrator | haproxy-config : Copying over cinder haproxy config --------------------- 3.59s
2026-03-25 02:36:54.954450 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 3.54s
2026-03-25 02:36:57.713602 | orchestrator | 2026-03-25 02:36:57 | INFO  | Task 3764672d-9517-4c05-97ee-a18e17844a98 (opensearch) was prepared for execution.
2026-03-25 02:36:57.713710 | orchestrator | 2026-03-25 02:36:57 | INFO  | It takes a moment until task 3764672d-9517-4c05-97ee-a18e17844a98 (opensearch) has been started and output is visible here.
2026-03-25 02:37:09.293601 | orchestrator |
2026-03-25 02:37:09.293685 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-25 02:37:09.293691 | orchestrator |
2026-03-25 02:37:09.293696 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-25 02:37:09.293701 | orchestrator | Wednesday 25 March 2026 02:37:02 +0000 (0:00:00.277) 0:00:00.277 *******
2026-03-25 02:37:09.293705 | orchestrator | ok: [testbed-node-0]
2026-03-25 02:37:09.293710 | orchestrator | ok: [testbed-node-1]
2026-03-25 02:37:09.293714 | orchestrator | ok: [testbed-node-2]
2026-03-25 02:37:09.293718 | orchestrator |
2026-03-25 02:37:09.293721 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-25 02:37:09.293728 | orchestrator | Wednesday 25 March 2026 02:37:02 +0000 (0:00:00.337) 0:00:00.614 *******
2026-03-25 02:37:09.293749 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True)
2026-03-25 02:37:09.293757 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True)
2026-03-25 02:37:09.293763 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True)
2026-03-25 02:37:09.293769 | orchestrator |
2026-03-25 02:37:09.293775 | orchestrator | PLAY [Apply role opensearch] ***************************************************
2026-03-25 02:37:09.293801 | orchestrator |
2026-03-25 02:37:09.293808 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2026-03-25 02:37:09.293814 | orchestrator | Wednesday 25 March 2026 02:37:03 +0000 (0:00:00.499) 0:00:01.113 *******
2026-03-25 02:37:09.293820 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-25 02:37:09.293826 | orchestrator |
2026-03-25 02:37:09.293833 | orchestrator | TASK [opensearch : Setting sysctl values]
************************************** 2026-03-25 02:37:09.293839 | orchestrator | Wednesday 25 March 2026 02:37:03 +0000 (0:00:00.555) 0:00:01.669 ******* 2026-03-25 02:37:09.293846 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-25 02:37:09.293852 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-25 02:37:09.293860 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-25 02:37:09.293866 | orchestrator | 2026-03-25 02:37:09.293873 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2026-03-25 02:37:09.293880 | orchestrator | Wednesday 25 March 2026 02:37:04 +0000 (0:00:00.682) 0:00:02.352 ******* 2026-03-25 02:37:09.293887 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-25 02:37:09.293894 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': 
'-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-25 02:37:09.293910 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-25 02:37:09.293924 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-25 02:37:09.293941 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-25 02:37:09.293951 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 
'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-25 02:37:09.293957 | orchestrator | 2026-03-25 02:37:09.293963 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-03-25 02:37:09.293969 | orchestrator | Wednesday 25 March 2026 02:37:06 +0000 (0:00:01.736) 0:00:04.089 ******* 2026-03-25 02:37:09.293975 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-25 02:37:09.293981 | orchestrator | 2026-03-25 02:37:09.293987 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2026-03-25 02:37:09.293992 | orchestrator | Wednesday 25 March 2026 02:37:06 +0000 (0:00:00.564) 0:00:04.654 ******* 2026-03-25 02:37:09.294009 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-25 02:37:10.165299 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-25 02:37:10.165391 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-25 02:37:10.165405 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-25 02:37:10.165416 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-25 02:37:10.165528 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-25 02:37:10.165541 | orchestrator | 2026-03-25 02:37:10.165551 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2026-03-25 02:37:10.165561 | orchestrator | Wednesday 25 March 2026 02:37:09 +0000 (0:00:02.400) 0:00:07.054 ******* 2026-03-25 02:37:10.165571 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-25 02:37:10.165580 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-25 02:37:10.165589 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:37:10.165599 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 
'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-25 02:37:10.165628 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-25 02:37:11.335827 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:37:11.335929 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-25 02:37:11.335950 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-25 02:37:11.335978 | 
orchestrator | skipping: [testbed-node-2] 2026-03-25 02:37:11.335990 | orchestrator | 2026-03-25 02:37:11.336002 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2026-03-25 02:37:11.336014 | orchestrator | Wednesday 25 March 2026 02:37:10 +0000 (0:00:00.873) 0:00:07.928 ******* 2026-03-25 02:37:11.336052 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-25 02:37:11.336078 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-25 02:37:11.336100 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:37:11.336107 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-25 02:37:11.336115 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 
'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-25 02:37:11.336122 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:37:11.336134 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-25 02:37:11.336145 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-25 02:37:11.336152 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:37:11.336158 | orchestrator | 2026-03-25 02:37:11.336165 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2026-03-25 02:37:11.336177 | orchestrator | Wednesday 25 March 2026 02:37:11 +0000 (0:00:01.163) 0:00:09.091 ******* 2026-03-25 02:37:19.565942 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-25 02:37:19.566229 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g 
-Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-25 02:37:19.566261 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-25 02:37:19.566337 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-25 02:37:19.566391 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-25 02:37:19.566416 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 
'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-25 02:37:19.566454 | orchestrator | 2026-03-25 02:37:19.566522 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2026-03-25 02:37:19.566545 | orchestrator | Wednesday 25 March 2026 02:37:13 +0000 (0:00:02.292) 0:00:11.383 ******* 2026-03-25 02:37:19.566565 | orchestrator | changed: [testbed-node-0] 2026-03-25 02:37:19.566582 | orchestrator | changed: [testbed-node-1] 2026-03-25 02:37:19.566594 | orchestrator | changed: [testbed-node-2] 2026-03-25 02:37:19.566604 | orchestrator | 2026-03-25 02:37:19.566615 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2026-03-25 02:37:19.566626 | orchestrator | Wednesday 25 March 2026 02:37:16 +0000 (0:00:02.481) 0:00:13.865 ******* 2026-03-25 02:37:19.566637 | orchestrator | changed: [testbed-node-0] 2026-03-25 02:37:19.566648 | orchestrator | changed: [testbed-node-1] 2026-03-25 02:37:19.566658 | orchestrator | changed: [testbed-node-2] 2026-03-25 02:37:19.566669 | orchestrator | 2026-03-25 02:37:19.566680 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2026-03-25 
02:37:19.566690 | orchestrator | Wednesday 25 March 2026 02:37:17 +0000 (0:00:01.832) 0:00:15.697 ******* 2026-03-25 02:37:19.566702 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-25 02:37:19.566723 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-25 02:37:19.566745 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 
'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-25 02:39:55.106192 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-25 02:39:55.106316 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 
'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-25 02:39:55.106340 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 
'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-25 02:39:55.106349 | orchestrator | 2026-03-25 02:39:55.106357 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-03-25 02:39:55.106366 | orchestrator | Wednesday 25 March 2026 02:37:19 +0000 (0:00:01.628) 0:00:17.326 ******* 2026-03-25 02:39:55.106372 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:39:55.106380 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:39:55.106387 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:39:55.106393 | orchestrator | 2026-03-25 02:39:55.106401 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-03-25 02:39:55.106407 | orchestrator | Wednesday 25 March 2026 02:37:19 +0000 (0:00:00.318) 0:00:17.644 ******* 2026-03-25 02:39:55.106414 | orchestrator | 2026-03-25 02:39:55.106419 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-03-25 02:39:55.106425 | orchestrator | Wednesday 25 March 2026 02:37:19 +0000 (0:00:00.064) 0:00:17.709 ******* 2026-03-25 02:39:55.106431 | orchestrator | 2026-03-25 02:39:55.106438 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-03-25 02:39:55.106452 | orchestrator | Wednesday 25 March 2026 02:37:20 +0000 (0:00:00.074) 0:00:17.784 ******* 2026-03-25 02:39:55.106458 | orchestrator | 2026-03-25 02:39:55.106465 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2026-03-25 02:39:55.106525 | orchestrator | Wednesday 25 March 2026 02:37:20 +0000 (0:00:00.081) 0:00:17.865 ******* 2026-03-25 02:39:55.106533 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:39:55.106540 | orchestrator | 2026-03-25 02:39:55.106546 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2026-03-25 02:39:55.106553 | 
orchestrator | Wednesday 25 March 2026 02:37:20 +0000 (0:00:00.198) 0:00:18.064 ******* 2026-03-25 02:39:55.106559 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:39:55.106566 | orchestrator | 2026-03-25 02:39:55.106573 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2026-03-25 02:39:55.106579 | orchestrator | Wednesday 25 March 2026 02:37:20 +0000 (0:00:00.686) 0:00:18.750 ******* 2026-03-25 02:39:55.106586 | orchestrator | changed: [testbed-node-0] 2026-03-25 02:39:55.106593 | orchestrator | changed: [testbed-node-1] 2026-03-25 02:39:55.106599 | orchestrator | changed: [testbed-node-2] 2026-03-25 02:39:55.106606 | orchestrator | 2026-03-25 02:39:55.106613 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2026-03-25 02:39:55.106619 | orchestrator | Wednesday 25 March 2026 02:38:28 +0000 (0:01:07.583) 0:01:26.334 ******* 2026-03-25 02:39:55.106626 | orchestrator | changed: [testbed-node-0] 2026-03-25 02:39:55.106633 | orchestrator | changed: [testbed-node-1] 2026-03-25 02:39:55.106639 | orchestrator | changed: [testbed-node-2] 2026-03-25 02:39:55.106646 | orchestrator | 2026-03-25 02:39:55.106653 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-03-25 02:39:55.106659 | orchestrator | Wednesday 25 March 2026 02:39:44 +0000 (0:01:16.011) 0:02:42.346 ******* 2026-03-25 02:39:55.106667 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-25 02:39:55.106673 | orchestrator | 2026-03-25 02:39:55.106680 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2026-03-25 02:39:55.106687 | orchestrator | Wednesday 25 March 2026 02:39:45 +0000 (0:00:00.562) 0:02:42.909 ******* 2026-03-25 02:39:55.106694 | orchestrator | ok: [testbed-node-0] 2026-03-25 02:39:55.106701 | orchestrator | 2026-03-25 
02:39:55.106708 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2026-03-25 02:39:55.106714 | orchestrator | Wednesday 25 March 2026 02:39:47 +0000 (0:00:02.775) 0:02:45.685 ******* 2026-03-25 02:39:55.106721 | orchestrator | ok: [testbed-node-0] 2026-03-25 02:39:55.106728 | orchestrator | 2026-03-25 02:39:55.106735 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2026-03-25 02:39:55.106741 | orchestrator | Wednesday 25 March 2026 02:39:50 +0000 (0:00:02.107) 0:02:47.792 ******* 2026-03-25 02:39:55.106748 | orchestrator | changed: [testbed-node-0] 2026-03-25 02:39:55.106754 | orchestrator | 2026-03-25 02:39:55.106761 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2026-03-25 02:39:55.106768 | orchestrator | Wednesday 25 March 2026 02:39:52 +0000 (0:00:02.566) 0:02:50.359 ******* 2026-03-25 02:39:55.106775 | orchestrator | changed: [testbed-node-0] 2026-03-25 02:39:55.106782 | orchestrator | 2026-03-25 02:39:55.106789 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-25 02:39:55.106797 | orchestrator | testbed-node-0 : ok=18  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-25 02:39:55.106805 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-25 02:39:55.106816 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-25 02:39:55.106824 | orchestrator | 2026-03-25 02:39:55.106830 | orchestrator | 2026-03-25 02:39:55.106843 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-25 02:39:55.106850 | orchestrator | Wednesday 25 March 2026 02:39:55 +0000 (0:00:02.490) 0:02:52.850 ******* 2026-03-25 02:39:55.106857 | orchestrator | 
=============================================================================== 2026-03-25 02:39:55.106864 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 76.01s 2026-03-25 02:39:55.106871 | orchestrator | opensearch : Restart opensearch container ------------------------------ 67.58s 2026-03-25 02:39:55.106878 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.78s 2026-03-25 02:39:55.106884 | orchestrator | opensearch : Create new log retention policy ---------------------------- 2.57s 2026-03-25 02:39:55.106891 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.49s 2026-03-25 02:39:55.106898 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 2.48s 2026-03-25 02:39:55.106905 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 2.40s 2026-03-25 02:39:55.106912 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.29s 2026-03-25 02:39:55.106918 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.11s 2026-03-25 02:39:55.106925 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 1.83s 2026-03-25 02:39:55.106932 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.74s 2026-03-25 02:39:55.106939 | orchestrator | opensearch : Check opensearch containers -------------------------------- 1.63s 2026-03-25 02:39:55.106946 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 1.16s 2026-03-25 02:39:55.106953 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 0.87s 2026-03-25 02:39:55.106959 | orchestrator | opensearch : Perform a flush -------------------------------------------- 0.69s 2026-03-25 02:39:55.106966 | orchestrator | 
opensearch : Setting sysctl values -------------------------------------- 0.68s 2026-03-25 02:39:55.106978 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.56s 2026-03-25 02:39:55.520345 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.56s 2026-03-25 02:39:55.520434 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.56s 2026-03-25 02:39:55.520445 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.50s 2026-03-25 02:39:58.169093 | orchestrator | 2026-03-25 02:39:58 | INFO  | Task b2ca1f59-eac2-44e5-874b-7b18ea826ec4 (memcached) was prepared for execution. 2026-03-25 02:39:58.169218 | orchestrator | 2026-03-25 02:39:58 | INFO  | It takes a moment until task b2ca1f59-eac2-44e5-874b-7b18ea826ec4 (memcached) has been started and output is visible here. 2026-03-25 02:40:15.969842 | orchestrator | 2026-03-25 02:40:15.969954 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-25 02:40:15.969963 | orchestrator | 2026-03-25 02:40:15.969968 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-25 02:40:15.969973 | orchestrator | Wednesday 25 March 2026 02:40:02 +0000 (0:00:00.290) 0:00:00.290 ******* 2026-03-25 02:40:15.970002 | orchestrator | ok: [testbed-node-0] 2026-03-25 02:40:15.970008 | orchestrator | ok: [testbed-node-1] 2026-03-25 02:40:15.970013 | orchestrator | ok: [testbed-node-2] 2026-03-25 02:40:15.970044 | orchestrator | 2026-03-25 02:40:15.970049 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-25 02:40:15.970054 | orchestrator | Wednesday 25 March 2026 02:40:03 +0000 (0:00:00.319) 0:00:00.609 ******* 2026-03-25 02:40:15.970062 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True) 2026-03-25 02:40:15.970072 | 
orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True) 2026-03-25 02:40:15.970081 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True) 2026-03-25 02:40:15.970087 | orchestrator | 2026-03-25 02:40:15.970094 | orchestrator | PLAY [Apply role memcached] **************************************************** 2026-03-25 02:40:15.970122 | orchestrator | 2026-03-25 02:40:15.970129 | orchestrator | TASK [memcached : include_tasks] *********************************************** 2026-03-25 02:40:15.970135 | orchestrator | Wednesday 25 March 2026 02:40:03 +0000 (0:00:00.460) 0:00:01.069 ******* 2026-03-25 02:40:15.970142 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-25 02:40:15.970150 | orchestrator | 2026-03-25 02:40:15.970157 | orchestrator | TASK [memcached : Ensuring config directories exist] *************************** 2026-03-25 02:40:15.970163 | orchestrator | Wednesday 25 March 2026 02:40:04 +0000 (0:00:00.550) 0:00:01.619 ******* 2026-03-25 02:40:15.970170 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2026-03-25 02:40:15.970177 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2026-03-25 02:40:15.970183 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2026-03-25 02:40:15.970189 | orchestrator | 2026-03-25 02:40:15.970195 | orchestrator | TASK [memcached : Copying over config.json files for services] ***************** 2026-03-25 02:40:15.970202 | orchestrator | Wednesday 25 March 2026 02:40:04 +0000 (0:00:00.649) 0:00:02.269 ******* 2026-03-25 02:40:15.970209 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2026-03-25 02:40:15.970216 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2026-03-25 02:40:15.970222 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2026-03-25 02:40:15.970229 | orchestrator | 2026-03-25 02:40:15.970234 | orchestrator | TASK [memcached : Check 
memcached container] *********************************** 2026-03-25 02:40:15.970240 | orchestrator | Wednesday 25 March 2026 02:40:06 +0000 (0:00:01.818) 0:00:04.088 ******* 2026-03-25 02:40:15.970264 | orchestrator | changed: [testbed-node-0] 2026-03-25 02:40:15.970273 | orchestrator | changed: [testbed-node-1] 2026-03-25 02:40:15.970279 | orchestrator | changed: [testbed-node-2] 2026-03-25 02:40:15.970285 | orchestrator | 2026-03-25 02:40:15.970290 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] ********************** 2026-03-25 02:40:15.970297 | orchestrator | Wednesday 25 March 2026 02:40:08 +0000 (0:00:01.566) 0:00:05.655 ******* 2026-03-25 02:40:15.970303 | orchestrator | changed: [testbed-node-0] 2026-03-25 02:40:15.970309 | orchestrator | changed: [testbed-node-2] 2026-03-25 02:40:15.970315 | orchestrator | changed: [testbed-node-1] 2026-03-25 02:40:15.970320 | orchestrator | 2026-03-25 02:40:15.970325 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-25 02:40:15.970331 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-25 02:40:15.970338 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-25 02:40:15.970343 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-25 02:40:15.970349 | orchestrator | 2026-03-25 02:40:15.970354 | orchestrator | 2026-03-25 02:40:15.970360 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-25 02:40:15.970366 | orchestrator | Wednesday 25 March 2026 02:40:15 +0000 (0:00:07.089) 0:00:12.744 ******* 2026-03-25 02:40:15.970372 | orchestrator | =============================================================================== 2026-03-25 02:40:15.970377 | orchestrator | memcached : Restart memcached container 
--------------------------------- 7.09s 2026-03-25 02:40:15.970384 | orchestrator | memcached : Copying over config.json files for services ----------------- 1.82s 2026-03-25 02:40:15.970389 | orchestrator | memcached : Check memcached container ----------------------------------- 1.57s 2026-03-25 02:40:15.970395 | orchestrator | memcached : Ensuring config directories exist --------------------------- 0.65s 2026-03-25 02:40:15.970401 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.55s 2026-03-25 02:40:15.970407 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.46s 2026-03-25 02:40:15.970414 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.32s 2026-03-25 02:40:18.830409 | orchestrator | 2026-03-25 02:40:18 | INFO  | Task 4353fa02-561b-47da-b400-b273b24e70c5 (redis) was prepared for execution. 2026-03-25 02:40:18.830586 | orchestrator | 2026-03-25 02:40:18 | INFO  | It takes a moment until task 4353fa02-561b-47da-b400-b273b24e70c5 (redis) has been started and output is visible here. 
2026-03-25 02:40:28.468140 | orchestrator | 2026-03-25 02:40:28.468260 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-25 02:40:28.468298 | orchestrator | 2026-03-25 02:40:28.468311 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-25 02:40:28.468322 | orchestrator | Wednesday 25 March 2026 02:40:23 +0000 (0:00:00.273) 0:00:00.273 ******* 2026-03-25 02:40:28.468333 | orchestrator | ok: [testbed-node-0] 2026-03-25 02:40:28.468345 | orchestrator | ok: [testbed-node-1] 2026-03-25 02:40:28.468356 | orchestrator | ok: [testbed-node-2] 2026-03-25 02:40:28.468367 | orchestrator | 2026-03-25 02:40:28.468378 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-25 02:40:28.468389 | orchestrator | Wednesday 25 March 2026 02:40:23 +0000 (0:00:00.330) 0:00:00.604 ******* 2026-03-25 02:40:28.468400 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True) 2026-03-25 02:40:28.468411 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True) 2026-03-25 02:40:28.468422 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True) 2026-03-25 02:40:28.468433 | orchestrator | 2026-03-25 02:40:28.468443 | orchestrator | PLAY [Apply role redis] ******************************************************** 2026-03-25 02:40:28.468454 | orchestrator | 2026-03-25 02:40:28.468465 | orchestrator | TASK [redis : include_tasks] *************************************************** 2026-03-25 02:40:28.468526 | orchestrator | Wednesday 25 March 2026 02:40:24 +0000 (0:00:00.499) 0:00:01.103 ******* 2026-03-25 02:40:28.468538 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-25 02:40:28.468554 | orchestrator | 2026-03-25 02:40:28.468574 | orchestrator | TASK [redis : Ensuring config directories exist] ******************************* 2026-03-25 
02:40:28.468603 | orchestrator | Wednesday 25 March 2026 02:40:24 +0000 (0:00:00.547) 0:00:01.650 ******* 2026-03-25 02:40:28.468629 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-25 02:40:28.468654 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-25 02:40:28.468674 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-25 02:40:28.468729 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-25 02:40:28.468780 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-25 02:40:28.468805 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-25 02:40:28.468825 | orchestrator | 2026-03-25 02:40:28.468842 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2026-03-25 02:40:28.468855 | orchestrator | Wednesday 25 March 2026 02:40:25 +0000 (0:00:01.096) 0:00:02.747 ******* 2026-03-25 02:40:28.468867 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-25 02:40:28.468986 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-25 02:40:28.469010 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-25 02:40:28.469036 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-25 02:40:28.469058 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-25 02:40:32.463910 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': 
'/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-25 02:40:32.464007 | orchestrator | 2026-03-25 02:40:32.464020 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2026-03-25 02:40:32.464032 | orchestrator | Wednesday 25 March 2026 02:40:28 +0000 (0:00:02.468) 0:00:05.215 ******* 2026-03-25 02:40:32.464043 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-25 02:40:32.464078 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-25 
02:40:32.464095 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-25 02:40:32.464140 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-25 02:40:32.464157 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-25 02:40:32.464195 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-25 02:40:32.464210 | orchestrator | 2026-03-25 02:40:32.464225 | orchestrator | TASK [redis : Check redis containers] ****************************************** 2026-03-25 02:40:32.464239 | orchestrator | Wednesday 25 March 2026 02:40:30 +0000 (0:00:02.338) 0:00:07.554 ******* 2026-03-25 02:40:32.464255 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-25 02:40:32.464270 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': 
['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-25 02:40:32.464293 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-25 02:40:32.464319 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-25 02:40:32.464335 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-03-25 02:40:32.464360 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-03-25 02:40:43.916378 | orchestrator |
2026-03-25 02:40:43.916543 | orchestrator | TASK [redis : Flush handlers] **************************************************
2026-03-25 02:40:43.916558 | orchestrator | Wednesday 25 March 2026 02:40:32 +0000 (0:00:01.419) 0:00:08.974 *******
2026-03-25 02:40:43.916567 | orchestrator |
2026-03-25 02:40:43.916575 | orchestrator | TASK [redis : Flush handlers] **************************************************
2026-03-25 02:40:43.916583 | orchestrator | Wednesday 25 March 2026 02:40:32 +0000 (0:00:00.084) 0:00:09.058 *******
2026-03-25 02:40:43.916591 | orchestrator |
2026-03-25 02:40:43.916609 | orchestrator | TASK [redis : Flush handlers] **************************************************
2026-03-25 02:40:43.916618 | orchestrator | Wednesday 25 March 2026 02:40:32 +0000 (0:00:00.067) 0:00:09.126 *******
2026-03-25 02:40:43.916625 | orchestrator |
2026-03-25 02:40:43.916634 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ******************************
2026-03-25 02:40:43.916642 | orchestrator | Wednesday 25 March 2026 02:40:32 +0000 (0:00:00.089) 0:00:09.215 *******
2026-03-25 02:40:43.916650 | orchestrator | changed: [testbed-node-0]
2026-03-25 02:40:43.916659 | orchestrator | changed: [testbed-node-1]
2026-03-25 02:40:43.916666 | orchestrator | changed: [testbed-node-2]
2026-03-25 02:40:43.916674 | orchestrator |
2026-03-25 02:40:43.916682 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] *********************
2026-03-25 02:40:43.916690 | orchestrator | Wednesday 25 March 2026 02:40:35 +0000 (0:00:02.948) 0:00:12.163 *******
2026-03-25 02:40:43.916722 | orchestrator | changed: [testbed-node-0]
2026-03-25 02:40:43.916730 | orchestrator | changed: [testbed-node-1]
2026-03-25 02:40:43.916738 | orchestrator | changed: [testbed-node-2]
2026-03-25 02:40:43.916746 | orchestrator |
2026-03-25 02:40:43.916754 | orchestrator | PLAY RECAP *********************************************************************
2026-03-25 02:40:43.916763 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-25 02:40:43.916773 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-25 02:40:43.916794 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-25 02:40:43.916802 | orchestrator |
2026-03-25 02:40:43.916810 | orchestrator |
2026-03-25 02:40:43.916818 | orchestrator | TASKS RECAP ********************************************************************
2026-03-25 02:40:43.916828 | orchestrator | Wednesday 25 March 2026 02:40:43 +0000 (0:00:08.102) 0:00:20.266 *******
2026-03-25 02:40:43.916841 | orchestrator | ===============================================================================
2026-03-25 02:40:43.916854 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 8.10s
2026-03-25 02:40:43.916867 | orchestrator | redis : Restart redis container ----------------------------------------- 2.95s
2026-03-25 02:40:43.916882 | orchestrator | redis : Copying over default config.json files -------------------------- 2.47s
2026-03-25 02:40:43.916903 | orchestrator | redis : Copying over redis config files --------------------------------- 2.34s
2026-03-25 02:40:43.916916 | orchestrator | redis : Check redis containers ------------------------------------------ 1.42s
2026-03-25 02:40:43.916929 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.10s
2026-03-25 02:40:43.916943 | orchestrator | redis : include_tasks --------------------------------------------------- 0.55s
2026-03-25 02:40:43.916956 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.50s
2026-03-25 02:40:43.916969 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.33s
2026-03-25 02:40:43.916982 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.24s
2026-03-25 02:40:46.575723 | orchestrator | 2026-03-25 02:40:46 | INFO  | Task cc730cec-595f-4a60-b8fe-49b00b53582a (mariadb) was prepared for execution.
2026-03-25 02:40:46.575804 | orchestrator | 2026-03-25 02:40:46 | INFO  | It takes a moment until task cc730cec-595f-4a60-b8fe-49b00b53582a (mariadb) has been started and output is visible here.
2026-03-25 02:41:01.303933 | orchestrator |
2026-03-25 02:41:01.304031 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-25 02:41:01.304040 | orchestrator |
2026-03-25 02:41:01.304044 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-25 02:41:01.304049 | orchestrator | Wednesday 25 March 2026 02:40:51 +0000 (0:00:00.235) 0:00:00.235 *******
2026-03-25 02:41:01.304053 | orchestrator | ok: [testbed-node-0]
2026-03-25 02:41:01.304058 | orchestrator | ok: [testbed-node-1]
2026-03-25 02:41:01.304072 | orchestrator | ok: [testbed-node-2]
2026-03-25 02:41:01.304076 | orchestrator |
2026-03-25 02:41:01.304086 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-25 02:41:01.304091 | orchestrator | Wednesday 25 March 2026 02:40:51 +0000 (0:00:00.327) 0:00:00.562 *******
2026-03-25 02:41:01.304095 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True)
2026-03-25 02:41:01.304100 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True)
2026-03-25 02:41:01.304104 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True)
2026-03-25 02:41:01.304107 | orchestrator |
2026-03-25 02:41:01.304111 | orchestrator | PLAY [Apply role mariadb] ******************************************************
2026-03-25 02:41:01.304115 | orchestrator |
2026-03-25 02:41:01.304119 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] ***************************
2026-03-25 02:41:01.304138 | orchestrator | Wednesday 25 March 2026 02:40:52 +0000 (0:00:00.598) 0:00:01.161 *******
2026-03-25 02:41:01.304142 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-03-25 02:41:01.304146 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-03-25 02:41:01.304150 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-03-25 02:41:01.304154 | orchestrator |
2026-03-25 02:41:01.304158 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-03-25 02:41:01.304162 | orchestrator | Wednesday 25 March 2026 02:40:52 +0000 (0:00:00.410) 0:00:01.571 ******* 2026-03-25 02:41:01.304166 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-25 02:41:01.304171 | orchestrator | 2026-03-25 02:41:01.304175 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2026-03-25 02:41:01.304179 | orchestrator | Wednesday 25 March 2026 02:40:53 +0000 (0:00:00.564) 0:00:02.135 ******* 2026-03-25 02:41:01.304197 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', 
'']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-25 02:41:01.304215 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option 
clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-25 02:41:01.304228 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' 
server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-25 02:41:01.304232 | orchestrator |
2026-03-25 02:41:01.304236 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] **************
2026-03-25 02:41:01.304240 | orchestrator | Wednesday 25 March 2026 02:40:55 +0000 (0:00:02.774) 0:00:04.910 *******
2026-03-25 02:41:01.304244 | orchestrator | skipping: [testbed-node-1]
2026-03-25 02:41:01.304249 | orchestrator | changed: [testbed-node-0]
2026-03-25 02:41:01.304253 | orchestrator | skipping: [testbed-node-2]
2026-03-25 02:41:01.304256 | orchestrator |
2026-03-25 02:41:01.304260 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] ***************************
2026-03-25 02:41:01.304264 | orchestrator | Wednesday 25 March 2026 02:40:56 +0000 (0:00:00.690) 0:00:05.600 *******
2026-03-25 02:41:01.304267 | orchestrator | skipping: [testbed-node-1]
2026-03-25 02:41:01.304271 | orchestrator | skipping: [testbed-node-2]
2026-03-25 02:41:01.304275 | orchestrator | changed: [testbed-node-0]
2026-03-25 02:41:01.304278 | orchestrator |
2026-03-25 02:41:01.304282 | orchestrator | TASK [mariadb : Copying over config.json files for services] *******************
2026-03-25 02:41:01.304286 | orchestrator | Wednesday 25 March 2026 02:40:58 +0000 (0:00:01.469) 0:00:07.070 *******
2026-03-25 02:41:01.304294 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro',
'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-25 02:41:09.359826 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-25 02:41:09.359971 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 
'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-25 02:41:09.360034 | orchestrator | 2026-03-25 02:41:09.360059 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2026-03-25 02:41:09.360080 | orchestrator | Wednesday 25 March 2026 02:41:01 +0000 (0:00:03.231) 0:00:10.301 ******* 2026-03-25 02:41:09.360099 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:41:09.360117 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:41:09.360135 | orchestrator | changed: [testbed-node-0] 2026-03-25 02:41:09.360152 | orchestrator | 2026-03-25 02:41:09.360170 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2026-03-25 02:41:09.360211 | orchestrator | Wednesday 25 March 2026 02:41:02 +0000 (0:00:01.044) 0:00:11.346 ******* 2026-03-25 02:41:09.360232 | 
orchestrator | changed: [testbed-node-0] 2026-03-25 02:41:09.360251 | orchestrator | changed: [testbed-node-1] 2026-03-25 02:41:09.360270 | orchestrator | changed: [testbed-node-2] 2026-03-25 02:41:09.360288 | orchestrator | 2026-03-25 02:41:09.360310 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-03-25 02:41:09.360332 | orchestrator | Wednesday 25 March 2026 02:41:06 +0000 (0:00:03.961) 0:00:15.308 ******* 2026-03-25 02:41:09.360355 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-25 02:41:09.360380 | orchestrator | 2026-03-25 02:41:09.360401 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-03-25 02:41:09.360421 | orchestrator | Wednesday 25 March 2026 02:41:06 +0000 (0:00:00.579) 0:00:15.887 ******* 2026-03-25 02:41:09.360455 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 
check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-25 02:41:09.360572 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:41:09.360615 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server 
testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-25 02:41:14.457794 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:41:14.457915 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 
2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-25 02:41:14.457955 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:41:14.457961 | orchestrator | 2026-03-25 02:41:14.457967 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-03-25 02:41:14.457972 | orchestrator | Wednesday 25 March 2026 02:41:09 +0000 (0:00:02.468) 0:00:18.355 ******* 2026-03-25 02:41:14.457978 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 
'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-25 02:41:14.457983 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:41:14.458005 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 
3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-25 02:41:14.458050 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:41:14.458056 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 
192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-25 02:41:14.458061 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:41:14.458066 | orchestrator | 2026-03-25 02:41:14.458071 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-03-25 02:41:14.458075 | orchestrator | Wednesday 25 March 2026 02:41:11 +0000 (0:00:02.600) 0:00:20.956 ******* 2026-03-25 02:41:14.458089 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 
'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-25 02:41:17.462097 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:41:17.462217 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 
'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-25 02:41:17.462244 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:41:17.462280 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 
3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-25 02:41:17.462326 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:41:17.462341 | orchestrator | 2026-03-25 02:41:17.462359 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2026-03-25 02:41:17.462370 | orchestrator | Wednesday 25 March 2026 02:41:14 +0000 (0:00:02.503) 0:00:23.459 ******* 2026-03-25 02:41:17.462399 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 
'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-25 02:41:17.462411 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': 
True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-25 02:41:17.462435 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 
'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-25 02:43:36.751231 | orchestrator | 2026-03-25 02:43:36.751343 | orchestrator | TASK [mariadb : Create MariaDB volume] ***************************************** 2026-03-25 02:43:36.751357 | orchestrator | Wednesday 25 March 2026 02:41:17 +0000 (0:00:03.002) 0:00:26.462 ******* 2026-03-25 02:43:36.751364 | orchestrator | changed: [testbed-node-0] 2026-03-25 02:43:36.751372 | orchestrator | changed: [testbed-node-1] 2026-03-25 02:43:36.751378 | orchestrator | changed: [testbed-node-2] 2026-03-25 02:43:36.751385 | orchestrator | 2026-03-25 02:43:36.751392 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2026-03-25 02:43:36.751398 | orchestrator | Wednesday 25 March 2026 02:41:18 +0000 (0:00:00.828) 0:00:27.291 ******* 2026-03-25 02:43:36.751405 | orchestrator | ok: [testbed-node-0] 2026-03-25 02:43:36.751413 | orchestrator | ok: [testbed-node-1] 2026-03-25 02:43:36.751420 | orchestrator | ok: [testbed-node-2] 2026-03-25 02:43:36.751427 | orchestrator | 2026-03-25 02:43:36.751434 | orchestrator | TASK [mariadb : Establish 
whether the cluster has already existed] ************* 2026-03-25 02:43:36.751441 | orchestrator | Wednesday 25 March 2026 02:41:18 +0000 (0:00:00.642) 0:00:27.934 ******* 2026-03-25 02:43:36.751448 | orchestrator | ok: [testbed-node-0] 2026-03-25 02:43:36.751455 | orchestrator | ok: [testbed-node-1] 2026-03-25 02:43:36.751462 | orchestrator | ok: [testbed-node-2] 2026-03-25 02:43:36.751469 | orchestrator | 2026-03-25 02:43:36.751529 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] *************************** 2026-03-25 02:43:36.751538 | orchestrator | Wednesday 25 March 2026 02:41:19 +0000 (0:00:00.363) 0:00:28.297 ******* 2026-03-25 02:43:36.751548 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"} 2026-03-25 02:43:36.751557 | orchestrator | ...ignoring 2026-03-25 02:43:36.751565 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"} 2026-03-25 02:43:36.751572 | orchestrator | ...ignoring 2026-03-25 02:43:36.751580 | orchestrator | fatal: [testbed-node-2]: FAILED! 
=> {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"} 2026-03-25 02:43:36.751587 | orchestrator | ...ignoring 2026-03-25 02:43:36.751619 | orchestrator | 2026-03-25 02:43:36.751627 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2026-03-25 02:43:36.751635 | orchestrator | Wednesday 25 March 2026 02:41:30 +0000 (0:00:10.865) 0:00:39.163 ******* 2026-03-25 02:43:36.751643 | orchestrator | ok: [testbed-node-0] 2026-03-25 02:43:36.751650 | orchestrator | ok: [testbed-node-1] 2026-03-25 02:43:36.751658 | orchestrator | ok: [testbed-node-2] 2026-03-25 02:43:36.751666 | orchestrator | 2026-03-25 02:43:36.751673 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2026-03-25 02:43:36.751681 | orchestrator | Wednesday 25 March 2026 02:41:30 +0000 (0:00:00.498) 0:00:39.662 ******* 2026-03-25 02:43:36.751689 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:43:36.751697 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:43:36.751704 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:43:36.751712 | orchestrator | 2026-03-25 02:43:36.751720 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2026-03-25 02:43:36.751728 | orchestrator | Wednesday 25 March 2026 02:41:31 +0000 (0:00:00.709) 0:00:40.371 ******* 2026-03-25 02:43:36.751736 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:43:36.751743 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:43:36.751750 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:43:36.751758 | orchestrator | 2026-03-25 02:43:36.751780 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] ********************* 2026-03-25 02:43:36.751790 | orchestrator | Wednesday 25 March 2026 02:41:31 +0000 (0:00:00.476) 0:00:40.848 ******* 2026-03-25 02:43:36.751798 | orchestrator | skipping: 
[testbed-node-0] 2026-03-25 02:43:36.751806 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:43:36.751815 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:43:36.751823 | orchestrator | 2026-03-25 02:43:36.751832 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] ******* 2026-03-25 02:43:36.751840 | orchestrator | Wednesday 25 March 2026 02:41:32 +0000 (0:00:00.455) 0:00:41.303 ******* 2026-03-25 02:43:36.751848 | orchestrator | ok: [testbed-node-0] 2026-03-25 02:43:36.751857 | orchestrator | ok: [testbed-node-1] 2026-03-25 02:43:36.751865 | orchestrator | ok: [testbed-node-2] 2026-03-25 02:43:36.751873 | orchestrator | 2026-03-25 02:43:36.751882 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] *** 2026-03-25 02:43:36.751892 | orchestrator | Wednesday 25 March 2026 02:41:32 +0000 (0:00:00.446) 0:00:41.750 ******* 2026-03-25 02:43:36.751901 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:43:36.751909 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:43:36.751917 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:43:36.751925 | orchestrator | 2026-03-25 02:43:36.751934 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-03-25 02:43:36.751943 | orchestrator | Wednesday 25 March 2026 02:41:33 +0000 (0:00:00.947) 0:00:42.698 ******* 2026-03-25 02:43:36.751950 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:43:36.751958 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:43:36.751966 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0 2026-03-25 02:43:36.751974 | orchestrator | 2026-03-25 02:43:36.751981 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] *************************** 2026-03-25 02:43:36.751989 | orchestrator | Wednesday 25 March 2026 02:41:34 +0000 (0:00:00.469) 0:00:43.168 ******* 2026-03-25 
02:43:36.751996 | orchestrator | changed: [testbed-node-0] 2026-03-25 02:43:36.752004 | orchestrator | 2026-03-25 02:43:36.752010 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] ************************** 2026-03-25 02:43:36.752017 | orchestrator | Wednesday 25 March 2026 02:41:44 +0000 (0:00:10.196) 0:00:53.364 ******* 2026-03-25 02:43:36.752024 | orchestrator | ok: [testbed-node-0] 2026-03-25 02:43:36.752031 | orchestrator | 2026-03-25 02:43:36.752038 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-03-25 02:43:36.752045 | orchestrator | Wednesday 25 March 2026 02:41:44 +0000 (0:00:00.120) 0:00:53.485 ******* 2026-03-25 02:43:36.752051 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:43:36.752083 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:43:36.752090 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:43:36.752096 | orchestrator | 2026-03-25 02:43:36.752102 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] ******************* 2026-03-25 02:43:36.752109 | orchestrator | Wednesday 25 March 2026 02:41:45 +0000 (0:00:01.089) 0:00:54.575 ******* 2026-03-25 02:43:36.752116 | orchestrator | changed: [testbed-node-0] 2026-03-25 02:43:36.752122 | orchestrator | 2026-03-25 02:43:36.752129 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] ******* 2026-03-25 02:43:36.752135 | orchestrator | Wednesday 25 March 2026 02:41:53 +0000 (0:00:08.333) 0:01:02.908 ******* 2026-03-25 02:43:36.752142 | orchestrator | ok: [testbed-node-0] 2026-03-25 02:43:36.752149 | orchestrator | 2026-03-25 02:43:36.752156 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] ******* 2026-03-25 02:43:36.752163 | orchestrator | Wednesday 25 March 2026 02:41:55 +0000 (0:00:01.560) 0:01:04.469 ******* 2026-03-25 02:43:36.752169 | orchestrator | ok: [testbed-node-0] 2026-03-25 02:43:36.752175 | 
orchestrator | 2026-03-25 02:43:36.752182 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] *** 2026-03-25 02:43:36.752188 | orchestrator | Wednesday 25 March 2026 02:41:58 +0000 (0:00:02.721) 0:01:07.190 ******* 2026-03-25 02:43:36.752194 | orchestrator | changed: [testbed-node-0] 2026-03-25 02:43:36.752201 | orchestrator | 2026-03-25 02:43:36.752209 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2026-03-25 02:43:36.752216 | orchestrator | Wednesday 25 March 2026 02:41:58 +0000 (0:00:00.131) 0:01:07.321 ******* 2026-03-25 02:43:36.752223 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:43:36.752230 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:43:36.752238 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:43:36.752244 | orchestrator | 2026-03-25 02:43:36.752250 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2026-03-25 02:43:36.752256 | orchestrator | Wednesday 25 March 2026 02:41:58 +0000 (0:00:00.340) 0:01:07.662 ******* 2026-03-25 02:43:36.752263 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:43:36.752270 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2026-03-25 02:43:36.752277 | orchestrator | changed: [testbed-node-1] 2026-03-25 02:43:36.752284 | orchestrator | changed: [testbed-node-2] 2026-03-25 02:43:36.752291 | orchestrator | 2026-03-25 02:43:36.752299 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-03-25 02:43:36.752306 | orchestrator | skipping: no hosts matched 2026-03-25 02:43:36.752313 | orchestrator | 2026-03-25 02:43:36.752320 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-03-25 02:43:36.752326 | orchestrator | 2026-03-25 02:43:36.752332 | orchestrator | TASK [mariadb : Restart MariaDB container] 
************************************* 2026-03-25 02:43:36.752339 | orchestrator | Wednesday 25 March 2026 02:41:59 +0000 (0:00:00.600) 0:01:08.262 ******* 2026-03-25 02:43:36.752345 | orchestrator | changed: [testbed-node-1] 2026-03-25 02:43:36.752351 | orchestrator | 2026-03-25 02:43:36.752358 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-03-25 02:43:36.752365 | orchestrator | Wednesday 25 March 2026 02:42:18 +0000 (0:00:18.939) 0:01:27.201 ******* 2026-03-25 02:43:36.752372 | orchestrator | ok: [testbed-node-1] 2026-03-25 02:43:36.752379 | orchestrator | 2026-03-25 02:43:36.752385 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-03-25 02:43:36.752391 | orchestrator | Wednesday 25 March 2026 02:42:34 +0000 (0:00:16.563) 0:01:43.765 ******* 2026-03-25 02:43:36.752397 | orchestrator | ok: [testbed-node-1] 2026-03-25 02:43:36.752404 | orchestrator | 2026-03-25 02:43:36.752413 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-03-25 02:43:36.752420 | orchestrator | 2026-03-25 02:43:36.752434 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-03-25 02:43:36.752441 | orchestrator | Wednesday 25 March 2026 02:42:37 +0000 (0:00:02.551) 0:01:46.317 ******* 2026-03-25 02:43:36.752454 | orchestrator | changed: [testbed-node-2] 2026-03-25 02:43:36.752462 | orchestrator | 2026-03-25 02:43:36.752470 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-03-25 02:43:36.752559 | orchestrator | Wednesday 25 March 2026 02:42:56 +0000 (0:00:19.553) 0:02:05.870 ******* 2026-03-25 02:43:36.752566 | orchestrator | ok: [testbed-node-2] 2026-03-25 02:43:36.752572 | orchestrator | 2026-03-25 02:43:36.752579 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-03-25 02:43:36.752585 
| orchestrator | Wednesday 25 March 2026 02:43:12 +0000 (0:00:15.631) 0:02:21.501 ******* 2026-03-25 02:43:36.752591 | orchestrator | ok: [testbed-node-2] 2026-03-25 02:43:36.752598 | orchestrator | 2026-03-25 02:43:36.752605 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2026-03-25 02:43:36.752612 | orchestrator | 2026-03-25 02:43:36.752618 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-03-25 02:43:36.752625 | orchestrator | Wednesday 25 March 2026 02:43:15 +0000 (0:00:02.756) 0:02:24.258 ******* 2026-03-25 02:43:36.752632 | orchestrator | changed: [testbed-node-0] 2026-03-25 02:43:36.752639 | orchestrator | 2026-03-25 02:43:36.752645 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-03-25 02:43:36.752651 | orchestrator | Wednesday 25 March 2026 02:43:28 +0000 (0:00:13.314) 0:02:37.572 ******* 2026-03-25 02:43:36.752658 | orchestrator | ok: [testbed-node-0] 2026-03-25 02:43:36.752664 | orchestrator | 2026-03-25 02:43:36.752670 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-03-25 02:43:36.752677 | orchestrator | Wednesday 25 March 2026 02:43:33 +0000 (0:00:04.571) 0:02:42.144 ******* 2026-03-25 02:43:36.752683 | orchestrator | ok: [testbed-node-0] 2026-03-25 02:43:36.752690 | orchestrator | 2026-03-25 02:43:36.752697 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2026-03-25 02:43:36.752704 | orchestrator | 2026-03-25 02:43:36.752710 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2026-03-25 02:43:36.752716 | orchestrator | Wednesday 25 March 2026 02:43:36 +0000 (0:00:03.035) 0:02:45.179 ******* 2026-03-25 02:43:36.752723 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-25 02:43:36.752730 | orchestrator | 
2026-03-25 02:43:36.752737 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2026-03-25 02:43:36.752753 | orchestrator | Wednesday 25 March 2026 02:43:36 +0000 (0:00:00.569) 0:02:45.748 ******* 2026-03-25 02:43:50.875186 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:43:50.875289 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:43:50.875300 | orchestrator | changed: [testbed-node-0] 2026-03-25 02:43:50.875307 | orchestrator | 2026-03-25 02:43:50.875315 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2026-03-25 02:43:50.875323 | orchestrator | Wednesday 25 March 2026 02:43:39 +0000 (0:00:02.295) 0:02:48.043 ******* 2026-03-25 02:43:50.875330 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:43:50.875337 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:43:50.875343 | orchestrator | changed: [testbed-node-0] 2026-03-25 02:43:50.875349 | orchestrator | 2026-03-25 02:43:50.875355 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2026-03-25 02:43:50.875362 | orchestrator | Wednesday 25 March 2026 02:43:41 +0000 (0:00:02.103) 0:02:50.147 ******* 2026-03-25 02:43:50.875368 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:43:50.875374 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:43:50.875380 | orchestrator | changed: [testbed-node-0] 2026-03-25 02:43:50.875387 | orchestrator | 2026-03-25 02:43:50.875393 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2026-03-25 02:43:50.875400 | orchestrator | Wednesday 25 March 2026 02:43:43 +0000 (0:00:02.373) 0:02:52.520 ******* 2026-03-25 02:43:50.875407 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:43:50.875413 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:43:50.875419 | orchestrator | changed: [testbed-node-0] 2026-03-25 02:43:50.875425 | orchestrator | 
2026-03-25 02:43:50.875457 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2026-03-25 02:43:50.875464 | orchestrator | Wednesday 25 March 2026 02:43:45 +0000 (0:00:02.003) 0:02:54.524 ******* 2026-03-25 02:43:50.875470 | orchestrator | ok: [testbed-node-0] 2026-03-25 02:43:50.875558 | orchestrator | ok: [testbed-node-2] 2026-03-25 02:43:50.875566 | orchestrator | ok: [testbed-node-1] 2026-03-25 02:43:50.875572 | orchestrator | 2026-03-25 02:43:50.875578 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2026-03-25 02:43:50.875585 | orchestrator | Wednesday 25 March 2026 02:43:49 +0000 (0:00:04.413) 0:02:58.937 ******* 2026-03-25 02:43:50.875591 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:43:50.875597 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:43:50.875604 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:43:50.875609 | orchestrator | 2026-03-25 02:43:50.875616 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-25 02:43:50.875624 | orchestrator | testbed-node-0 : ok=34  changed=16  unreachable=0 failed=0 skipped=11  rescued=0 ignored=1  2026-03-25 02:43:50.875632 | orchestrator | testbed-node-1 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2026-03-25 02:43:50.875639 | orchestrator | testbed-node-2 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2026-03-25 02:43:50.875645 | orchestrator | 2026-03-25 02:43:50.875651 | orchestrator | 2026-03-25 02:43:50.875657 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-25 02:43:50.875664 | orchestrator | Wednesday 25 March 2026 02:43:50 +0000 (0:00:00.510) 0:02:59.448 ******* 2026-03-25 02:43:50.875670 | orchestrator | =============================================================================== 2026-03-25 02:43:50.875688 | 
orchestrator | mariadb : Restart MariaDB container ------------------------------------ 38.49s 2026-03-25 02:43:50.875695 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 32.19s 2026-03-25 02:43:50.875701 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 13.31s 2026-03-25 02:43:50.875707 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 10.87s 2026-03-25 02:43:50.875713 | orchestrator | mariadb : Running MariaDB bootstrap container -------------------------- 10.20s 2026-03-25 02:43:50.875719 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 8.33s 2026-03-25 02:43:50.875725 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 5.31s 2026-03-25 02:43:50.875732 | orchestrator | mariadb : Wait for MariaDB service port liveness ------------------------ 4.57s 2026-03-25 02:43:50.875738 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 4.41s 2026-03-25 02:43:50.875745 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 3.96s 2026-03-25 02:43:50.875752 | orchestrator | mariadb : Copying over config.json files for services ------------------- 3.23s 2026-03-25 02:43:50.875759 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 3.04s 2026-03-25 02:43:50.875766 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 3.00s 2026-03-25 02:43:50.875772 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 2.77s 2026-03-25 02:43:50.875779 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 2.72s 2026-03-25 02:43:50.875786 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 2.60s 2026-03-25 02:43:50.875793 | 
orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 2.50s 2026-03-25 02:43:50.875800 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 2.47s 2026-03-25 02:43:50.875806 | orchestrator | mariadb : Creating database backup user and setting permissions --------- 2.37s 2026-03-25 02:43:50.875812 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 2.30s 2026-03-25 02:43:53.577701 | orchestrator | 2026-03-25 02:43:53 | INFO  | Task 365e4d77-6cd8-4da1-a5b2-b280c696a2a0 (rabbitmq) was prepared for execution. 2026-03-25 02:43:53.577791 | orchestrator | 2026-03-25 02:43:53 | INFO  | It takes a moment until task 365e4d77-6cd8-4da1-a5b2-b280c696a2a0 (rabbitmq) has been started and output is visible here. 2026-03-25 02:44:07.631699 | orchestrator | 2026-03-25 02:44:07.631794 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-25 02:44:07.631805 | orchestrator | 2026-03-25 02:44:07.631812 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-25 02:44:07.631820 | orchestrator | Wednesday 25 March 2026 02:43:58 +0000 (0:00:00.219) 0:00:00.219 ******* 2026-03-25 02:44:07.631827 | orchestrator | ok: [testbed-node-0] 2026-03-25 02:44:07.631835 | orchestrator | ok: [testbed-node-1] 2026-03-25 02:44:07.631842 | orchestrator | ok: [testbed-node-2] 2026-03-25 02:44:07.631849 | orchestrator | 2026-03-25 02:44:07.631856 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-25 02:44:07.631862 | orchestrator | Wednesday 25 March 2026 02:43:58 +0000 (0:00:00.322) 0:00:00.542 ******* 2026-03-25 02:44:07.631870 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True) 2026-03-25 02:44:07.631877 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True) 2026-03-25 02:44:07.631883 | orchestrator | ok: 
[testbed-node-2] => (item=enable_rabbitmq_True) 2026-03-25 02:44:07.631890 | orchestrator | 2026-03-25 02:44:07.631896 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2026-03-25 02:44:07.631903 | orchestrator | 2026-03-25 02:44:07.631910 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-03-25 02:44:07.631917 | orchestrator | Wednesday 25 March 2026 02:43:59 +0000 (0:00:00.613) 0:00:01.155 ******* 2026-03-25 02:44:07.631924 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-25 02:44:07.631932 | orchestrator | 2026-03-25 02:44:07.631939 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-03-25 02:44:07.631946 | orchestrator | Wednesday 25 March 2026 02:43:59 +0000 (0:00:00.576) 0:00:01.732 ******* 2026-03-25 02:44:07.631952 | orchestrator | ok: [testbed-node-0] 2026-03-25 02:44:07.631959 | orchestrator | 2026-03-25 02:44:07.631966 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2026-03-25 02:44:07.631972 | orchestrator | Wednesday 25 March 2026 02:44:00 +0000 (0:00:01.012) 0:00:02.744 ******* 2026-03-25 02:44:07.631979 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:44:07.631986 | orchestrator | 2026-03-25 02:44:07.631993 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] ************************************* 2026-03-25 02:44:07.631999 | orchestrator | Wednesday 25 March 2026 02:44:01 +0000 (0:00:00.409) 0:00:03.153 ******* 2026-03-25 02:44:07.632006 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:44:07.632012 | orchestrator | 2026-03-25 02:44:07.632015 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2026-03-25 02:44:07.632019 | orchestrator | Wednesday 25 March 2026 02:44:01 +0000 (0:00:00.383) 0:00:03.536 ******* 
2026-03-25 02:44:07.632023 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:44:07.632027 | orchestrator | 2026-03-25 02:44:07.632030 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2026-03-25 02:44:07.632034 | orchestrator | Wednesday 25 March 2026 02:44:01 +0000 (0:00:00.411) 0:00:03.948 ******* 2026-03-25 02:44:07.632038 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:44:07.632042 | orchestrator | 2026-03-25 02:44:07.632045 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-03-25 02:44:07.632049 | orchestrator | Wednesday 25 March 2026 02:44:02 +0000 (0:00:00.601) 0:00:04.550 ******* 2026-03-25 02:44:07.632065 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-25 02:44:07.632092 | orchestrator | 2026-03-25 02:44:07.632099 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-03-25 02:44:07.632106 | orchestrator | Wednesday 25 March 2026 02:44:03 +0000 (0:00:00.959) 0:00:05.510 ******* 2026-03-25 02:44:07.632112 | orchestrator | ok: [testbed-node-0] 2026-03-25 02:44:07.632118 | orchestrator | 2026-03-25 02:44:07.632125 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2026-03-25 02:44:07.632131 | orchestrator | Wednesday 25 March 2026 02:44:04 +0000 (0:00:00.865) 0:00:06.375 ******* 2026-03-25 02:44:07.632137 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:44:07.632143 | orchestrator | 2026-03-25 02:44:07.632150 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2026-03-25 02:44:07.632156 | orchestrator | Wednesday 25 March 2026 02:44:04 +0000 (0:00:00.388) 0:00:06.763 ******* 2026-03-25 02:44:07.632162 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:44:07.632168 | orchestrator | 2026-03-25 
02:44:07.632174 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2026-03-25 02:44:07.632180 | orchestrator | Wednesday 25 March 2026 02:44:05 +0000 (0:00:00.410) 0:00:07.173 ******* 2026-03-25 02:44:07.632206 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-25 02:44:07.632216 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': 
['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-25 02:44:07.632223 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-25 02:44:07.632235 | orchestrator | 2026-03-25 02:44:07.632245 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2026-03-25 02:44:07.632252 | orchestrator | Wednesday 25 March 2026 02:44:05 +0000 (0:00:00.843) 0:00:08.017 ******* 2026-03-25 02:44:07.632259 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 
'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-25 02:44:07.632272 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 
'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-25 02:44:25.981545 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-25 02:44:25.981649 | orchestrator | 2026-03-25 02:44:25.981663 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2026-03-25 02:44:25.981673 | orchestrator | Wednesday 25 March 2026 02:44:07 +0000 (0:00:01.635) 0:00:09.653 ******* 2026-03-25 02:44:25.981700 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-03-25 02:44:25.981709 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-03-25 02:44:25.981716 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-03-25 02:44:25.981723 | orchestrator | 2026-03-25 02:44:25.981730 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] 
*********************************** 2026-03-25 02:44:25.981738 | orchestrator | Wednesday 25 March 2026 02:44:09 +0000 (0:00:01.483) 0:00:11.137 ******* 2026-03-25 02:44:25.981758 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-03-25 02:44:25.981766 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-03-25 02:44:25.981773 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-03-25 02:44:25.981780 | orchestrator | 2026-03-25 02:44:25.981787 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2026-03-25 02:44:25.981806 | orchestrator | Wednesday 25 March 2026 02:44:10 +0000 (0:00:01.709) 0:00:12.847 ******* 2026-03-25 02:44:25.981821 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-03-25 02:44:25.981836 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-03-25 02:44:25.981844 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-03-25 02:44:25.981858 | orchestrator | 2026-03-25 02:44:25.981865 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2026-03-25 02:44:25.981873 | orchestrator | Wednesday 25 March 2026 02:44:12 +0000 (0:00:01.331) 0:00:14.178 ******* 2026-03-25 02:44:25.981880 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-03-25 02:44:25.981887 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-03-25 02:44:25.981894 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-03-25 02:44:25.981901 | orchestrator | 2026-03-25 02:44:25.981908 | orchestrator | TASK 
[rabbitmq : Copying over definitions.json] ********************************
2026-03-25 02:44:25.981916 | orchestrator | Wednesday 25 March 2026 02:44:13 +0000 (0:00:01.710) 0:00:15.889 *******
2026-03-25 02:44:25.981923 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2026-03-25 02:44:25.981930 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2026-03-25 02:44:25.981937 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2026-03-25 02:44:25.981944 | orchestrator |
2026-03-25 02:44:25.981951 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] *********************************
2026-03-25 02:44:25.981959 | orchestrator | Wednesday 25 March 2026 02:44:15 +0000 (0:00:01.420) 0:00:17.309 *******
2026-03-25 02:44:25.981966 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2026-03-25 02:44:25.981973 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2026-03-25 02:44:25.981981 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2026-03-25 02:44:25.981988 | orchestrator |
2026-03-25 02:44:25.981995 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2026-03-25 02:44:25.982006 | orchestrator | Wednesday 25 March 2026 02:44:16 +0000 (0:00:01.395) 0:00:18.704 *******
2026-03-25 02:44:25.982071 | orchestrator | skipping: [testbed-node-0]
2026-03-25 02:44:25.982089 | orchestrator | skipping: [testbed-node-1]
2026-03-25 02:44:25.982121 | orchestrator | skipping: [testbed-node-2]
2026-03-25 02:44:25.982146 | orchestrator |
2026-03-25 02:44:25.982155 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************
2026-03-25 02:44:25.982164 | orchestrator | Wednesday 25 March 2026 02:44:17 +0000 (0:00:00.438) 0:00:19.143 *******
2026-03-25 02:44:25.982174 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-03-25 02:44:25.982190 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-03-25 02:44:25.982200 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-03-25 02:44:25.982209 | orchestrator |
2026-03-25 02:44:25.982217 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] *************************************
2026-03-25 02:44:25.982226 | orchestrator | Wednesday 25 March 2026 02:44:18 +0000 (0:00:01.277) 0:00:20.420 *******
2026-03-25 02:44:25.982234 | orchestrator | changed: [testbed-node-0]
2026-03-25 02:44:25.982242 | orchestrator | changed: [testbed-node-1]
2026-03-25 02:44:25.982251 | orchestrator | changed: [testbed-node-2]
2026-03-25 02:44:25.982258 | orchestrator |
2026-03-25 02:44:25.982267 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] *************************
2026-03-25 02:44:25.982280 | orchestrator | Wednesday 25 March 2026 02:44:19 +0000 (0:00:00.791) 0:00:21.211 *******
2026-03-25 02:44:25.982289 | orchestrator | changed: [testbed-node-0]
2026-03-25 02:44:25.982297 | orchestrator | changed: [testbed-node-2]
2026-03-25 02:44:25.982305 | orchestrator | changed: [testbed-node-1]
2026-03-25 02:44:25.982313 | orchestrator |
2026-03-25 02:44:25.982322 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************
2026-03-25 02:44:25.982335 | orchestrator | Wednesday 25 March 2026 02:44:25 +0000 (0:00:06.786) 0:00:27.997 *******
2026-03-25 02:45:56.004911 | orchestrator | changed: [testbed-node-0]
2026-03-25 02:45:56.005002 | orchestrator | changed: [testbed-node-1]
2026-03-25 02:45:56.005008 | orchestrator | changed: [testbed-node-2]
2026-03-25 02:45:56.005012 | orchestrator |
2026-03-25 02:45:56.005018 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2026-03-25 02:45:56.005023 | orchestrator |
2026-03-25 02:45:56.005027 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2026-03-25 02:45:56.005031 | orchestrator | Wednesday 25 March 2026 02:44:26 +0000 (0:00:00.566) 0:00:28.564 *******
2026-03-25 02:45:56.005035 | orchestrator | ok: [testbed-node-0]
2026-03-25 02:45:56.005040 | orchestrator |
2026-03-25 02:45:56.005044 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2026-03-25 02:45:56.005048 | orchestrator | Wednesday 25 March 2026 02:44:27 +0000 (0:00:00.596) 0:00:29.161 *******
2026-03-25 02:45:56.005052 | orchestrator | skipping: [testbed-node-0]
2026-03-25 02:45:56.005056 | orchestrator |
2026-03-25 02:45:56.005059 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2026-03-25 02:45:56.005063 | orchestrator | Wednesday 25 March 2026 02:44:27 +0000 (0:00:00.243) 0:00:29.404 *******
2026-03-25 02:45:56.005067 | orchestrator | changed: [testbed-node-0]
2026-03-25 02:45:56.005072 | orchestrator |
2026-03-25 02:45:56.005078 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2026-03-25 02:45:56.005085 | orchestrator | Wednesday 25 March 2026 02:44:28 +0000 (0:00:01.617) 0:00:31.022 *******
2026-03-25 02:45:56.005091 | orchestrator | changed: [testbed-node-0]
2026-03-25 02:45:56.005100 | orchestrator |
2026-03-25 02:45:56.005110 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2026-03-25 02:45:56.005116 | orchestrator |
2026-03-25 02:45:56.005122 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2026-03-25 02:45:56.005129 | orchestrator | Wednesday 25 March 2026 02:45:21 +0000 (0:00:52.140) 0:01:23.162 *******
2026-03-25 02:45:56.005135 | orchestrator | ok: [testbed-node-1]
2026-03-25 02:45:56.005141 | orchestrator |
2026-03-25 02:45:56.005148 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2026-03-25 02:45:56.005155 | orchestrator | Wednesday 25 March 2026 02:45:21 +0000 (0:00:00.600) 0:01:23.762 *******
2026-03-25 02:45:56.005161 | orchestrator | skipping: [testbed-node-1]
2026-03-25 02:45:56.005168 | orchestrator |
2026-03-25 02:45:56.005175 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2026-03-25 02:45:56.005182 | orchestrator | Wednesday 25 March 2026 02:45:21 +0000 (0:00:00.244) 0:01:24.007 *******
2026-03-25 02:45:56.005187 | orchestrator | changed: [testbed-node-1]
2026-03-25 02:45:56.005191 | orchestrator |
2026-03-25 02:45:56.005195 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2026-03-25 02:45:56.005212 | orchestrator | Wednesday 25 March 2026 02:45:23 +0000 (0:00:01.589) 0:01:25.597 *******
2026-03-25 02:45:56.005216 | orchestrator | changed: [testbed-node-1]
2026-03-25 02:45:56.005220 | orchestrator |
2026-03-25 02:45:56.005224 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2026-03-25 02:45:56.005228 | orchestrator |
2026-03-25 02:45:56.005232 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2026-03-25 02:45:56.005236 | orchestrator | Wednesday 25 March 2026 02:45:36 +0000 (0:00:13.067) 0:01:38.664 *******
2026-03-25 02:45:56.005239 | orchestrator | ok: [testbed-node-2]
2026-03-25 02:45:56.005243 | orchestrator |
2026-03-25 02:45:56.005264 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2026-03-25 02:45:56.005268 | orchestrator | Wednesday 25 March 2026 02:45:37 +0000 (0:00:00.773) 0:01:39.437 *******
2026-03-25 02:45:56.005272 | orchestrator | skipping: [testbed-node-2]
2026-03-25 02:45:56.005276 | orchestrator |
2026-03-25 02:45:56.005279 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2026-03-25 02:45:56.005283 | orchestrator | Wednesday 25 March 2026 02:45:37 +0000 (0:00:00.267) 0:01:39.705 *******
2026-03-25 02:45:56.005287 | orchestrator | changed: [testbed-node-2]
2026-03-25 02:45:56.005291 | orchestrator |
2026-03-25 02:45:56.005295 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2026-03-25 02:45:56.005298 | orchestrator | Wednesday 25 March 2026 02:45:44 +0000 (0:00:06.579) 0:01:46.285 *******
2026-03-25 02:45:56.005302 | orchestrator | changed: [testbed-node-2]
2026-03-25 02:45:56.005306 | orchestrator |
2026-03-25 02:45:56.005309 | orchestrator | PLAY [Apply rabbitmq post-configuration] ***************************************
2026-03-25 02:45:56.005313 | orchestrator |
2026-03-25 02:45:56.005317 | orchestrator | TASK [Include rabbitmq post-deploy.yml] ****************************************
2026-03-25 02:45:56.005320 | orchestrator | Wednesday 25 March 2026 02:45:52 +0000 (0:00:08.602) 0:01:54.887 *******
2026-03-25 02:45:56.005324 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-25 02:45:56.005328 | orchestrator |
2026-03-25 02:45:56.005331 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ******************************
2026-03-25 02:45:56.005335 | orchestrator | Wednesday 25 March 2026 02:45:53 +0000 (0:00:00.540) 0:01:55.428 *******
2026-03-25 02:45:56.005339 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2026-03-25 02:45:56.005343 | orchestrator | enable_outward_rabbitmq_True
2026-03-25 02:45:56.005346 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2026-03-25 02:45:56.005350 | orchestrator | outward_rabbitmq_restart
2026-03-25 02:45:56.005354 | orchestrator | ok: [testbed-node-0]
2026-03-25 02:45:56.005358 | orchestrator | ok: [testbed-node-2]
2026-03-25 02:45:56.005361 | orchestrator | ok: [testbed-node-1]
2026-03-25 02:45:56.005365 | orchestrator |
2026-03-25 02:45:56.005369 | orchestrator | PLAY [Apply role rabbitmq (outward)] *******************************************
2026-03-25 02:45:56.005372 | orchestrator | skipping: no hosts matched
2026-03-25 02:45:56.005376 | orchestrator |
2026-03-25 02:45:56.005380 | orchestrator | PLAY [Restart rabbitmq (outward) services] *************************************
2026-03-25 02:45:56.005384 | orchestrator | skipping: no hosts matched
2026-03-25 02:45:56.005387 | orchestrator |
2026-03-25 02:45:56.005391 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] *****************************
2026-03-25 02:45:56.005395 | orchestrator | skipping: no hosts matched
2026-03-25 02:45:56.005399 | orchestrator |
2026-03-25 02:45:56.005402 | orchestrator | PLAY RECAP *********************************************************************
2026-03-25 02:45:56.005419 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2026-03-25 02:45:56.005424 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-25 02:45:56.005428 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-25 02:45:56.005432 | orchestrator |
2026-03-25 02:45:56.005437 | orchestrator |
2026-03-25 02:45:56.005441 | orchestrator | TASKS RECAP ********************************************************************
2026-03-25 02:45:56.005445 | orchestrator | Wednesday 25 March 2026 02:45:55 +0000 (0:00:02.200) 0:01:57.629 *******
2026-03-25 02:45:56.005450 | orchestrator | ===============================================================================
2026-03-25 02:45:56.005454 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 73.81s
2026-03-25 02:45:56.005459 | orchestrator | rabbitmq : Restart rabbitmq container ----------------------------------- 9.79s
2026-03-25 02:45:56.005487 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 6.79s
2026-03-25 02:45:56.005492 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.20s
2026-03-25 02:45:56.005496 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 1.97s
2026-03-25 02:45:56.005501 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 1.71s
2026-03-25 02:45:56.005505 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 1.71s
2026-03-25 02:45:56.005509 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 1.64s
2026-03-25 02:45:56.005513 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 1.48s
2026-03-25 02:45:56.005518 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.42s
2026-03-25 02:45:56.005522 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 1.40s
2026-03-25 02:45:56.005526 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.33s
2026-03-25 02:45:56.005531 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 1.28s
2026-03-25 02:45:56.005535 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.01s
2026-03-25 02:45:56.005542 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 0.96s
2026-03-25 02:45:56.005547 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 0.87s
2026-03-25 02:45:56.005551 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 0.84s
2026-03-25 02:45:56.005555 | orchestrator | rabbitmq : Creating rabbitmq volume ------------------------------------- 0.79s
2026-03-25 02:45:56.005560 | orchestrator | rabbitmq : Put RabbitMQ node into maintenance mode ---------------------- 0.76s
2026-03-25 02:45:56.005564 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.61s
2026-03-25 02:45:58.752885 | orchestrator | 2026-03-25 02:45:58 | INFO  | Task f88af198-1c5c-45f3-9abf-53bb8b351781 (openvswitch) was prepared for execution.
2026-03-25 02:45:58.752971 | orchestrator | 2026-03-25 02:45:58 | INFO  | It takes a moment until task f88af198-1c5c-45f3-9abf-53bb8b351781 (openvswitch) has been started and output is visible here.
2026-03-25 02:46:12.429121 | orchestrator |
2026-03-25 02:46:12.429226 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-25 02:46:12.429242 | orchestrator |
2026-03-25 02:46:12.429251 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-25 02:46:12.429258 | orchestrator | Wednesday 25 March 2026 02:46:03 +0000 (0:00:00.278) 0:00:00.278 *******
2026-03-25 02:46:12.429263 | orchestrator | ok: [testbed-node-0]
2026-03-25 02:46:12.429271 | orchestrator | ok: [testbed-node-1]
2026-03-25 02:46:12.429278 | orchestrator | ok: [testbed-node-2]
2026-03-25 02:46:12.429284 | orchestrator | ok: [testbed-node-3]
2026-03-25 02:46:12.429295 | orchestrator | ok: [testbed-node-4]
2026-03-25 02:46:12.429301 | orchestrator | ok: [testbed-node-5]
2026-03-25 02:46:12.429308 | orchestrator |
2026-03-25 02:46:12.429315 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-25 02:46:12.429322 | orchestrator | Wednesday 25 March 2026 02:46:04 +0000 (0:00:00.776) 0:00:01.055 *******
2026-03-25 02:46:12.429328 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-03-25 02:46:12.429336 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-03-25 02:46:12.429342 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-03-25 02:46:12.429349 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-03-25 02:46:12.429373 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-03-25 02:46:12.429387 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-03-25 02:46:12.429391 | orchestrator |
2026-03-25 02:46:12.429415 | orchestrator | PLAY [Apply role openvswitch] **************************************************
2026-03-25 02:46:12.429420 | orchestrator |
2026-03-25 02:46:12.429425 | orchestrator | TASK [openvswitch : include_tasks] *********************************************
2026-03-25 02:46:12.429429 | orchestrator | Wednesday 25 March 2026 02:46:04 +0000 (0:00:00.702) 0:00:01.757 *******
2026-03-25 02:46:12.429435 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-25 02:46:12.429441 | orchestrator |
2026-03-25 02:46:12.429445 | orchestrator | TASK [module-load : Load modules] **********************************************
2026-03-25 02:46:12.429449 | orchestrator | Wednesday 25 March 2026 02:46:06 +0000 (0:00:01.216) 0:00:02.974 *******
2026-03-25 02:46:12.429454 | orchestrator | changed: [testbed-node-0] => (item=openvswitch)
2026-03-25 02:46:12.429460 | orchestrator | changed: [testbed-node-1] => (item=openvswitch)
2026-03-25 02:46:12.429505 | orchestrator | changed: [testbed-node-2] => (item=openvswitch)
2026-03-25 02:46:12.429514 | orchestrator | changed: [testbed-node-3] => (item=openvswitch)
2026-03-25 02:46:12.429520 | orchestrator | changed: [testbed-node-4] => (item=openvswitch)
2026-03-25 02:46:12.429527 | orchestrator | changed: [testbed-node-5] => (item=openvswitch)
2026-03-25 02:46:12.429533 | orchestrator |
2026-03-25 02:46:12.429540 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2026-03-25 02:46:12.429546 | orchestrator | Wednesday 25 March 2026 02:46:07 +0000 (0:00:01.234) 0:00:04.209 *******
2026-03-25 02:46:12.429553 | orchestrator | changed: [testbed-node-3] => (item=openvswitch)
2026-03-25 02:46:12.429559 | orchestrator | changed: [testbed-node-1] => (item=openvswitch)
2026-03-25 02:46:12.429566 | orchestrator | changed: [testbed-node-2] => (item=openvswitch)
2026-03-25 02:46:12.429573 | orchestrator | changed: [testbed-node-0] => (item=openvswitch)
2026-03-25 02:46:12.429579 | orchestrator | changed: [testbed-node-4] => (item=openvswitch)
2026-03-25 02:46:12.429586 | orchestrator | changed: [testbed-node-5] => (item=openvswitch)
2026-03-25 02:46:12.429593 | orchestrator |
2026-03-25 02:46:12.429599 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2026-03-25 02:46:12.429606 | orchestrator | Wednesday 25 March 2026 02:46:08 +0000 (0:00:01.486) 0:00:05.695 *******
2026-03-25 02:46:12.429613 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)
2026-03-25 02:46:12.429624 | orchestrator | skipping: [testbed-node-0]
2026-03-25 02:46:12.429632 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)
2026-03-25 02:46:12.429638 | orchestrator | skipping: [testbed-node-1]
2026-03-25 02:46:12.429644 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)
2026-03-25 02:46:12.429652 | orchestrator | skipping: [testbed-node-2]
2026-03-25 02:46:12.429658 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)
2026-03-25 02:46:12.429665 | orchestrator | skipping: [testbed-node-3]
2026-03-25 02:46:12.429672 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)
2026-03-25 02:46:12.429679 | orchestrator | skipping: [testbed-node-4]
2026-03-25 02:46:12.429686 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)
2026-03-25 02:46:12.429692 | orchestrator | skipping: [testbed-node-5]
2026-03-25 02:46:12.429698 | orchestrator |
2026-03-25 02:46:12.429706 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] *****************
2026-03-25 02:46:12.429712 | orchestrator | Wednesday 25 March 2026 02:46:10 +0000 (0:00:01.373) 0:00:07.069 *******
2026-03-25 02:46:12.429716 | orchestrator | skipping: [testbed-node-0]
2026-03-25 02:46:12.429720 | orchestrator | skipping: [testbed-node-1]
2026-03-25 02:46:12.429725 | orchestrator | skipping: [testbed-node-2]
2026-03-25 02:46:12.429729 | orchestrator | skipping: [testbed-node-3]
2026-03-25 02:46:12.429733 | orchestrator | skipping: [testbed-node-4]
2026-03-25 02:46:12.429738 | orchestrator | skipping: [testbed-node-5]
2026-03-25 02:46:12.429742 | orchestrator |
2026-03-25 02:46:12.429748 | orchestrator | TASK [openvswitch : Ensuring config directories exist] *************************
2026-03-25 02:46:12.429765 | orchestrator | Wednesday 25 March 2026 02:46:11 +0000 (0:00:00.845) 0:00:07.915 *******
2026-03-25 02:46:12.429797 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-25 02:46:12.429811 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-25 02:46:12.429819 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-25 02:46:12.429905 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-25 02:46:12.429928 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-25 02:46:12.429945 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-25 02:46:14.847773 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-25 02:46:14.847864 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-25 02:46:14.847873 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-25 02:46:14.847879 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-25 02:46:14.847897 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-25 02:46:14.847936 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-25 02:46:14.847942 | orchestrator |
2026-03-25 02:46:14.847949 | orchestrator | TASK [openvswitch : Copying over config.json files for services] ***************
2026-03-25 02:46:14.847956 | orchestrator | Wednesday 25 March 2026 02:46:12 +0000 (0:00:01.456) 0:00:09.371 *******
2026-03-25 02:46:14.847961 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-25 02:46:14.847966 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-25 02:46:14.847972 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-25 02:46:14.847977 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-25 02:46:14.847990 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-25 02:46:14.848000 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-25 02:46:17.669404 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-25 02:46:17.669530 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-25 02:46:17.669544 | orchestrator | changed: [testbed-node-3] =>
(item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-25 02:46:17.669568 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-25 02:46:17.669598 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-25 02:46:17.669624 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-25 02:46:17.669633 | orchestrator | 2026-03-25 02:46:17.669642 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2026-03-25 02:46:17.669652 | orchestrator | Wednesday 25 March 2026 02:46:14 +0000 (0:00:02.423) 0:00:11.795 ******* 2026-03-25 02:46:17.669660 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:46:17.669669 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:46:17.669677 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:46:17.669685 | orchestrator | skipping: [testbed-node-3] 2026-03-25 02:46:17.669702 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:46:17.669710 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:46:17.669718 | orchestrator | 2026-03-25 02:46:17.669726 | orchestrator | TASK [openvswitch : Check openvswitch containers] ****************************** 2026-03-25 02:46:17.669734 | orchestrator | Wednesday 25 March 2026 02:46:15 +0000 (0:00:01.001) 0:00:12.797 ******* 2026-03-25 02:46:17.669742 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-25 02:46:17.669752 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-25 02:46:17.669772 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 
'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-25 02:46:17.669780 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-25 02:46:17.669796 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-25 02:46:43.238279 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 
'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-25 02:46:43.238383 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-25 02:46:43.238392 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-25 
02:46:43.238425 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-25 02:46:43.238431 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-25 02:46:43.238450 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-25 02:46:43.238456 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-25 02:46:43.238462 | orchestrator | 2026-03-25 02:46:43.238468 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-03-25 02:46:43.238474 | orchestrator | Wednesday 25 March 2026 02:46:17 +0000 (0:00:01.801) 0:00:14.598 ******* 2026-03-25 02:46:43.238479 | orchestrator | 2026-03-25 02:46:43.238484 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-03-25 02:46:43.238489 | orchestrator | Wednesday 25 March 2026 02:46:18 +0000 (0:00:00.378) 0:00:14.977 ******* 2026-03-25 02:46:43.238499 | orchestrator | 2026-03-25 02:46:43.238505 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-03-25 02:46:43.238513 | orchestrator | Wednesday 25 March 2026 02:46:18 +0000 (0:00:00.164) 0:00:15.142 ******* 2026-03-25 02:46:43.238525 | orchestrator | 2026-03-25 02:46:43.238535 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 
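The per-container healthcheck dicts recorded in the items above (`interval`, `retries`, `start_period`, `test`, `timeout`) follow Docker's healthcheck schema, with the test given in `CMD-SHELL` form. A minimal sketch of how such a dict maps onto `docker run` health flags (the helper is hypothetical, not part of kolla-ansible; the plain numbers in the log are seconds, so the `s` suffix Docker expects is appended):

```python
def healthcheck_flags(hc: dict) -> list[str]:
    """Translate a kolla-style healthcheck dict into docker run CLI flags.

    Assumes the dict shape seen in the log above: string durations in
    seconds and a ['CMD-SHELL', '<command>'] test list.
    """
    assert hc["test"][0] == "CMD-SHELL"  # shell-form test, as logged
    return [
        f"--health-cmd={hc['test'][1]}",
        f"--health-interval={hc['interval']}s",
        f"--health-retries={hc['retries']}",
        f"--health-start-period={hc['start_period']}s",
        f"--health-timeout={hc['timeout']}s",
    ]

# Example: the openvswitch-db-server healthcheck from the log
hc = {
    "interval": "30", "retries": "3", "start_period": "5",
    "test": ["CMD-SHELL", "ovsdb-client list-dbs"], "timeout": "30",
}
print(healthcheck_flags(hc))
```

This is only a reading aid for the logged structure; kolla-ansible applies these values through its own container module rather than the Docker CLI.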
2026-03-25 02:46:43.238542 | orchestrator | Wednesday 25 March 2026 02:46:18 +0000 (0:00:00.197) 0:00:15.339 ******* 2026-03-25 02:46:43.238550 | orchestrator | 2026-03-25 02:46:43.238557 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-03-25 02:46:43.238619 | orchestrator | Wednesday 25 March 2026 02:46:18 +0000 (0:00:00.218) 0:00:15.557 ******* 2026-03-25 02:46:43.238627 | orchestrator | 2026-03-25 02:46:43.238635 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-03-25 02:46:43.238643 | orchestrator | Wednesday 25 March 2026 02:46:18 +0000 (0:00:00.144) 0:00:15.702 ******* 2026-03-25 02:46:43.238651 | orchestrator | 2026-03-25 02:46:43.238658 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2026-03-25 02:46:43.238662 | orchestrator | Wednesday 25 March 2026 02:46:18 +0000 (0:00:00.143) 0:00:15.846 ******* 2026-03-25 02:46:43.238667 | orchestrator | changed: [testbed-node-0] 2026-03-25 02:46:43.238674 | orchestrator | changed: [testbed-node-3] 2026-03-25 02:46:43.238678 | orchestrator | changed: [testbed-node-2] 2026-03-25 02:46:43.238683 | orchestrator | changed: [testbed-node-1] 2026-03-25 02:46:43.238688 | orchestrator | changed: [testbed-node-4] 2026-03-25 02:46:43.238692 | orchestrator | changed: [testbed-node-5] 2026-03-25 02:46:43.238697 | orchestrator | 2026-03-25 02:46:43.238702 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] *** 2026-03-25 02:46:43.238708 | orchestrator | Wednesday 25 March 2026 02:46:28 +0000 (0:00:09.104) 0:00:24.951 ******* 2026-03-25 02:46:43.238717 | orchestrator | ok: [testbed-node-0] 2026-03-25 02:46:43.238724 | orchestrator | ok: [testbed-node-1] 2026-03-25 02:46:43.238728 | orchestrator | ok: [testbed-node-2] 2026-03-25 02:46:43.238733 | orchestrator | ok: [testbed-node-3] 2026-03-25 02:46:43.238738 | orchestrator | ok: 
[testbed-node-4] 2026-03-25 02:46:43.238742 | orchestrator | ok: [testbed-node-5] 2026-03-25 02:46:43.238747 | orchestrator | 2026-03-25 02:46:43.238752 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2026-03-25 02:46:43.238757 | orchestrator | Wednesday 25 March 2026 02:46:29 +0000 (0:00:01.126) 0:00:26.077 ******* 2026-03-25 02:46:43.238762 | orchestrator | changed: [testbed-node-1] 2026-03-25 02:46:43.238767 | orchestrator | changed: [testbed-node-3] 2026-03-25 02:46:43.238772 | orchestrator | changed: [testbed-node-4] 2026-03-25 02:46:43.238777 | orchestrator | changed: [testbed-node-5] 2026-03-25 02:46:43.238782 | orchestrator | changed: [testbed-node-0] 2026-03-25 02:46:43.238786 | orchestrator | changed: [testbed-node-2] 2026-03-25 02:46:43.238791 | orchestrator | 2026-03-25 02:46:43.238796 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ******************** 2026-03-25 02:46:43.238800 | orchestrator | Wednesday 25 March 2026 02:46:36 +0000 (0:00:07.767) 0:00:33.844 ******* 2026-03-25 02:46:43.238805 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'}) 2026-03-25 02:46:43.238811 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'}) 2026-03-25 02:46:43.238816 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'}) 2026-03-25 02:46:43.238820 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'}) 2026-03-25 02:46:43.238825 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'}) 2026-03-25 02:46:43.238830 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'}) 2026-03-25 
02:46:43.238835 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'}) 2026-03-25 02:46:43.238850 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'}) 2026-03-25 02:46:56.547445 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'}) 2026-03-25 02:46:56.547547 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'}) 2026-03-25 02:46:56.547557 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'}) 2026-03-25 02:46:56.547565 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'}) 2026-03-25 02:46:56.547572 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-03-25 02:46:56.547579 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-03-25 02:46:56.547586 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-03-25 02:46:56.547593 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-03-25 02:46:56.547643 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-03-25 02:46:56.547652 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-03-25 02:46:56.547660 | orchestrator | 2026-03-25 02:46:56.547668 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] ********************* 
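Each item in the "Set system-id, hostname and hw-offload" task above names a column (`external_ids` or `other_config`), a key, and a value on the Open_vSwitch table, with `state: absent` meaning the key is removed. A sketch of the `ovs-vsctl` invocations this implies (command shapes are an assumption based on standard `ovs-vsctl` usage, not taken from the role's source):

```python
def ovs_vsctl_cmd(item: dict) -> list[str]:
    """Build the ovs-vsctl command implied by one task item.

    Items look like {'col': ..., 'name': ..., 'value': ...} with an
    optional 'state': 'absent' (e.g. hw-offload in the log above).
    """
    col, name, value = item["col"], item["name"], item["value"]
    if item.get("state") == "absent":
        # Remove the key from the column if present (idempotent: 'ok' in the log)
        return ["ovs-vsctl", "remove", "Open_vSwitch", ".", col, name]
    # Set the key on the single Open_vSwitch record ('.')
    return ["ovs-vsctl", "set", "Open_vSwitch", ".", f"{col}:{name}={value}"]

# The three item shapes seen per node in the log
items = [
    {"col": "external_ids", "name": "system-id", "value": "testbed-node-0"},
    {"col": "external_ids", "name": "hostname", "value": "testbed-node-0"},
    {"col": "other_config", "name": "hw-offload", "value": True, "state": "absent"},
]
for it in items:
    print(" ".join(ovs_vsctl_cmd(it)))
```

Setting `external_ids:system-id` is what lets the OVN side (deployed in the following play) identify each chassis by hostname.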
2026-03-25 02:46:56.547677 | orchestrator | Wednesday 25 March 2026 02:46:43 +0000 (0:00:06.229) 0:00:40.074 ******* 2026-03-25 02:46:56.547686 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)  2026-03-25 02:46:56.547693 | orchestrator | skipping: [testbed-node-3] 2026-03-25 02:46:56.547702 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)  2026-03-25 02:46:56.547709 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:46:56.547717 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)  2026-03-25 02:46:56.547724 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:46:56.547731 | orchestrator | changed: [testbed-node-0] => (item=br-ex) 2026-03-25 02:46:56.547739 | orchestrator | changed: [testbed-node-1] => (item=br-ex) 2026-03-25 02:46:56.547746 | orchestrator | changed: [testbed-node-2] => (item=br-ex) 2026-03-25 02:46:56.547753 | orchestrator | 2026-03-25 02:46:56.547761 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] ********************* 2026-03-25 02:46:56.547769 | orchestrator | Wednesday 25 March 2026 02:46:45 +0000 (0:00:02.367) 0:00:42.441 ******* 2026-03-25 02:46:56.547776 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])  2026-03-25 02:46:56.547783 | orchestrator | skipping: [testbed-node-3] 2026-03-25 02:46:56.547790 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])  2026-03-25 02:46:56.547798 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:46:56.547805 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])  2026-03-25 02:46:56.547813 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:46:56.547820 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0']) 2026-03-25 02:46:56.547828 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0']) 2026-03-25 02:46:56.547850 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0']) 2026-03-25 02:46:56.547858 | orchestrator 
| 2026-03-25 02:46:56.547866 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2026-03-25 02:46:56.547874 | orchestrator | Wednesday 25 March 2026 02:46:48 +0000 (0:00:03.130) 0:00:45.572 ******* 2026-03-25 02:46:56.547881 | orchestrator | changed: [testbed-node-0] 2026-03-25 02:46:56.547889 | orchestrator | changed: [testbed-node-1] 2026-03-25 02:46:56.547917 | orchestrator | changed: [testbed-node-2] 2026-03-25 02:46:56.547925 | orchestrator | changed: [testbed-node-3] 2026-03-25 02:46:56.547933 | orchestrator | changed: [testbed-node-4] 2026-03-25 02:46:56.547940 | orchestrator | changed: [testbed-node-5] 2026-03-25 02:46:56.547947 | orchestrator | 2026-03-25 02:46:56.547954 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-25 02:46:56.547963 | orchestrator | testbed-node-0 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-25 02:46:56.547971 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-25 02:46:56.547978 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-25 02:46:56.547985 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-25 02:46:56.547991 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-25 02:46:56.547998 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-25 02:46:56.548004 | orchestrator | 2026-03-25 02:46:56.548010 | orchestrator | 2026-03-25 02:46:56.548017 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-25 02:46:56.548026 | orchestrator | Wednesday 25 March 2026 02:46:56 +0000 (0:00:07.336) 0:00:52.908 ******* 2026-03-25 02:46:56.548050 | 
orchestrator | =============================================================================== 2026-03-25 02:46:56.548060 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 15.10s 2026-03-25 02:46:56.548069 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------- 9.10s 2026-03-25 02:46:56.548077 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 6.23s 2026-03-25 02:46:56.548085 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 3.13s 2026-03-25 02:46:56.548094 | orchestrator | openvswitch : Copying over config.json files for services --------------- 2.42s 2026-03-25 02:46:56.548102 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.37s 2026-03-25 02:46:56.548110 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 1.80s 2026-03-25 02:46:56.548118 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 1.49s 2026-03-25 02:46:56.548126 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 1.46s 2026-03-25 02:46:56.548135 | orchestrator | module-load : Drop module persistence ----------------------------------- 1.37s 2026-03-25 02:46:56.548143 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 1.25s 2026-03-25 02:46:56.548151 | orchestrator | module-load : Load modules ---------------------------------------------- 1.23s 2026-03-25 02:46:56.548159 | orchestrator | openvswitch : include_tasks --------------------------------------------- 1.22s 2026-03-25 02:46:56.548167 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 1.13s 2026-03-25 02:46:56.548174 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 1.00s 2026-03-25 02:46:56.548181 | orchestrator | 
openvswitch : Create /run/openvswitch directory on host ----------------- 0.85s 2026-03-25 02:46:56.548188 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.78s 2026-03-25 02:46:56.548196 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.70s 2026-03-25 02:46:59.272047 | orchestrator | 2026-03-25 02:46:59 | INFO  | Task ad04941a-b007-484c-a716-31308e9e7222 (ovn) was prepared for execution. 2026-03-25 02:46:59.272156 | orchestrator | 2026-03-25 02:46:59 | INFO  | It takes a moment until task ad04941a-b007-484c-a716-31308e9e7222 (ovn) has been started and output is visible here. 2026-03-25 02:47:11.031395 | orchestrator | 2026-03-25 02:47:11.031501 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-25 02:47:11.031511 | orchestrator | 2026-03-25 02:47:11.031519 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-25 02:47:11.031526 | orchestrator | Wednesday 25 March 2026 02:47:03 +0000 (0:00:00.181) 0:00:00.181 ******* 2026-03-25 02:47:11.031533 | orchestrator | ok: [testbed-node-3] 2026-03-25 02:47:11.031541 | orchestrator | ok: [testbed-node-4] 2026-03-25 02:47:11.031547 | orchestrator | ok: [testbed-node-5] 2026-03-25 02:47:11.031554 | orchestrator | ok: [testbed-node-0] 2026-03-25 02:47:11.031561 | orchestrator | ok: [testbed-node-1] 2026-03-25 02:47:11.031568 | orchestrator | ok: [testbed-node-2] 2026-03-25 02:47:11.031574 | orchestrator | 2026-03-25 02:47:11.031581 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-25 02:47:11.031588 | orchestrator | Wednesday 25 March 2026 02:47:04 +0000 (0:00:00.793) 0:00:00.974 ******* 2026-03-25 02:47:11.031609 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2026-03-25 02:47:11.031616 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2026-03-25 
02:47:11.031623 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2026-03-25 02:47:11.031630 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2026-03-25 02:47:11.031636 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2026-03-25 02:47:11.031722 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2026-03-25 02:47:11.031729 | orchestrator | 2026-03-25 02:47:11.031736 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2026-03-25 02:47:11.031743 | orchestrator | 2026-03-25 02:47:11.031750 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2026-03-25 02:47:11.031756 | orchestrator | Wednesday 25 March 2026 02:47:05 +0000 (0:00:00.899) 0:00:01.874 ******* 2026-03-25 02:47:11.031764 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-25 02:47:11.031772 | orchestrator | 2026-03-25 02:47:11.031779 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2026-03-25 02:47:11.031785 | orchestrator | Wednesday 25 March 2026 02:47:06 +0000 (0:00:01.294) 0:00:03.168 ******* 2026-03-25 02:47:11.031793 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 02:47:11.031802 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 02:47:11.031809 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 02:47:11.031815 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 02:47:11.031843 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 02:47:11.031868 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 02:47:11.031875 | orchestrator | 2026-03-25 02:47:11.031882 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2026-03-25 02:47:11.031888 | orchestrator | Wednesday 25 March 2026 02:47:08 +0000 (0:00:01.381) 0:00:04.550 ******* 2026-03-25 02:47:11.031902 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 02:47:11.031909 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 02:47:11.031915 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 02:47:11.031921 | orchestrator | changed: [testbed-node-4] => (item={'key': 
'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 02:47:11.031927 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 02:47:11.031935 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 02:47:11.031947 | orchestrator | 2026-03-25 02:47:11.031954 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2026-03-25 02:47:11.031962 | orchestrator | Wednesday 25 March 2026 02:47:09 +0000 (0:00:01.482) 0:00:06.032 ******* 2026-03-25 02:47:11.031967 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', 
'/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 02:47:11.031974 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 02:47:11.031987 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 02:47:32.239493 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 02:47:32.239641 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 02:47:32.239660 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 02:47:32.239672 | orchestrator | 2026-03-25 02:47:32.239685 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2026-03-25 02:47:32.239748 | orchestrator | Wednesday 25 March 2026 02:47:11 +0000 (0:00:01.169) 0:00:07.202 ******* 2026-03-25 02:47:32.239762 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 02:47:32.239774 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 02:47:32.239812 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 02:47:32.239823 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 02:47:32.239835 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 02:47:32.239866 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 02:47:32.239878 | orchestrator | 2026-03-25 02:47:32.239889 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************ 2026-03-25 02:47:32.239901 | orchestrator | Wednesday 25 March 2026 02:47:12 +0000 (0:00:01.454) 0:00:08.657 ******* 
2026-03-25 02:47:32.239920 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 02:47:32.239932 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 02:47:32.239943 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 02:47:32.239954 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 02:47:32.239974 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 02:47:32.239985 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 02:47:32.239996 | orchestrator | 2026-03-25 02:47:32.240009 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2026-03-25 02:47:32.240023 | orchestrator | Wednesday 25 March 2026 02:47:13 +0000 (0:00:01.421) 0:00:10.078 ******* 2026-03-25 02:47:32.240036 | orchestrator | changed: [testbed-node-3] 2026-03-25 02:47:32.240050 | orchestrator | changed: [testbed-node-4] 2026-03-25 02:47:32.240062 | orchestrator | changed: [testbed-node-5] 2026-03-25 02:47:32.240075 | orchestrator | changed: [testbed-node-0] 2026-03-25 02:47:32.240087 | orchestrator | changed: [testbed-node-1] 2026-03-25 02:47:32.240099 | orchestrator | changed: [testbed-node-2] 2026-03-25 02:47:32.240111 | orchestrator | 2026-03-25 02:47:32.240124 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2026-03-25 02:47:32.240137 | orchestrator | Wednesday 25 March 2026 02:47:16 +0000 (0:00:02.207) 0:00:12.285 ******* 2026-03-25 02:47:32.240149 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 
2026-03-25 02:47:32.240161 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2026-03-25 02:47:32.240172 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2026-03-25 02:47:32.240183 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2026-03-25 02:47:32.240194 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2026-03-25 02:47:32.240204 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2026-03-25 02:47:32.240222 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-25 02:48:12.114523 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-25 02:48:12.114651 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-25 02:48:12.114674 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-25 02:48:12.114681 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-25 02:48:12.114687 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-25 02:48:12.114693 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-03-25 02:48:12.114701 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-03-25 02:48:12.114725 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-03-25 02:48:12.114731 | 
orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-03-25 02:48:12.114737 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-03-25 02:48:12.114742 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-03-25 02:48:12.114749 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-25 02:48:12.114756 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-25 02:48:12.114761 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-25 02:48:12.114767 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-25 02:48:12.114773 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-25 02:48:12.114779 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-25 02:48:12.114785 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-25 02:48:12.114791 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-25 02:48:12.114833 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-25 02:48:12.114844 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-25 02:48:12.114852 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 
'value': '60'}) 2026-03-25 02:48:12.114858 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-25 02:48:12.114864 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-25 02:48:12.114870 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-25 02:48:12.114876 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-25 02:48:12.114882 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-25 02:48:12.114887 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-25 02:48:12.114893 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-25 02:48:12.114899 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-03-25 02:48:12.114905 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-03-25 02:48:12.114911 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-03-25 02:48:12.114917 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-03-25 02:48:12.114922 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-03-25 02:48:12.114928 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-03-25 02:48:12.114934 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 
'present'}) 2026-03-25 02:48:12.114968 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 2026-03-25 02:48:12.115016 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2026-03-25 02:48:12.115036 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2026-03-25 02:48:12.115048 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2026-03-25 02:48:12.115058 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2026-03-25 02:48:12.115068 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-03-25 02:48:12.115075 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-03-25 02:48:12.115082 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-03-25 02:48:12.115088 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-03-25 02:48:12.115095 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-03-25 02:48:12.115102 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-03-25 02:48:12.115108 | orchestrator | 2026-03-25 02:48:12.115116 | orchestrator | TASK [ovn-controller : Flush handlers] 
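The "Configure OVN in OVSDB" task above sets per-chassis options (ovn-encap-ip, ovn-encap-type, ovn-remote, probe intervals, bridge/CMS mappings) as external_ids on each node's local Open vSwitch instance. A minimal sketch of what that amounts to, using the three controller IPs seen in the log — the manual `ovs-vsctl` invocations are illustrative assumptions (the role normally runs them inside the kolla Open vSwitch container, not by hand):

```shell
# Compose the ovn-remote value from the three control nodes in the log
NB_HOSTS="192.168.16.10 192.168.16.11 192.168.16.12"
OVN_REMOTE=$(for ip in $NB_HOSTS; do printf 'tcp:%s:6642,' "$ip"; done)
OVN_REMOTE=${OVN_REMOTE%,}   # strip the trailing comma
echo "$OVN_REMOTE"
# -> tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642

# Illustrative equivalents of the settings the task applies (values for
# testbed-node-3 from the log; do not run these by hand on a managed node):
# ovs-vsctl set open_vswitch . external_ids:ovn-encap-ip=192.168.16.13
# ovs-vsctl set open_vswitch . external_ids:ovn-encap-type=geneve
# ovs-vsctl set open_vswitch . external_ids:ovn-remote="$OVN_REMOTE"
# ovs-vsctl set open_vswitch . external_ids:ovn-remote-probe-interval=60000
# ovs-vsctl set open_vswitch . external_ids:ovn-openflow-probe-interval=60
```

Note the asymmetry visible in the log: only the control nodes (testbed-node-0/1/2) get ovn-bridge-mappings and enable-chassis-as-gw in ovn-cms-options, while the compute nodes (testbed-node-3/4/5) instead get ovn-chassis-mac-mappings and have the gateway-related keys set absent.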
***************************************** 2026-03-25 02:48:12.115133 | orchestrator | Wednesday 25 March 2026 02:47:31 +0000 (0:00:15.474) 0:00:27.760 ******* 2026-03-25 02:48:12.115147 | orchestrator | 2026-03-25 02:48:12.115154 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-25 02:48:12.115161 | orchestrator | Wednesday 25 March 2026 02:47:31 +0000 (0:00:00.274) 0:00:28.035 ******* 2026-03-25 02:48:12.115167 | orchestrator | 2026-03-25 02:48:12.115173 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-25 02:48:12.115180 | orchestrator | Wednesday 25 March 2026 02:47:31 +0000 (0:00:00.070) 0:00:28.105 ******* 2026-03-25 02:48:12.115186 | orchestrator | 2026-03-25 02:48:12.115192 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-25 02:48:12.115199 | orchestrator | Wednesday 25 March 2026 02:47:31 +0000 (0:00:00.074) 0:00:28.180 ******* 2026-03-25 02:48:12.115205 | orchestrator | 2026-03-25 02:48:12.115211 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-25 02:48:12.115217 | orchestrator | Wednesday 25 March 2026 02:47:32 +0000 (0:00:00.078) 0:00:28.258 ******* 2026-03-25 02:48:12.115222 | orchestrator | 2026-03-25 02:48:12.115228 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-25 02:48:12.115234 | orchestrator | Wednesday 25 March 2026 02:47:32 +0000 (0:00:00.072) 0:00:28.331 ******* 2026-03-25 02:48:12.115239 | orchestrator | 2026-03-25 02:48:12.115245 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] *********************** 2026-03-25 02:48:12.115251 | orchestrator | Wednesday 25 March 2026 02:47:32 +0000 (0:00:00.073) 0:00:28.404 ******* 2026-03-25 02:48:12.115257 | orchestrator | ok: [testbed-node-5] 2026-03-25 02:48:12.115264 | orchestrator | ok: 
[testbed-node-4] 2026-03-25 02:48:12.115269 | orchestrator | ok: [testbed-node-3] 2026-03-25 02:48:12.115275 | orchestrator | ok: [testbed-node-1] 2026-03-25 02:48:12.115281 | orchestrator | ok: [testbed-node-0] 2026-03-25 02:48:12.115286 | orchestrator | ok: [testbed-node-2] 2026-03-25 02:48:12.115292 | orchestrator | 2026-03-25 02:48:12.115297 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2026-03-25 02:48:12.115303 | orchestrator | Wednesday 25 March 2026 02:47:33 +0000 (0:00:01.486) 0:00:29.890 ******* 2026-03-25 02:48:12.115316 | orchestrator | changed: [testbed-node-0] 2026-03-25 02:48:12.115322 | orchestrator | changed: [testbed-node-4] 2026-03-25 02:48:12.115327 | orchestrator | changed: [testbed-node-1] 2026-03-25 02:48:12.115333 | orchestrator | changed: [testbed-node-2] 2026-03-25 02:48:12.115338 | orchestrator | changed: [testbed-node-3] 2026-03-25 02:48:12.115344 | orchestrator | changed: [testbed-node-5] 2026-03-25 02:48:12.115350 | orchestrator | 2026-03-25 02:48:12.115355 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2026-03-25 02:48:12.115361 | orchestrator | 2026-03-25 02:48:12.115367 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-03-25 02:48:12.115373 | orchestrator | Wednesday 25 March 2026 02:48:09 +0000 (0:00:35.877) 0:01:05.768 ******* 2026-03-25 02:48:12.115378 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-25 02:48:12.115384 | orchestrator | 2026-03-25 02:48:12.115390 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-03-25 02:48:12.115395 | orchestrator | Wednesday 25 March 2026 02:48:10 +0000 (0:00:00.851) 0:01:06.620 ******* 2026-03-25 02:48:12.115401 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, 
testbed-node-1, testbed-node-2 2026-03-25 02:48:12.115407 | orchestrator | 2026-03-25 02:48:12.115413 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2026-03-25 02:48:12.115418 | orchestrator | Wednesday 25 March 2026 02:48:11 +0000 (0:00:00.590) 0:01:07.210 ******* 2026-03-25 02:48:12.115424 | orchestrator | ok: [testbed-node-0] 2026-03-25 02:48:12.115430 | orchestrator | ok: [testbed-node-1] 2026-03-25 02:48:12.115435 | orchestrator | ok: [testbed-node-2] 2026-03-25 02:48:12.115441 | orchestrator | 2026-03-25 02:48:12.115447 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2026-03-25 02:48:12.115459 | orchestrator | Wednesday 25 March 2026 02:48:12 +0000 (0:00:01.073) 0:01:08.284 ******* 2026-03-25 02:48:24.701460 | orchestrator | ok: [testbed-node-0] 2026-03-25 02:48:24.701581 | orchestrator | ok: [testbed-node-1] 2026-03-25 02:48:24.701596 | orchestrator | ok: [testbed-node-2] 2026-03-25 02:48:24.701608 | orchestrator | 2026-03-25 02:48:24.701620 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2026-03-25 02:48:24.701650 | orchestrator | Wednesday 25 March 2026 02:48:12 +0000 (0:00:00.371) 0:01:08.656 ******* 2026-03-25 02:48:24.701662 | orchestrator | ok: [testbed-node-0] 2026-03-25 02:48:24.701674 | orchestrator | ok: [testbed-node-1] 2026-03-25 02:48:24.701685 | orchestrator | ok: [testbed-node-2] 2026-03-25 02:48:24.701696 | orchestrator | 2026-03-25 02:48:24.701707 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2026-03-25 02:48:24.701717 | orchestrator | Wednesday 25 March 2026 02:48:12 +0000 (0:00:00.409) 0:01:09.066 ******* 2026-03-25 02:48:24.701728 | orchestrator | ok: [testbed-node-0] 2026-03-25 02:48:24.701739 | orchestrator | ok: [testbed-node-1] 2026-03-25 02:48:24.701750 | orchestrator | ok: [testbed-node-2] 2026-03-25 02:48:24.701761 | orchestrator | 
2026-03-25 02:48:24.701773 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] ******* 2026-03-25 02:48:24.701786 | orchestrator | Wednesday 25 March 2026 02:48:13 +0000 (0:00:00.368) 0:01:09.435 ******* 2026-03-25 02:48:24.701798 | orchestrator | ok: [testbed-node-0] 2026-03-25 02:48:24.701810 | orchestrator | ok: [testbed-node-1] 2026-03-25 02:48:24.701823 | orchestrator | ok: [testbed-node-2] 2026-03-25 02:48:24.701914 | orchestrator | 2026-03-25 02:48:24.701927 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************ 2026-03-25 02:48:24.701940 | orchestrator | Wednesday 25 March 2026 02:48:13 +0000 (0:00:00.594) 0:01:10.029 ******* 2026-03-25 02:48:24.701952 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:48:24.701964 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:48:24.701976 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:48:24.701990 | orchestrator | 2026-03-25 02:48:24.702004 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] ***************************** 2026-03-25 02:48:24.702106 | orchestrator | Wednesday 25 March 2026 02:48:14 +0000 (0:00:00.367) 0:01:10.396 ******* 2026-03-25 02:48:24.702120 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:48:24.702132 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:48:24.702143 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:48:24.702154 | orchestrator | 2026-03-25 02:48:24.702166 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] ************* 2026-03-25 02:48:24.702177 | orchestrator | Wednesday 25 March 2026 02:48:14 +0000 (0:00:00.351) 0:01:10.748 ******* 2026-03-25 02:48:24.702189 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:48:24.702203 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:48:24.702213 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:48:24.702224 | orchestrator | 2026-03-25 
02:48:24.702234 | orchestrator | TASK [ovn-db : Get OVN NB database information] ********************************
2026-03-25 02:48:24.702245 | orchestrator | Wednesday 25 March 2026 02:48:14 +0000 (0:00:00.351) 0:01:11.099 *******
2026-03-25 02:48:24.702257 | orchestrator | skipping: [testbed-node-0]
2026-03-25 02:48:24.702268 | orchestrator | skipping: [testbed-node-1]
2026-03-25 02:48:24.702280 | orchestrator | skipping: [testbed-node-2]
2026-03-25 02:48:24.702290 | orchestrator |
2026-03-25 02:48:24.702301 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] **************
2026-03-25 02:48:24.702313 | orchestrator | Wednesday 25 March 2026 02:48:15 +0000 (0:00:00.330) 0:01:11.430 *******
2026-03-25 02:48:24.702323 | orchestrator | skipping: [testbed-node-0]
2026-03-25 02:48:24.702334 | orchestrator | skipping: [testbed-node-1]
2026-03-25 02:48:24.702344 | orchestrator | skipping: [testbed-node-2]
2026-03-25 02:48:24.702354 | orchestrator |
2026-03-25 02:48:24.702382 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] *****************
2026-03-25 02:48:24.702404 | orchestrator | Wednesday 25 March 2026 02:48:15 +0000 (0:00:00.655) 0:01:12.085 *******
2026-03-25 02:48:24.702415 | orchestrator | skipping: [testbed-node-0]
2026-03-25 02:48:24.702425 | orchestrator | skipping: [testbed-node-1]
2026-03-25 02:48:24.702436 | orchestrator | skipping: [testbed-node-2]
2026-03-25 02:48:24.702447 | orchestrator |
2026-03-25 02:48:24.702459 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************
2026-03-25 02:48:24.702469 | orchestrator | Wednesday 25 March 2026 02:48:16 +0000 (0:00:00.382) 0:01:12.468 *******
2026-03-25 02:48:24.702480 | orchestrator | skipping: [testbed-node-0]
2026-03-25 02:48:24.702490 | orchestrator | skipping: [testbed-node-1]
2026-03-25 02:48:24.702501 | orchestrator | skipping: [testbed-node-2]
2026-03-25 02:48:24.702514 | orchestrator |
2026-03-25 02:48:24.702525 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] *****************************
2026-03-25 02:48:24.702535 | orchestrator | Wednesday 25 March 2026 02:48:16 +0000 (0:00:00.331) 0:01:12.799 *******
2026-03-25 02:48:24.702547 | orchestrator | skipping: [testbed-node-0]
2026-03-25 02:48:24.702558 | orchestrator | skipping: [testbed-node-1]
2026-03-25 02:48:24.702569 | orchestrator | skipping: [testbed-node-2]
2026-03-25 02:48:24.702579 | orchestrator |
2026-03-25 02:48:24.702589 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] *************
2026-03-25 02:48:24.702600 | orchestrator | Wednesday 25 March 2026 02:48:16 +0000 (0:00:00.369) 0:01:13.169 *******
2026-03-25 02:48:24.702610 | orchestrator | skipping: [testbed-node-0]
2026-03-25 02:48:24.702621 | orchestrator | skipping: [testbed-node-1]
2026-03-25 02:48:24.702633 | orchestrator | skipping: [testbed-node-2]
2026-03-25 02:48:24.702643 | orchestrator |
2026-03-25 02:48:24.702654 | orchestrator | TASK [ovn-db : Get OVN SB database information] ********************************
2026-03-25 02:48:24.702665 | orchestrator | Wednesday 25 March 2026 02:48:17 +0000 (0:00:00.627) 0:01:13.797 *******
2026-03-25 02:48:24.702676 | orchestrator | skipping: [testbed-node-0]
2026-03-25 02:48:24.702687 | orchestrator | skipping: [testbed-node-1]
2026-03-25 02:48:24.702698 | orchestrator | skipping: [testbed-node-2]
2026-03-25 02:48:24.702709 | orchestrator |
2026-03-25 02:48:24.702720 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] **************
2026-03-25 02:48:24.702747 | orchestrator | Wednesday 25 March 2026 02:48:17 +0000 (0:00:00.341) 0:01:14.138 *******
2026-03-25 02:48:24.702758 | orchestrator | skipping: [testbed-node-0]
2026-03-25 02:48:24.702769 | orchestrator | skipping: [testbed-node-1]
2026-03-25 02:48:24.702779 | orchestrator | skipping: [testbed-node-2]
2026-03-25 02:48:24.702789 | orchestrator |
2026-03-25 02:48:24.702801 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] *****************
2026-03-25 02:48:24.702812 | orchestrator | Wednesday 25 March 2026 02:48:18 +0000 (0:00:00.335) 0:01:14.473 *******
2026-03-25 02:48:24.702915 | orchestrator | skipping: [testbed-node-0]
2026-03-25 02:48:24.702930 | orchestrator | skipping: [testbed-node-1]
2026-03-25 02:48:24.702940 | orchestrator | skipping: [testbed-node-2]
2026-03-25 02:48:24.702950 | orchestrator |
2026-03-25 02:48:24.702961 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2026-03-25 02:48:24.702984 | orchestrator | Wednesday 25 March 2026 02:48:18 +0000 (0:00:00.368) 0:01:14.842 *******
2026-03-25 02:48:24.702996 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-25 02:48:24.703007 | orchestrator |
2026-03-25 02:48:24.703018 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] *******************
2026-03-25 02:48:24.703030 | orchestrator | Wednesday 25 March 2026 02:48:19 +0000 (0:00:00.874) 0:01:15.717 *******
2026-03-25 02:48:24.703041 | orchestrator | ok: [testbed-node-0]
2026-03-25 02:48:24.703052 | orchestrator | ok: [testbed-node-1]
2026-03-25 02:48:24.703063 | orchestrator | ok: [testbed-node-2]
2026-03-25 02:48:24.703073 | orchestrator |
2026-03-25 02:48:24.703083 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] *******************
2026-03-25 02:48:24.703094 | orchestrator | Wednesday 25 March 2026 02:48:20 +0000 (0:00:00.489) 0:01:16.206 *******
2026-03-25 02:48:24.703105 | orchestrator | ok: [testbed-node-0]
2026-03-25 02:48:24.703115 | orchestrator | ok: [testbed-node-1]
2026-03-25 02:48:24.703125 | orchestrator | ok: [testbed-node-2]
2026-03-25 02:48:24.703136 | orchestrator |
2026-03-25 02:48:24.703146 | orchestrator | TASK [ovn-db : Check NB cluster status] ****************************************
2026-03-25 02:48:24.703157 | orchestrator | Wednesday 25 March 2026 02:48:20 +0000 (0:00:00.467) 0:01:16.674 *******
2026-03-25 02:48:24.703167 | orchestrator | skipping: [testbed-node-0]
2026-03-25 02:48:24.703178 | orchestrator | skipping: [testbed-node-1]
2026-03-25 02:48:24.703188 | orchestrator | skipping: [testbed-node-2]
2026-03-25 02:48:24.703200 | orchestrator |
2026-03-25 02:48:24.703210 | orchestrator | TASK [ovn-db : Check SB cluster status] ****************************************
2026-03-25 02:48:24.703220 | orchestrator | Wednesday 25 March 2026 02:48:20 +0000 (0:00:00.368) 0:01:17.042 *******
2026-03-25 02:48:24.703231 | orchestrator | skipping: [testbed-node-0]
2026-03-25 02:48:24.703242 | orchestrator | skipping: [testbed-node-1]
2026-03-25 02:48:24.703253 | orchestrator | skipping: [testbed-node-2]
2026-03-25 02:48:24.703263 | orchestrator |
2026-03-25 02:48:24.703276 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] ***
2026-03-25 02:48:24.703286 | orchestrator | Wednesday 25 March 2026 02:48:21 +0000 (0:00:00.651) 0:01:17.693 *******
2026-03-25 02:48:24.703296 | orchestrator | skipping: [testbed-node-0]
2026-03-25 02:48:24.703307 | orchestrator | skipping: [testbed-node-1]
2026-03-25 02:48:24.703318 | orchestrator | skipping: [testbed-node-2]
2026-03-25 02:48:24.703328 | orchestrator |
2026-03-25 02:48:24.703340 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] ***
2026-03-25 02:48:24.703350 | orchestrator | Wednesday 25 March 2026 02:48:21 +0000 (0:00:00.349) 0:01:18.043 *******
2026-03-25 02:48:24.703363 | orchestrator | skipping: [testbed-node-0]
2026-03-25 02:48:24.703377 | orchestrator | skipping: [testbed-node-1]
2026-03-25 02:48:24.703389 | orchestrator | skipping: [testbed-node-2]
2026-03-25 02:48:24.703400 | orchestrator |
2026-03-25 02:48:24.703412 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ********************
2026-03-25 02:48:24.703424 | orchestrator | Wednesday 25 March 2026 02:48:22 +0000 (0:00:00.385) 0:01:18.429 *******
2026-03-25 02:48:24.703451 | orchestrator | skipping: [testbed-node-0]
2026-03-25 02:48:24.703462 | orchestrator | skipping: [testbed-node-1]
2026-03-25 02:48:24.703473 | orchestrator | skipping: [testbed-node-2]
2026-03-25 02:48:24.703484 | orchestrator |
2026-03-25 02:48:24.703496 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ********************
2026-03-25 02:48:24.703507 | orchestrator | Wednesday 25 March 2026 02:48:22 +0000 (0:00:00.381) 0:01:18.810 *******
2026-03-25 02:48:24.703518 | orchestrator | skipping: [testbed-node-0]
2026-03-25 02:48:24.703529 | orchestrator | skipping: [testbed-node-1]
2026-03-25 02:48:24.703541 | orchestrator | skipping: [testbed-node-2]
2026-03-25 02:48:24.703552 | orchestrator |
2026-03-25 02:48:24.703563 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ******************************
2026-03-25 02:48:24.703575 | orchestrator | Wednesday 25 March 2026 02:48:23 +0000 (0:00:00.688) 0:01:19.499 *******
2026-03-25 02:48:24.703590 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 02:48:24.703604 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 02:48:24.703615 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 02:48:24.703653 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 02:48:31.029241 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 02:48:31.029376 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 02:48:31.029407 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 02:48:31.029420 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 02:48:31.029457 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 02:48:31.029470 | orchestrator |
2026-03-25 02:48:31.029483 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ********************
2026-03-25 02:48:31.029496 | orchestrator | Wednesday 25 March 2026 02:48:24 +0000 (0:00:01.373) 0:01:20.872 *******
2026-03-25 02:48:31.029509 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 02:48:31.029522 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 02:48:31.029533 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 02:48:31.029544 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 02:48:31.029589 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 02:48:31.029603 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 02:48:31.029614 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 02:48:31.029625 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 02:48:31.029647 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 02:48:31.029659 | orchestrator |
2026-03-25 02:48:31.029670 | orchestrator | TASK [ovn-db : Check ovn containers] *******************************************
2026-03-25 02:48:31.029681 | orchestrator | Wednesday 25 March 2026 02:48:28 +0000 (0:00:03.828) 0:01:24.701 *******
2026-03-25 02:48:31.029692 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 02:48:31.029703 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 02:48:31.029714 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 02:48:31.029725 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 02:48:31.029736 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 02:48:31.029762 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 02:48:55.097322 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 02:48:55.097418 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 02:48:55.097425 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 02:48:55.097430 | orchestrator |
2026-03-25 02:48:55.097435 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-03-25 02:48:55.097440 | orchestrator | Wednesday 25 March 2026 02:48:30 +0000 (0:00:02.048) 0:01:26.750 *******
2026-03-25 02:48:55.097444 | orchestrator |
2026-03-25 02:48:55.097447 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-03-25 02:48:55.097451 | orchestrator | Wednesday 25 March 2026 02:48:30 +0000 (0:00:00.076) 0:01:26.826 *******
2026-03-25 02:48:55.097455 | orchestrator |
2026-03-25 02:48:55.097459 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-03-25 02:48:55.097462 | orchestrator | Wednesday 25 March 2026 02:48:30 +0000 (0:00:00.070) 0:01:26.896 *******
2026-03-25 02:48:55.097466 | orchestrator |
2026-03-25 02:48:55.097470 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] *************************
2026-03-25 02:48:55.097473 | orchestrator | Wednesday 25 March 2026 02:48:31 +0000 (0:00:00.300) 0:01:27.197 *******
2026-03-25 02:48:55.097479 | orchestrator | changed: [testbed-node-0]
2026-03-25 02:48:55.097486 | orchestrator | changed: [testbed-node-1]
2026-03-25 02:48:55.097492 | orchestrator | changed: [testbed-node-2]
2026-03-25 02:48:55.097501 | orchestrator |
2026-03-25 02:48:55.097508 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] *************************
2026-03-25 02:48:55.097515 | orchestrator | Wednesday 25 March 2026 02:48:33 +0000 (0:00:02.670) 0:01:29.867 *******
2026-03-25 02:48:55.097520 | orchestrator | changed: [testbed-node-0]
2026-03-25 02:48:55.097526 | orchestrator | changed: [testbed-node-1]
2026-03-25 02:48:55.097531 | orchestrator | changed: [testbed-node-2]
2026-03-25 02:48:55.097536 | orchestrator |
2026-03-25 02:48:55.097542 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************
2026-03-25 02:48:55.097547 | orchestrator | Wednesday 25 March 2026 02:48:41 +0000 (0:00:07.462) 0:01:37.330 *******
2026-03-25 02:48:55.097553 | orchestrator | changed: [testbed-node-0]
2026-03-25 02:48:55.097558 | orchestrator | changed: [testbed-node-2]
2026-03-25 02:48:55.097564 | orchestrator | changed: [testbed-node-1]
2026-03-25 02:48:55.097569 | orchestrator |
2026-03-25 02:48:55.097575 | orchestrator | TASK [ovn-db : Wait for leader election] ***************************************
2026-03-25 02:48:55.097580 | orchestrator | Wednesday 25 March 2026 02:48:48 +0000 (0:00:07.202) 0:01:44.533 *******
2026-03-25 02:48:55.097585 | orchestrator | skipping: [testbed-node-0]
2026-03-25 02:48:55.097591 | orchestrator |
2026-03-25 02:48:55.097597 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ******************************
2026-03-25 02:48:55.097602 | orchestrator | Wednesday 25 March 2026 02:48:48 +0000 (0:00:00.136) 0:01:44.669 *******
2026-03-25 02:48:55.097608 | orchestrator | ok: [testbed-node-0]
2026-03-25 02:48:55.097615 | orchestrator | ok: [testbed-node-1]
2026-03-25 02:48:55.097621 | orchestrator | ok: [testbed-node-2]
2026-03-25 02:48:55.097626 | orchestrator |
2026-03-25 02:48:55.097632 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] ***************************
2026-03-25 02:48:55.097638 | orchestrator | Wednesday 25 March 2026 02:48:49 +0000 (0:00:01.029) 0:01:45.698 *******
2026-03-25 02:48:55.097643 | orchestrator | skipping: [testbed-node-1]
2026-03-25 02:48:55.097656 | orchestrator | skipping: [testbed-node-2]
2026-03-25 02:48:55.097662 | orchestrator | changed: [testbed-node-0]
2026-03-25 02:48:55.097667 | orchestrator |
2026-03-25 02:48:55.097674 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ******************************
2026-03-25 02:48:55.097680 | orchestrator | Wednesday 25 March 2026 02:48:50 +0000 (0:00:00.745) 0:01:46.263 *******
2026-03-25 02:48:55.097686 | orchestrator | ok: [testbed-node-0]
2026-03-25 02:48:55.097693 | orchestrator | ok: [testbed-node-1]
2026-03-25 02:48:55.097699 | orchestrator | ok: [testbed-node-2]
2026-03-25 02:48:55.097704 | orchestrator |
2026-03-25 02:48:55.097711 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] ***************************
2026-03-25 02:48:55.097730 | orchestrator | Wednesday 25 March 2026 02:48:50 +0000 (0:00:00.564) 0:01:47.009 *******
2026-03-25 02:48:55.097737 | orchestrator | skipping: [testbed-node-1]
2026-03-25 02:48:55.097743 | orchestrator | skipping: [testbed-node-2]
2026-03-25 02:48:55.097750 | orchestrator | changed: [testbed-node-0]
2026-03-25 02:48:55.097756 | orchestrator |
2026-03-25 02:48:55.097762 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] *********************************************
2026-03-25 02:48:55.097769 | orchestrator | Wednesday 25 March 2026 02:48:51 +0000 (0:00:00.564) 0:01:47.573 *******
2026-03-25 02:48:55.097775 | orchestrator | ok: [testbed-node-0]
2026-03-25 02:48:55.097779 | orchestrator | ok: [testbed-node-1]
2026-03-25 02:48:55.097795 | orchestrator | ok: [testbed-node-2]
2026-03-25 02:48:55.097799 | orchestrator |
2026-03-25 02:48:55.097803 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] *********************************************
2026-03-25 02:48:55.097807 | orchestrator | Wednesday 25 March 2026 02:48:52 +0000 (0:00:01.278) 0:01:48.852 *******
2026-03-25 02:48:55.097810 | orchestrator | ok: [testbed-node-0]
2026-03-25 02:48:55.097814 | orchestrator | ok: [testbed-node-1]
2026-03-25 02:48:55.097818 | orchestrator | ok: [testbed-node-2]
2026-03-25 02:48:55.097821 | orchestrator |
2026-03-25 02:48:55.097825 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] **************************************
2026-03-25 02:48:55.097829 | orchestrator | Wednesday 25 March 2026 02:48:53 +0000 (0:00:00.779) 0:01:49.631 *******
2026-03-25 02:48:55.097833 | orchestrator | ok: [testbed-node-0]
2026-03-25 02:48:55.097837 | orchestrator | ok: [testbed-node-1]
2026-03-25 02:48:55.097840 | orchestrator | ok: [testbed-node-2]
2026-03-25 02:48:55.097845 | orchestrator |
2026-03-25 02:48:55.097850 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ******************************
2026-03-25 02:48:55.097854 | orchestrator | Wednesday 25 March 2026 02:48:53 +0000 (0:00:00.342) 0:01:49.974 *******
2026-03-25 02:48:55.097860 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 02:48:55.097867 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 02:48:55.097871 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 02:48:55.097876 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 02:48:55.097885 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 02:48:55.097889 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 02:48:55.097914 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 02:48:55.097922 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 02:48:55.097932 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 02:49:01.814412 | orchestrator |
2026-03-25 02:49:01.814521 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ********************
2026-03-25 02:49:01.814533 | orchestrator | Wednesday 25 March 2026 02:48:55 +0000 (0:00:01.290) 0:01:51.265 *******
2026-03-25 02:49:01.814541 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 02:49:01.814549 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 02:49:01.814553 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 02:49:01.814557 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 02:49:01.814581 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 02:49:01.814585 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 02:49:01.814589 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 02:49:01.814593 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 02:49:01.814606 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 02:49:01.814610 | orchestrator |
2026-03-25 02:49:01.814615 | orchestrator | TASK [ovn-db : Check ovn containers] *******************************************
2026-03-25 02:49:01.814618 | orchestrator | Wednesday 25 March 2026 02:48:58 +0000 (0:00:03.670) 0:01:54.936 *******
2026-03-25 02:49:01.814635 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 02:49:01.814640 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 02:49:01.814644 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 02:49:01.814647 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 02:49:01.814657 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 02:49:01.814661 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 02:49:01.814665 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 02:49:01.814669 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 02:49:01.814675 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 02:49:01.814679 | orchestrator |
2026-03-25 02:49:01.814683 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-03-25 02:49:01.814687 | orchestrator | Wednesday 25 March 2026 02:49:01 +0000 (0:00:02.791) 0:01:57.727 *******
2026-03-25 02:49:01.814691 | orchestrator |
2026-03-25 02:49:01.814694 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-03-25 02:49:01.814698 | orchestrator | Wednesday 25 March 2026 02:49:01 +0000 (0:00:00.077) 0:01:57.804 *******
2026-03-25 02:49:01.814702 | orchestrator |
2026-03-25 02:49:01.814705 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-03-25 02:49:01.814709 | orchestrator | Wednesday 25 March 2026 02:49:01 +0000 (0:00:00.082) 0:01:57.887 *******
2026-03-25 02:49:01.814713 | orchestrator |
2026-03-25 02:49:01.814720 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] *************************
2026-03-25 02:49:26.075075 | orchestrator | Wednesday 25 March 2026 02:49:01 +0000 (0:00:00.088) 0:01:57.975 *******
2026-03-25 02:49:26.075158 | orchestrator | changed: [testbed-node-1]
2026-03-25 02:49:26.075166 | orchestrator | changed: 
[testbed-node-2]
2026-03-25 02:49:26.075171 | orchestrator |
2026-03-25 02:49:26.075176 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] *************************
2026-03-25 02:49:26.075180 | orchestrator | Wednesday 25 March 2026 02:49:08 +0000 (0:00:06.240) 0:02:04.216 *******
2026-03-25 02:49:26.075185 | orchestrator | changed: [testbed-node-1]
2026-03-25 02:49:26.075188 | orchestrator | changed: [testbed-node-2]
2026-03-25 02:49:26.075192 | orchestrator |
2026-03-25 02:49:26.075196 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************
2026-03-25 02:49:26.075219 | orchestrator | Wednesday 25 March 2026 02:49:14 +0000 (0:00:06.254) 0:02:10.470 *******
2026-03-25 02:49:26.075223 | orchestrator | changed: [testbed-node-1]
2026-03-25 02:49:26.075227 | orchestrator | changed: [testbed-node-2]
2026-03-25 02:49:26.075231 | orchestrator |
2026-03-25 02:49:26.075235 | orchestrator | TASK [ovn-db : Wait for leader election] ***************************************
2026-03-25 02:49:26.075238 | orchestrator | Wednesday 25 March 2026 02:49:20 +0000 (0:00:06.183) 0:02:16.654 *******
2026-03-25 02:49:26.075242 | orchestrator | skipping: [testbed-node-0]
2026-03-25 02:49:26.075246 | orchestrator |
2026-03-25 02:49:26.075250 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ******************************
2026-03-25 02:49:26.075254 | orchestrator | Wednesday 25 March 2026 02:49:20 +0000 (0:00:00.154) 0:02:16.808 *******
2026-03-25 02:49:26.075257 | orchestrator | ok: [testbed-node-0]
2026-03-25 02:49:26.075263 | orchestrator | ok: [testbed-node-1]
2026-03-25 02:49:26.075266 | orchestrator | ok: [testbed-node-2]
2026-03-25 02:49:26.075270 | orchestrator |
2026-03-25 02:49:26.075274 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] ***************************
2026-03-25 02:49:26.075277 | orchestrator | Wednesday 25 March 2026 02:49:21 +0000 (0:00:01.059) 0:02:17.868 *******
2026-03-25 02:49:26.075281 | orchestrator | skipping: [testbed-node-1]
2026-03-25 02:49:26.075285 | orchestrator | skipping: [testbed-node-2]
2026-03-25 02:49:26.075289 | orchestrator | changed: [testbed-node-0]
2026-03-25 02:49:26.075292 | orchestrator |
2026-03-25 02:49:26.075296 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ******************************
2026-03-25 02:49:26.075300 | orchestrator | Wednesday 25 March 2026 02:49:22 +0000 (0:00:00.657) 0:02:18.526 *******
2026-03-25 02:49:26.075304 | orchestrator | ok: [testbed-node-0]
2026-03-25 02:49:26.075308 | orchestrator | ok: [testbed-node-1]
2026-03-25 02:49:26.075312 | orchestrator | ok: [testbed-node-2]
2026-03-25 02:49:26.075315 | orchestrator |
2026-03-25 02:49:26.075319 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] ***************************
2026-03-25 02:49:26.075323 | orchestrator | Wednesday 25 March 2026 02:49:23 +0000 (0:00:00.786) 0:02:19.312 *******
2026-03-25 02:49:26.075326 | orchestrator | skipping: [testbed-node-1]
2026-03-25 02:49:26.075330 | orchestrator | skipping: [testbed-node-2]
2026-03-25 02:49:26.075334 | orchestrator | changed: [testbed-node-0]
2026-03-25 02:49:26.075338 | orchestrator |
2026-03-25 02:49:26.075341 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] *********************************************
2026-03-25 02:49:26.075345 | orchestrator | Wednesday 25 March 2026 02:49:23 +0000 (0:00:00.569) 0:02:19.882 *******
2026-03-25 02:49:26.075349 | orchestrator | ok: [testbed-node-0]
2026-03-25 02:49:26.075353 | orchestrator | ok: [testbed-node-1]
2026-03-25 02:49:26.075356 | orchestrator | ok: [testbed-node-2]
2026-03-25 02:49:26.075360 | orchestrator |
2026-03-25 02:49:26.075364 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] *********************************************
2026-03-25 02:49:26.075367 | orchestrator | Wednesday 25 March 2026 02:49:24 +0000 (0:00:01.005) 0:02:20.888 *******
2026-03-25 02:49:26.075371 | orchestrator
| ok: [testbed-node-0]
2026-03-25 02:49:26.075375 | orchestrator | ok: [testbed-node-1]
2026-03-25 02:49:26.075379 | orchestrator | ok: [testbed-node-2]
2026-03-25 02:49:26.075382 | orchestrator |
2026-03-25 02:49:26.075386 | orchestrator | PLAY RECAP *********************************************************************
2026-03-25 02:49:26.075391 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2026-03-25 02:49:26.075396 | orchestrator | testbed-node-1 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0
2026-03-25 02:49:26.075400 | orchestrator | testbed-node-2 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0
2026-03-25 02:49:26.075404 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-25 02:49:26.075413 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-25 02:49:26.075416 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-25 02:49:26.075420 | orchestrator |
2026-03-25 02:49:26.075424 | orchestrator |
2026-03-25 02:49:26.075437 | orchestrator | TASKS RECAP ********************************************************************
2026-03-25 02:49:26.075441 | orchestrator | Wednesday 25 March 2026 02:49:25 +0000 (0:00:00.900) 0:02:21.788 *******
2026-03-25 02:49:26.075444 | orchestrator | ===============================================================================
2026-03-25 02:49:26.075448 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 35.88s
2026-03-25 02:49:26.075452 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 15.47s
2026-03-25 02:49:26.075455 | orchestrator | ovn-db : Restart ovn-sb-db container ----------------------------------- 13.72s
2026-03-25 02:49:26.075459 | orchestrator | ovn-db : Restart ovn-northd container ---------------------------------- 13.39s
2026-03-25 02:49:26.075463 | orchestrator | ovn-db : Restart ovn-nb-db container ------------------------------------ 8.91s
2026-03-25 02:49:26.075479 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 3.83s
2026-03-25 02:49:26.075483 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 3.67s
2026-03-25 02:49:26.075487 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.79s
2026-03-25 02:49:26.075490 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.21s
2026-03-25 02:49:26.075494 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.05s
2026-03-25 02:49:26.075498 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 1.49s
2026-03-25 02:49:26.075504 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 1.48s
2026-03-25 02:49:26.075509 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 1.45s
2026-03-25 02:49:26.075519 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 1.42s
2026-03-25 02:49:26.075526 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 1.38s
2026-03-25 02:49:26.075531 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.37s
2026-03-25 02:49:26.075537 | orchestrator | ovn-controller : include_tasks ------------------------------------------ 1.29s
2026-03-25 02:49:26.075543 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.29s
2026-03-25 02:49:26.075549 | orchestrator | ovn-db : Wait for ovn-nb-db --------------------------------------------- 1.28s
2026-03-25 02:49:26.075555 | orchestrator | ovn-controller :
Ensuring systemd override directory exists ------------- 1.17s
2026-03-25 02:49:26.453592 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2026-03-25 02:49:26.453672 | orchestrator | + sh -c /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh
2026-03-25 02:49:28.867882 | orchestrator | 2026-03-25 02:49:28 | INFO  | Trying to run play wipe-partitions in environment custom
2026-03-25 02:49:39.106432 | orchestrator | 2026-03-25 02:49:39 | INFO  | Task 15bcd4ec-cfeb-4341-8e96-9084761df7cc (wipe-partitions) was prepared for execution.
2026-03-25 02:49:39.106529 | orchestrator | 2026-03-25 02:49:39 | INFO  | It takes a moment until task 15bcd4ec-cfeb-4341-8e96-9084761df7cc (wipe-partitions) has been started and output is visible here.
2026-03-25 02:49:52.443587 | orchestrator |
2026-03-25 02:49:52.443700 | orchestrator | PLAY [Wipe partitions] *********************************************************
2026-03-25 02:49:52.443714 | orchestrator |
2026-03-25 02:49:52.443724 | orchestrator | TASK [Find all logical devices owned by UID 167] *******************************
2026-03-25 02:49:52.443734 | orchestrator | Wednesday 25 March 2026 02:49:44 +0000 (0:00:00.150) 0:00:00.150 *******
2026-03-25 02:49:52.443769 | orchestrator | changed: [testbed-node-4]
2026-03-25 02:49:52.443780 | orchestrator | changed: [testbed-node-3]
2026-03-25 02:49:52.443788 | orchestrator | changed: [testbed-node-5]
2026-03-25 02:49:52.443797 | orchestrator |
2026-03-25 02:49:52.443823 | orchestrator | TASK [Remove all rook related logical devices] *********************************
2026-03-25 02:49:52.443833 | orchestrator | Wednesday 25 March 2026 02:49:44 +0000 (0:00:00.576) 0:00:00.727 *******
2026-03-25 02:49:52.443842 | orchestrator | skipping: [testbed-node-3]
2026-03-25 02:49:52.443850 | orchestrator | skipping: [testbed-node-4]
2026-03-25 02:49:52.443858 | orchestrator | skipping: [testbed-node-5]
2026-03-25 02:49:52.443866 | orchestrator |
2026-03-25 02:49:52.443874 | orchestrator | TASK [Find all logical devices with prefix ceph] *******************************
2026-03-25 02:49:52.443883 | orchestrator | Wednesday 25 March 2026 02:49:45 +0000 (0:00:00.474) 0:00:01.202 *******
2026-03-25 02:49:52.443891 | orchestrator | ok: [testbed-node-4]
2026-03-25 02:49:52.443901 | orchestrator | ok: [testbed-node-5]
2026-03-25 02:49:52.443920 | orchestrator | ok: [testbed-node-3]
2026-03-25 02:49:52.443928 | orchestrator |
2026-03-25 02:49:52.443937 | orchestrator | TASK [Remove all ceph related logical devices] *********************************
2026-03-25 02:49:52.443945 | orchestrator | Wednesday 25 March 2026 02:49:45 +0000 (0:00:00.565) 0:00:01.767 *******
2026-03-25 02:49:52.443954 | orchestrator | skipping: [testbed-node-3]
2026-03-25 02:49:52.443962 | orchestrator | skipping: [testbed-node-4]
2026-03-25 02:49:52.443971 | orchestrator | skipping: [testbed-node-5]
2026-03-25 02:49:52.443980 | orchestrator |
2026-03-25 02:49:52.443989 | orchestrator | TASK [Check device availability] ***********************************************
2026-03-25 02:49:52.443998 | orchestrator | Wednesday 25 March 2026 02:49:46 +0000 (0:00:00.304) 0:00:02.071 *******
2026-03-25 02:49:52.444007 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2026-03-25 02:49:52.444016 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2026-03-25 02:49:52.444072 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2026-03-25 02:49:52.444082 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2026-03-25 02:49:52.444090 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2026-03-25 02:49:52.444098 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2026-03-25 02:49:52.444122 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2026-03-25 02:49:52.444131 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2026-03-25 02:49:52.444141 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2026-03-25 02:49:52.444150 | orchestrator |
2026-03-25 02:49:52.444159 | orchestrator | TASK [Wipe partitions with wipefs] *********************************************
2026-03-25 02:49:52.444169 | orchestrator | Wednesday 25 March 2026 02:49:47 +0000 (0:00:01.178) 0:00:03.249 *******
2026-03-25 02:49:52.444178 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb)
2026-03-25 02:49:52.444188 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb)
2026-03-25 02:49:52.444197 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb)
2026-03-25 02:49:52.444207 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc)
2026-03-25 02:49:52.444216 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc)
2026-03-25 02:49:52.444224 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdc)
2026-03-25 02:49:52.444233 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd)
2026-03-25 02:49:52.444242 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd)
2026-03-25 02:49:52.444251 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd)
2026-03-25 02:49:52.444260 | orchestrator |
2026-03-25 02:49:52.444269 | orchestrator | TASK [Overwrite first 32M with zeros] ******************************************
2026-03-25 02:49:52.444277 | orchestrator | Wednesday 25 March 2026 02:49:48 +0000 (0:00:01.438) 0:00:04.688 *******
2026-03-25 02:49:52.444285 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2026-03-25 02:49:52.444294 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2026-03-25 02:49:52.444303 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2026-03-25 02:49:52.444311 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2026-03-25 02:49:52.444331 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2026-03-25 02:49:52.444340 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2026-03-25 02:49:52.444348 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2026-03-25 02:49:52.444357 | orchestrator |
changed: [testbed-node-4] => (item=/dev/sdd)
2026-03-25 02:49:52.444366 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2026-03-25 02:49:52.444375 | orchestrator |
2026-03-25 02:49:52.444382 | orchestrator | TASK [Reload udev rules] *******************************************************
2026-03-25 02:49:52.444390 | orchestrator | Wednesday 25 March 2026 02:49:50 +0000 (0:00:02.019) 0:00:06.708 *******
2026-03-25 02:49:52.444398 | orchestrator | changed: [testbed-node-3]
2026-03-25 02:49:52.444406 | orchestrator | changed: [testbed-node-4]
2026-03-25 02:49:52.444413 | orchestrator | changed: [testbed-node-5]
2026-03-25 02:49:52.444420 | orchestrator |
2026-03-25 02:49:52.444428 | orchestrator | TASK [Request device events from the kernel] ***********************************
2026-03-25 02:49:52.444435 | orchestrator | Wednesday 25 March 2026 02:49:51 +0000 (0:00:00.594) 0:00:07.303 *******
2026-03-25 02:49:52.444443 | orchestrator | changed: [testbed-node-3]
2026-03-25 02:49:52.444450 | orchestrator | changed: [testbed-node-4]
2026-03-25 02:49:52.444458 | orchestrator | changed: [testbed-node-5]
2026-03-25 02:49:52.444466 | orchestrator |
2026-03-25 02:49:52.444475 | orchestrator | PLAY RECAP *********************************************************************
2026-03-25 02:49:52.444485 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-25 02:49:52.444495 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-25 02:49:52.444526 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-25 02:49:52.444537 | orchestrator |
2026-03-25 02:49:52.444546 | orchestrator |
2026-03-25 02:49:52.444554 | orchestrator | TASKS RECAP ********************************************************************
2026-03-25 02:49:52.444563 | orchestrator | Wednesday 25 March 2026 02:49:52 +0000 (0:00:00.662) 0:00:07.965 *******
2026-03-25 02:49:52.444571 | orchestrator | ===============================================================================
2026-03-25 02:49:52.444580 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.02s
2026-03-25 02:49:52.444589 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.44s
2026-03-25 02:49:52.444597 | orchestrator | Check device availability ----------------------------------------------- 1.18s
2026-03-25 02:49:52.444606 | orchestrator | Request device events from the kernel ----------------------------------- 0.66s
2026-03-25 02:49:52.444614 | orchestrator | Reload udev rules ------------------------------------------------------- 0.59s
2026-03-25 02:49:52.444623 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.58s
2026-03-25 02:49:52.444632 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.57s
2026-03-25 02:49:52.444641 | orchestrator | Remove all rook related logical devices --------------------------------- 0.47s
2026-03-25 02:49:52.444649 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.30s
2026-03-25 02:50:05.328610 | orchestrator | 2026-03-25 02:50:05 | INFO  | Task 2123d75b-94c4-4b89-a3fe-41303241467e (facts) was prepared for execution.
2026-03-25 02:50:05.328714 | orchestrator | 2026-03-25 02:50:05 | INFO  | It takes a moment until task 2123d75b-94c4-4b89-a3fe-41303241467e (facts) has been started and output is visible here.
2026-03-25 02:50:19.029199 | orchestrator |
2026-03-25 02:50:19.029322 | orchestrator | PLAY [Apply role facts] ********************************************************
2026-03-25 02:50:19.029342 | orchestrator |
2026-03-25 02:50:19.029353 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-03-25 02:50:19.029364 | orchestrator | Wednesday 25 March 2026 02:50:10 +0000 (0:00:00.315) 0:00:00.315 *******
2026-03-25 02:50:19.029403 | orchestrator | ok: [testbed-node-0]
2026-03-25 02:50:19.029417 | orchestrator | ok: [testbed-node-1]
2026-03-25 02:50:19.029428 | orchestrator | ok: [testbed-manager]
2026-03-25 02:50:19.029439 | orchestrator | ok: [testbed-node-2]
2026-03-25 02:50:19.029448 | orchestrator | ok: [testbed-node-3]
2026-03-25 02:50:19.029454 | orchestrator | ok: [testbed-node-4]
2026-03-25 02:50:19.029460 | orchestrator | ok: [testbed-node-5]
2026-03-25 02:50:19.029467 | orchestrator |
2026-03-25 02:50:19.029473 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-03-25 02:50:19.029480 | orchestrator | Wednesday 25 March 2026 02:50:11 +0000 (0:00:01.146) 0:00:01.461 *******
2026-03-25 02:50:19.029486 | orchestrator | skipping: [testbed-manager]
2026-03-25 02:50:19.029494 | orchestrator | skipping: [testbed-node-0]
2026-03-25 02:50:19.029500 | orchestrator | skipping: [testbed-node-1]
2026-03-25 02:50:19.029506 | orchestrator | skipping: [testbed-node-2]
2026-03-25 02:50:19.029512 | orchestrator | skipping: [testbed-node-3]
2026-03-25 02:50:19.029518 | orchestrator | skipping: [testbed-node-4]
2026-03-25 02:50:19.029524 | orchestrator | skipping: [testbed-node-5]
2026-03-25 02:50:19.029530 | orchestrator |
2026-03-25 02:50:19.029536 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-03-25 02:50:19.029543 | orchestrator |
2026-03-25 02:50:19.029549 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-03-25 02:50:19.029555 | orchestrator | Wednesday 25 March 2026 02:50:12 +0000 (0:00:01.371) 0:00:02.833 *******
2026-03-25 02:50:19.029561 | orchestrator | ok: [testbed-node-2]
2026-03-25 02:50:19.029578 | orchestrator | ok: [testbed-node-0]
2026-03-25 02:50:19.029584 | orchestrator | ok: [testbed-manager]
2026-03-25 02:50:19.029590 | orchestrator | ok: [testbed-node-3]
2026-03-25 02:50:19.029603 | orchestrator | ok: [testbed-node-1]
2026-03-25 02:50:19.029609 | orchestrator | ok: [testbed-node-5]
2026-03-25 02:50:19.029615 | orchestrator | ok: [testbed-node-4]
2026-03-25 02:50:19.029621 | orchestrator |
2026-03-25 02:50:19.029627 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2026-03-25 02:50:19.029633 | orchestrator |
2026-03-25 02:50:19.029639 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2026-03-25 02:50:19.029646 | orchestrator | Wednesday 25 March 2026 02:50:17 +0000 (0:00:04.910) 0:00:07.744 *******
2026-03-25 02:50:19.029652 | orchestrator | skipping: [testbed-manager]
2026-03-25 02:50:19.029658 | orchestrator | skipping: [testbed-node-0]
2026-03-25 02:50:19.029664 | orchestrator | skipping: [testbed-node-1]
2026-03-25 02:50:19.029670 | orchestrator | skipping: [testbed-node-2]
2026-03-25 02:50:19.029676 | orchestrator | skipping: [testbed-node-3]
2026-03-25 02:50:19.029683 | orchestrator | skipping: [testbed-node-4]
2026-03-25 02:50:19.029690 | orchestrator | skipping: [testbed-node-5]
2026-03-25 02:50:19.029698 | orchestrator |
2026-03-25 02:50:19.029705 | orchestrator | PLAY RECAP *********************************************************************
2026-03-25 02:50:19.029713 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-25 02:50:19.029790 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-25 02:50:19.029803 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-25 02:50:19.029810 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-25 02:50:19.029818 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-25 02:50:19.029825 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-25 02:50:19.029840 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-25 02:50:19.029847 | orchestrator |
2026-03-25 02:50:19.029854 | orchestrator |
2026-03-25 02:50:19.029861 | orchestrator | TASKS RECAP ********************************************************************
2026-03-25 02:50:19.029869 | orchestrator | Wednesday 25 March 2026 02:50:18 +0000 (0:00:00.625) 0:00:08.369 *******
2026-03-25 02:50:19.029876 | orchestrator | ===============================================================================
2026-03-25 02:50:19.029883 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.91s
2026-03-25 02:50:19.029890 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.37s
2026-03-25 02:50:19.029898 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.15s
2026-03-25 02:50:19.029905 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.63s
2026-03-25 02:50:21.801758 | orchestrator | 2026-03-25 02:50:21 | INFO  | Task 95c2e456-5904-4e57-ba07-8edafa6e8bc9 (ceph-configure-lvm-volumes) was prepared for execution.
2026-03-25 02:50:21.801844 | orchestrator | 2026-03-25 02:50:21 | INFO  | It takes a moment until task 95c2e456-5904-4e57-ba07-8edafa6e8bc9 (ceph-configure-lvm-volumes) has been started and output is visible here.
2026-03-25 02:50:35.523728 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-03-25 02:50:35.523889 | orchestrator | 2.16.14
2026-03-25 02:50:35.523907 | orchestrator |
2026-03-25 02:50:35.523920 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2026-03-25 02:50:35.523932 | orchestrator |
2026-03-25 02:50:35.523944 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-03-25 02:50:35.523955 | orchestrator | Wednesday 25 March 2026 02:50:27 +0000 (0:00:00.429) 0:00:00.429 *******
2026-03-25 02:50:35.523967 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-03-25 02:50:35.523979 | orchestrator |
2026-03-25 02:50:35.524014 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-03-25 02:50:35.524039 | orchestrator | Wednesday 25 March 2026 02:50:27 +0000 (0:00:00.275) 0:00:00.705 *******
2026-03-25 02:50:35.524067 | orchestrator | ok: [testbed-node-3]
2026-03-25 02:50:35.524084 | orchestrator |
2026-03-25 02:50:35.524101 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-25 02:50:35.524152 | orchestrator | Wednesday 25 March 2026 02:50:27 +0000 (0:00:00.276) 0:00:00.981 *******
2026-03-25 02:50:35.524171 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2026-03-25 02:50:35.524191 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2026-03-25 02:50:35.524209 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2026-03-25 02:50:35.524227 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2026-03-25 02:50:35.524244 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2026-03-25 02:50:35.524261 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2026-03-25 02:50:35.524278 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2026-03-25 02:50:35.524294 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2026-03-25 02:50:35.524312 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2026-03-25 02:50:35.524330 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2026-03-25 02:50:35.524351 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2026-03-25 02:50:35.524372 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2026-03-25 02:50:35.524420 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2026-03-25 02:50:35.524432 | orchestrator |
2026-03-25 02:50:35.524442 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-25 02:50:35.524453 | orchestrator | Wednesday 25 March 2026 02:50:28 +0000 (0:00:00.556) 0:00:01.538 *******
2026-03-25 02:50:35.524464 | orchestrator | skipping: [testbed-node-3]
2026-03-25 02:50:35.524476 | orchestrator |
2026-03-25 02:50:35.524487 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-25 02:50:35.524498 | orchestrator | Wednesday 25 March 2026 02:50:28 +0000 (0:00:00.218) 0:00:01.756 *******
2026-03-25 02:50:35.524508 | orchestrator | skipping: [testbed-node-3]
2026-03-25 02:50:35.524519 | orchestrator |
2026-03-25 02:50:35.524530 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-25 02:50:35.524540 | orchestrator | Wednesday 25 March 2026 02:50:28 +0000 (0:00:00.218) 0:00:01.974 *******
2026-03-25 02:50:35.524551 | orchestrator | skipping: [testbed-node-3]
2026-03-25 02:50:35.524562 | orchestrator |
2026-03-25 02:50:35.524573 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-25 02:50:35.524583 | orchestrator | Wednesday 25 March 2026 02:50:29 +0000 (0:00:00.243) 0:00:02.217 *******
2026-03-25 02:50:35.524594 | orchestrator | skipping: [testbed-node-3]
2026-03-25 02:50:35.524605 | orchestrator |
2026-03-25 02:50:35.524616 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-25 02:50:35.524626 | orchestrator | Wednesday 25 March 2026 02:50:29 +0000 (0:00:00.200) 0:00:02.418 *******
2026-03-25 02:50:35.524637 | orchestrator | skipping: [testbed-node-3]
2026-03-25 02:50:35.524648 | orchestrator |
2026-03-25 02:50:35.524658 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-25 02:50:35.524669 | orchestrator | Wednesday 25 March 2026 02:50:29 +0000 (0:00:00.219) 0:00:02.637 *******
2026-03-25 02:50:35.524679 | orchestrator | skipping: [testbed-node-3]
2026-03-25 02:50:35.524690 | orchestrator |
2026-03-25 02:50:35.524700 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-25 02:50:35.524711 | orchestrator | Wednesday 25 March 2026 02:50:29 +0000 (0:00:00.220) 0:00:02.858 *******
2026-03-25 02:50:35.524721 | orchestrator | skipping: [testbed-node-3]
2026-03-25 02:50:35.524732 | orchestrator |
2026-03-25 02:50:35.524743 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-25 02:50:35.524753 | orchestrator | Wednesday 25 March 2026 02:50:30 +0000 (0:00:00.214) 0:00:03.072 *******
2026-03-25 02:50:35.524764 | orchestrator | skipping: [testbed-node-3]
2026-03-25 02:50:35.524775 | orchestrator |
2026-03-25 02:50:35.524785 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-25 02:50:35.524796 | orchestrator | Wednesday 25 March 2026 02:50:30 +0000 (0:00:00.205) 0:00:03.277 *******
2026-03-25 02:50:35.524807 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_5418d243-c22a-425d-8a7d-7c43bd549130)
2026-03-25 02:50:35.524820 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_5418d243-c22a-425d-8a7d-7c43bd549130)
2026-03-25 02:50:35.524831 | orchestrator |
2026-03-25 02:50:35.524841 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-25 02:50:35.524876 | orchestrator | Wednesday 25 March 2026 02:50:30 +0000 (0:00:00.446) 0:00:03.723 *******
2026-03-25 02:50:35.524888 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_e0cf0e31-edea-4833-ac86-8b3021cd24a1)
2026-03-25 02:50:35.524899 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_e0cf0e31-edea-4833-ac86-8b3021cd24a1)
2026-03-25 02:50:35.524910 | orchestrator |
2026-03-25 02:50:35.524920 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-25 02:50:35.524931 | orchestrator | Wednesday 25 March 2026 02:50:31 +0000 (0:00:00.719) 0:00:04.443 *******
2026-03-25 02:50:35.524950 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_eaa5e6a9-2c24-4b33-854e-103871b2e9c6)
2026-03-25 02:50:35.524970 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_eaa5e6a9-2c24-4b33-854e-103871b2e9c6)
2026-03-25 02:50:35.524981 | orchestrator |
2026-03-25 02:50:35.524992 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-25 02:50:35.525002 | orchestrator | Wednesday 25 March 2026 02:50:32 +0000 (0:00:00.736) 0:00:05.179 *******
2026-03-25 02:50:35.525013 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_99e65ea9-8a8c-4114-a95e-6d6b779e8981)
2026-03-25 02:50:35.525024 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_99e65ea9-8a8c-4114-a95e-6d6b779e8981)
2026-03-25 02:50:35.525034 | orchestrator |
2026-03-25 02:50:35.525045 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-25 02:50:35.525056 | orchestrator | Wednesday 25 March 2026 02:50:33 +0000 (0:00:00.386) 0:00:06.174 *******
2026-03-25 02:50:35.525066 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-03-25 02:50:35.525077 | orchestrator |
2026-03-25 02:50:35.525087 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-25 02:50:35.525098 | orchestrator | Wednesday 25 March 2026 02:50:33 +0000 (0:00:00.386) 0:00:06.561 *******
2026-03-25 02:50:35.525137 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2026-03-25 02:50:35.525157 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2026-03-25 02:50:35.525170 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2026-03-25 02:50:35.525181 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2026-03-25 02:50:35.525192 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2026-03-25 02:50:35.525202 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2026-03-25 02:50:35.525213 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2026-03-25 02:50:35.525224 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml
for testbed-node-3 => (item=loop7) 2026-03-25 02:50:35.525234 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2026-03-25 02:50:35.525245 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2026-03-25 02:50:35.525256 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2026-03-25 02:50:35.525267 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2026-03-25 02:50:35.525277 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2026-03-25 02:50:35.525288 | orchestrator | 2026-03-25 02:50:35.525299 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-25 02:50:35.525309 | orchestrator | Wednesday 25 March 2026 02:50:33 +0000 (0:00:00.417) 0:00:06.978 ******* 2026-03-25 02:50:35.525320 | orchestrator | skipping: [testbed-node-3] 2026-03-25 02:50:35.525331 | orchestrator | 2026-03-25 02:50:35.525342 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-25 02:50:35.525352 | orchestrator | Wednesday 25 March 2026 02:50:34 +0000 (0:00:00.235) 0:00:07.213 ******* 2026-03-25 02:50:35.525363 | orchestrator | skipping: [testbed-node-3] 2026-03-25 02:50:35.525373 | orchestrator | 2026-03-25 02:50:35.525384 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-25 02:50:35.525395 | orchestrator | Wednesday 25 March 2026 02:50:34 +0000 (0:00:00.224) 0:00:07.438 ******* 2026-03-25 02:50:35.525406 | orchestrator | skipping: [testbed-node-3] 2026-03-25 02:50:35.525416 | orchestrator | 2026-03-25 02:50:35.525427 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-25 02:50:35.525438 | orchestrator | Wednesday 25 March 2026 02:50:34 
+0000 (0:00:00.212) 0:00:07.651 ******* 2026-03-25 02:50:35.525457 | orchestrator | skipping: [testbed-node-3] 2026-03-25 02:50:35.525468 | orchestrator | 2026-03-25 02:50:35.525478 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-25 02:50:35.525489 | orchestrator | Wednesday 25 March 2026 02:50:34 +0000 (0:00:00.234) 0:00:07.886 ******* 2026-03-25 02:50:35.525500 | orchestrator | skipping: [testbed-node-3] 2026-03-25 02:50:35.525511 | orchestrator | 2026-03-25 02:50:35.525521 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-25 02:50:35.525532 | orchestrator | Wednesday 25 March 2026 02:50:35 +0000 (0:00:00.230) 0:00:08.116 ******* 2026-03-25 02:50:35.525543 | orchestrator | skipping: [testbed-node-3] 2026-03-25 02:50:35.525553 | orchestrator | 2026-03-25 02:50:35.525564 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-25 02:50:35.525575 | orchestrator | Wednesday 25 March 2026 02:50:35 +0000 (0:00:00.235) 0:00:08.352 ******* 2026-03-25 02:50:35.525586 | orchestrator | skipping: [testbed-node-3] 2026-03-25 02:50:35.525596 | orchestrator | 2026-03-25 02:50:35.525614 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-25 02:50:44.084086 | orchestrator | Wednesday 25 March 2026 02:50:35 +0000 (0:00:00.205) 0:00:08.558 ******* 2026-03-25 02:50:44.084207 | orchestrator | skipping: [testbed-node-3] 2026-03-25 02:50:44.084216 | orchestrator | 2026-03-25 02:50:44.084221 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-25 02:50:44.084226 | orchestrator | Wednesday 25 March 2026 02:50:35 +0000 (0:00:00.239) 0:00:08.797 ******* 2026-03-25 02:50:44.084230 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2026-03-25 02:50:44.084236 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2026-03-25 
02:50:44.084240 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2026-03-25 02:50:44.084257 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2026-03-25 02:50:44.084261 | orchestrator | 2026-03-25 02:50:44.084266 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-25 02:50:44.084270 | orchestrator | Wednesday 25 March 2026 02:50:36 +0000 (0:00:01.197) 0:00:09.994 ******* 2026-03-25 02:50:44.084274 | orchestrator | skipping: [testbed-node-3] 2026-03-25 02:50:44.084278 | orchestrator | 2026-03-25 02:50:44.084282 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-25 02:50:44.084285 | orchestrator | Wednesday 25 March 2026 02:50:37 +0000 (0:00:00.256) 0:00:10.251 ******* 2026-03-25 02:50:44.084289 | orchestrator | skipping: [testbed-node-3] 2026-03-25 02:50:44.084293 | orchestrator | 2026-03-25 02:50:44.084297 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-25 02:50:44.084301 | orchestrator | Wednesday 25 March 2026 02:50:37 +0000 (0:00:00.230) 0:00:10.482 ******* 2026-03-25 02:50:44.084304 | orchestrator | skipping: [testbed-node-3] 2026-03-25 02:50:44.084308 | orchestrator | 2026-03-25 02:50:44.084312 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-25 02:50:44.084316 | orchestrator | Wednesday 25 March 2026 02:50:37 +0000 (0:00:00.227) 0:00:10.710 ******* 2026-03-25 02:50:44.084320 | orchestrator | skipping: [testbed-node-3] 2026-03-25 02:50:44.084323 | orchestrator | 2026-03-25 02:50:44.084327 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-03-25 02:50:44.084331 | orchestrator | Wednesday 25 March 2026 02:50:37 +0000 (0:00:00.266) 0:00:10.976 ******* 2026-03-25 02:50:44.084335 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None}) 2026-03-25 02:50:44.084339 | 
orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None}) 2026-03-25 02:50:44.084343 | orchestrator | 2026-03-25 02:50:44.084347 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2026-03-25 02:50:44.084350 | orchestrator | Wednesday 25 March 2026 02:50:38 +0000 (0:00:00.192) 0:00:11.168 ******* 2026-03-25 02:50:44.084354 | orchestrator | skipping: [testbed-node-3] 2026-03-25 02:50:44.084358 | orchestrator | 2026-03-25 02:50:44.084361 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-03-25 02:50:44.084365 | orchestrator | Wednesday 25 March 2026 02:50:38 +0000 (0:00:00.145) 0:00:11.314 ******* 2026-03-25 02:50:44.084387 | orchestrator | skipping: [testbed-node-3] 2026-03-25 02:50:44.084391 | orchestrator | 2026-03-25 02:50:44.084395 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-03-25 02:50:44.084399 | orchestrator | Wednesday 25 March 2026 02:50:38 +0000 (0:00:00.155) 0:00:11.470 ******* 2026-03-25 02:50:44.084403 | orchestrator | skipping: [testbed-node-3] 2026-03-25 02:50:44.084406 | orchestrator | 2026-03-25 02:50:44.084410 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-03-25 02:50:44.084414 | orchestrator | Wednesday 25 March 2026 02:50:38 +0000 (0:00:00.165) 0:00:11.635 ******* 2026-03-25 02:50:44.084418 | orchestrator | ok: [testbed-node-3] 2026-03-25 02:50:44.084422 | orchestrator | 2026-03-25 02:50:44.084426 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-03-25 02:50:44.084429 | orchestrator | Wednesday 25 March 2026 02:50:38 +0000 (0:00:00.154) 0:00:11.790 ******* 2026-03-25 02:50:44.084434 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'a7f517e2-016b-5c10-ac21-20c48339115f'}}) 2026-03-25 02:50:44.084438 | orchestrator | ok: 
[testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '2eb637af-fcba-56ed-b416-856a8f376a6e'}}) 2026-03-25 02:50:44.084442 | orchestrator | 2026-03-25 02:50:44.084446 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2026-03-25 02:50:44.084450 | orchestrator | Wednesday 25 March 2026 02:50:38 +0000 (0:00:00.184) 0:00:11.975 ******* 2026-03-25 02:50:44.084454 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'a7f517e2-016b-5c10-ac21-20c48339115f'}})  2026-03-25 02:50:44.084459 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '2eb637af-fcba-56ed-b416-856a8f376a6e'}})  2026-03-25 02:50:44.084463 | orchestrator | skipping: [testbed-node-3] 2026-03-25 02:50:44.084466 | orchestrator | 2026-03-25 02:50:44.084470 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-03-25 02:50:44.084474 | orchestrator | Wednesday 25 March 2026 02:50:39 +0000 (0:00:00.415) 0:00:12.390 ******* 2026-03-25 02:50:44.084478 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'a7f517e2-016b-5c10-ac21-20c48339115f'}})  2026-03-25 02:50:44.084482 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '2eb637af-fcba-56ed-b416-856a8f376a6e'}})  2026-03-25 02:50:44.084485 | orchestrator | skipping: [testbed-node-3] 2026-03-25 02:50:44.084489 | orchestrator | 2026-03-25 02:50:44.084493 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-03-25 02:50:44.084497 | orchestrator | Wednesday 25 March 2026 02:50:39 +0000 (0:00:00.184) 0:00:12.575 ******* 2026-03-25 02:50:44.084500 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'a7f517e2-016b-5c10-ac21-20c48339115f'}})  2026-03-25 02:50:44.084516 | orchestrator | skipping: [testbed-node-3] 
=> (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '2eb637af-fcba-56ed-b416-856a8f376a6e'}})  2026-03-25 02:50:44.084520 | orchestrator | skipping: [testbed-node-3] 2026-03-25 02:50:44.084524 | orchestrator | 2026-03-25 02:50:44.084529 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-03-25 02:50:44.084532 | orchestrator | Wednesday 25 March 2026 02:50:39 +0000 (0:00:00.174) 0:00:12.750 ******* 2026-03-25 02:50:44.084536 | orchestrator | ok: [testbed-node-3] 2026-03-25 02:50:44.084540 | orchestrator | 2026-03-25 02:50:44.084544 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-03-25 02:50:44.084550 | orchestrator | Wednesday 25 March 2026 02:50:39 +0000 (0:00:00.166) 0:00:12.917 ******* 2026-03-25 02:50:44.084554 | orchestrator | ok: [testbed-node-3] 2026-03-25 02:50:44.084558 | orchestrator | 2026-03-25 02:50:44.084562 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-03-25 02:50:44.084566 | orchestrator | Wednesday 25 March 2026 02:50:40 +0000 (0:00:00.161) 0:00:13.078 ******* 2026-03-25 02:50:44.084573 | orchestrator | skipping: [testbed-node-3] 2026-03-25 02:50:44.084584 | orchestrator | 2026-03-25 02:50:44.084588 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-03-25 02:50:44.084591 | orchestrator | Wednesday 25 March 2026 02:50:40 +0000 (0:00:00.146) 0:00:13.224 ******* 2026-03-25 02:50:44.084595 | orchestrator | skipping: [testbed-node-3] 2026-03-25 02:50:44.084599 | orchestrator | 2026-03-25 02:50:44.084602 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-03-25 02:50:44.084606 | orchestrator | Wednesday 25 March 2026 02:50:40 +0000 (0:00:00.137) 0:00:13.361 ******* 2026-03-25 02:50:44.084610 | orchestrator | skipping: [testbed-node-3] 2026-03-25 02:50:44.084614 | orchestrator | 2026-03-25 
02:50:44.084618 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-03-25 02:50:44.084621 | orchestrator | Wednesday 25 March 2026 02:50:40 +0000 (0:00:00.146) 0:00:13.508 ******* 2026-03-25 02:50:44.084625 | orchestrator | ok: [testbed-node-3] => { 2026-03-25 02:50:44.084629 | orchestrator |  "ceph_osd_devices": { 2026-03-25 02:50:44.084633 | orchestrator |  "sdb": { 2026-03-25 02:50:44.084637 | orchestrator |  "osd_lvm_uuid": "a7f517e2-016b-5c10-ac21-20c48339115f" 2026-03-25 02:50:44.084640 | orchestrator |  }, 2026-03-25 02:50:44.084644 | orchestrator |  "sdc": { 2026-03-25 02:50:44.084648 | orchestrator |  "osd_lvm_uuid": "2eb637af-fcba-56ed-b416-856a8f376a6e" 2026-03-25 02:50:44.084652 | orchestrator |  } 2026-03-25 02:50:44.084656 | orchestrator |  } 2026-03-25 02:50:44.084659 | orchestrator | } 2026-03-25 02:50:44.084663 | orchestrator | 2026-03-25 02:50:44.084667 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-03-25 02:50:44.084671 | orchestrator | Wednesday 25 March 2026 02:50:40 +0000 (0:00:00.139) 0:00:13.647 ******* 2026-03-25 02:50:44.084674 | orchestrator | skipping: [testbed-node-3] 2026-03-25 02:50:44.084678 | orchestrator | 2026-03-25 02:50:44.084682 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-03-25 02:50:44.084686 | orchestrator | Wednesday 25 March 2026 02:50:40 +0000 (0:00:00.144) 0:00:13.792 ******* 2026-03-25 02:50:44.084689 | orchestrator | skipping: [testbed-node-3] 2026-03-25 02:50:44.084693 | orchestrator | 2026-03-25 02:50:44.084697 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2026-03-25 02:50:44.084701 | orchestrator | Wednesday 25 March 2026 02:50:40 +0000 (0:00:00.144) 0:00:13.937 ******* 2026-03-25 02:50:44.084704 | orchestrator | skipping: [testbed-node-3] 2026-03-25 02:50:44.084708 | orchestrator | 2026-03-25 
02:50:44.084712 | orchestrator | TASK [Print configuration data] ************************************************ 2026-03-25 02:50:44.084715 | orchestrator | Wednesday 25 March 2026 02:50:41 +0000 (0:00:00.138) 0:00:14.076 ******* 2026-03-25 02:50:44.084719 | orchestrator | changed: [testbed-node-3] => { 2026-03-25 02:50:44.084723 | orchestrator |  "_ceph_configure_lvm_config_data": { 2026-03-25 02:50:44.084727 | orchestrator |  "ceph_osd_devices": { 2026-03-25 02:50:44.084731 | orchestrator |  "sdb": { 2026-03-25 02:50:44.084734 | orchestrator |  "osd_lvm_uuid": "a7f517e2-016b-5c10-ac21-20c48339115f" 2026-03-25 02:50:44.084738 | orchestrator |  }, 2026-03-25 02:50:44.084742 | orchestrator |  "sdc": { 2026-03-25 02:50:44.084746 | orchestrator |  "osd_lvm_uuid": "2eb637af-fcba-56ed-b416-856a8f376a6e" 2026-03-25 02:50:44.084750 | orchestrator |  } 2026-03-25 02:50:44.084753 | orchestrator |  }, 2026-03-25 02:50:44.084757 | orchestrator |  "lvm_volumes": [ 2026-03-25 02:50:44.084761 | orchestrator |  { 2026-03-25 02:50:44.084765 | orchestrator |  "data": "osd-block-a7f517e2-016b-5c10-ac21-20c48339115f", 2026-03-25 02:50:44.084769 | orchestrator |  "data_vg": "ceph-a7f517e2-016b-5c10-ac21-20c48339115f" 2026-03-25 02:50:44.084772 | orchestrator |  }, 2026-03-25 02:50:44.084776 | orchestrator |  { 2026-03-25 02:50:44.084780 | orchestrator |  "data": "osd-block-2eb637af-fcba-56ed-b416-856a8f376a6e", 2026-03-25 02:50:44.084788 | orchestrator |  "data_vg": "ceph-2eb637af-fcba-56ed-b416-856a8f376a6e" 2026-03-25 02:50:44.084791 | orchestrator |  } 2026-03-25 02:50:44.084795 | orchestrator |  ] 2026-03-25 02:50:44.084799 | orchestrator |  } 2026-03-25 02:50:44.084802 | orchestrator | } 2026-03-25 02:50:44.084806 | orchestrator | 2026-03-25 02:50:44.084810 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2026-03-25 02:50:44.084814 | orchestrator | Wednesday 25 March 2026 02:50:41 +0000 (0:00:00.487) 0:00:14.564 ******* 2026-03-25 
02:50:44.084817 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-03-25 02:50:44.084821 | orchestrator | 2026-03-25 02:50:44.084825 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2026-03-25 02:50:44.084829 | orchestrator | 2026-03-25 02:50:44.084832 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-03-25 02:50:44.084836 | orchestrator | Wednesday 25 March 2026 02:50:43 +0000 (0:00:01.990) 0:00:16.554 ******* 2026-03-25 02:50:44.084840 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-03-25 02:50:44.084844 | orchestrator | 2026-03-25 02:50:44.084847 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-03-25 02:50:44.084851 | orchestrator | Wednesday 25 March 2026 02:50:43 +0000 (0:00:00.289) 0:00:16.843 ******* 2026-03-25 02:50:44.084855 | orchestrator | ok: [testbed-node-4] 2026-03-25 02:50:44.084859 | orchestrator | 2026-03-25 02:50:44.084865 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-25 02:50:54.514924 | orchestrator | Wednesday 25 March 2026 02:50:44 +0000 (0:00:00.274) 0:00:17.118 ******* 2026-03-25 02:50:54.515033 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-03-25 02:50:54.515046 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-03-25 02:50:54.515055 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-03-25 02:50:54.515091 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-03-25 02:50:54.515108 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-03-25 02:50:54.515117 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-03-25 02:50:54.515125 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-03-25 02:50:54.515133 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-03-25 02:50:54.515196 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-03-25 02:50:54.515210 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-03-25 02:50:54.515224 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-03-25 02:50:54.515237 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-03-25 02:50:54.515250 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2026-03-25 02:50:54.515262 | orchestrator | 2026-03-25 02:50:54.515271 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-25 02:50:54.515278 | orchestrator | Wednesday 25 March 2026 02:50:44 +0000 (0:00:00.461) 0:00:17.580 ******* 2026-03-25 02:50:54.515286 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:50:54.515295 | orchestrator | 2026-03-25 02:50:54.515303 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-25 02:50:54.515311 | orchestrator | Wednesday 25 March 2026 02:50:44 +0000 (0:00:00.248) 0:00:17.829 ******* 2026-03-25 02:50:54.515319 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:50:54.515326 | orchestrator | 2026-03-25 02:50:54.515334 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-25 02:50:54.515342 | orchestrator | Wednesday 25 March 2026 02:50:45 +0000 (0:00:00.237) 0:00:18.066 ******* 2026-03-25 02:50:54.515372 | orchestrator | skipping: 
[testbed-node-4] 2026-03-25 02:50:54.515380 | orchestrator | 2026-03-25 02:50:54.515388 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-25 02:50:54.515396 | orchestrator | Wednesday 25 March 2026 02:50:45 +0000 (0:00:00.218) 0:00:18.285 ******* 2026-03-25 02:50:54.515403 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:50:54.515411 | orchestrator | 2026-03-25 02:50:54.515431 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-25 02:50:54.515448 | orchestrator | Wednesday 25 March 2026 02:50:45 +0000 (0:00:00.696) 0:00:18.981 ******* 2026-03-25 02:50:54.515457 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:50:54.515467 | orchestrator | 2026-03-25 02:50:54.515476 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-25 02:50:54.515484 | orchestrator | Wednesday 25 March 2026 02:50:46 +0000 (0:00:00.234) 0:00:19.216 ******* 2026-03-25 02:50:54.515493 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:50:54.515502 | orchestrator | 2026-03-25 02:50:54.515511 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-25 02:50:54.515523 | orchestrator | Wednesday 25 March 2026 02:50:46 +0000 (0:00:00.239) 0:00:19.455 ******* 2026-03-25 02:50:54.515537 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:50:54.515557 | orchestrator | 2026-03-25 02:50:54.515574 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-25 02:50:54.515587 | orchestrator | Wednesday 25 March 2026 02:50:46 +0000 (0:00:00.289) 0:00:19.745 ******* 2026-03-25 02:50:54.515601 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:50:54.515615 | orchestrator | 2026-03-25 02:50:54.515628 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-25 02:50:54.515640 | 
orchestrator | Wednesday 25 March 2026 02:50:46 +0000 (0:00:00.230) 0:00:19.976 ******* 2026-03-25 02:50:54.515655 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_6cb51c54-ae34-41ee-aa7a-55f1cdeeb529) 2026-03-25 02:50:54.515671 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_6cb51c54-ae34-41ee-aa7a-55f1cdeeb529) 2026-03-25 02:50:54.515685 | orchestrator | 2026-03-25 02:50:54.515701 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-25 02:50:54.515753 | orchestrator | Wednesday 25 March 2026 02:50:47 +0000 (0:00:00.515) 0:00:20.491 ******* 2026-03-25 02:50:54.515774 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_10d736b4-dcf8-42aa-aae6-a1381d72468f) 2026-03-25 02:50:54.515796 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_10d736b4-dcf8-42aa-aae6-a1381d72468f) 2026-03-25 02:50:54.515806 | orchestrator | 2026-03-25 02:50:54.515816 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-25 02:50:54.515826 | orchestrator | Wednesday 25 March 2026 02:50:47 +0000 (0:00:00.487) 0:00:20.978 ******* 2026-03-25 02:50:54.515836 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_37f05188-2a00-44e2-a0b8-7549f9da5347) 2026-03-25 02:50:54.515846 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_37f05188-2a00-44e2-a0b8-7549f9da5347) 2026-03-25 02:50:54.515856 | orchestrator | 2026-03-25 02:50:54.515866 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-25 02:50:54.515893 | orchestrator | Wednesday 25 March 2026 02:50:48 +0000 (0:00:00.497) 0:00:21.476 ******* 2026-03-25 02:50:54.515903 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_3e1f7d9f-c106-4693-b0da-d762a5de4a11) 2026-03-25 02:50:54.515913 | orchestrator | ok: [testbed-node-4] => 
(item=scsi-SQEMU_QEMU_HARDDISK_3e1f7d9f-c106-4693-b0da-d762a5de4a11) 2026-03-25 02:50:54.515922 | orchestrator | 2026-03-25 02:50:54.515932 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-25 02:50:54.515950 | orchestrator | Wednesday 25 March 2026 02:50:49 +0000 (0:00:00.769) 0:00:22.245 ******* 2026-03-25 02:50:54.515960 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-03-25 02:50:54.515980 | orchestrator | 2026-03-25 02:50:54.515989 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-25 02:50:54.515999 | orchestrator | Wednesday 25 March 2026 02:50:49 +0000 (0:00:00.634) 0:00:22.879 ******* 2026-03-25 02:50:54.516008 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2026-03-25 02:50:54.516018 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2026-03-25 02:50:54.516027 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2026-03-25 02:50:54.516037 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2026-03-25 02:50:54.516046 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2026-03-25 02:50:54.516055 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2026-03-25 02:50:54.516064 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2026-03-25 02:50:54.516073 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2026-03-25 02:50:54.516083 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2026-03-25 02:50:54.516092 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2026-03-25 02:50:54.516102 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2026-03-25 02:50:54.516111 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2026-03-25 02:50:54.516121 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2026-03-25 02:50:54.516130 | orchestrator | 2026-03-25 02:50:54.516166 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-25 02:50:54.516178 | orchestrator | Wednesday 25 March 2026 02:50:50 +0000 (0:00:01.007) 0:00:23.887 ******* 2026-03-25 02:50:54.516187 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:50:54.516197 | orchestrator | 2026-03-25 02:50:54.516206 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-25 02:50:54.516216 | orchestrator | Wednesday 25 March 2026 02:50:51 +0000 (0:00:00.277) 0:00:24.164 ******* 2026-03-25 02:50:54.516225 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:50:54.516234 | orchestrator | 2026-03-25 02:50:54.516244 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-25 02:50:54.516254 | orchestrator | Wednesday 25 March 2026 02:50:51 +0000 (0:00:00.223) 0:00:24.388 ******* 2026-03-25 02:50:54.516263 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:50:54.516273 | orchestrator | 2026-03-25 02:50:54.516282 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-25 02:50:54.516292 | orchestrator | Wednesday 25 March 2026 02:50:51 +0000 (0:00:00.226) 0:00:24.615 ******* 2026-03-25 02:50:54.516302 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:50:54.516311 | orchestrator | 2026-03-25 02:50:54.516321 | orchestrator | TASK [Add known 
partitions to the list of available block devices] ************* 2026-03-25 02:50:54.516330 | orchestrator | Wednesday 25 March 2026 02:50:51 +0000 (0:00:00.213) 0:00:24.829 ******* 2026-03-25 02:50:54.516339 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:50:54.516349 | orchestrator | 2026-03-25 02:50:54.516358 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-25 02:50:54.516368 | orchestrator | Wednesday 25 March 2026 02:50:52 +0000 (0:00:00.256) 0:00:25.085 ******* 2026-03-25 02:50:54.516377 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:50:54.516386 | orchestrator | 2026-03-25 02:50:54.516396 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-25 02:50:54.516405 | orchestrator | Wednesday 25 March 2026 02:50:52 +0000 (0:00:00.237) 0:00:25.323 ******* 2026-03-25 02:50:54.516415 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:50:54.516453 | orchestrator | 2026-03-25 02:50:54.516463 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-25 02:50:54.516473 | orchestrator | Wednesday 25 March 2026 02:50:52 +0000 (0:00:00.242) 0:00:25.566 ******* 2026-03-25 02:50:54.516482 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:50:54.516492 | orchestrator | 2026-03-25 02:50:54.516501 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-25 02:50:54.516511 | orchestrator | Wednesday 25 March 2026 02:50:52 +0000 (0:00:00.234) 0:00:25.800 ******* 2026-03-25 02:50:54.516521 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2026-03-25 02:50:54.516531 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2026-03-25 02:50:54.516541 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2026-03-25 02:50:54.516551 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2026-03-25 02:50:54.516560 | orchestrator | 2026-03-25 
02:50:54.516569 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-25 02:50:54.516579 | orchestrator | Wednesday 25 March 2026 02:50:53 +0000 (0:00:00.994) 0:00:26.794 ******* 2026-03-25 02:50:54.516589 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:51:01.127714 | orchestrator | 2026-03-25 02:51:01.127841 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-25 02:51:01.127859 | orchestrator | Wednesday 25 March 2026 02:50:54 +0000 (0:00:00.756) 0:00:27.551 ******* 2026-03-25 02:51:01.127871 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:51:01.127883 | orchestrator | 2026-03-25 02:51:01.127894 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-25 02:51:01.127906 | orchestrator | Wednesday 25 March 2026 02:50:54 +0000 (0:00:00.245) 0:00:27.797 ******* 2026-03-25 02:51:01.127935 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:51:01.127946 | orchestrator | 2026-03-25 02:51:01.127957 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-25 02:51:01.127968 | orchestrator | Wednesday 25 March 2026 02:50:54 +0000 (0:00:00.214) 0:00:28.012 ******* 2026-03-25 02:51:01.127979 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:51:01.127990 | orchestrator | 2026-03-25 02:51:01.128001 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-03-25 02:51:01.128012 | orchestrator | Wednesday 25 March 2026 02:50:55 +0000 (0:00:00.227) 0:00:28.240 ******* 2026-03-25 02:51:01.128023 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None}) 2026-03-25 02:51:01.128035 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None}) 2026-03-25 02:51:01.128046 | orchestrator | 2026-03-25 02:51:01.128057 | orchestrator | TASK [Generate WAL VG names] 
*************************************************** 2026-03-25 02:51:01.128068 | orchestrator | Wednesday 25 March 2026 02:50:55 +0000 (0:00:00.186) 0:00:28.426 ******* 2026-03-25 02:51:01.128079 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:51:01.128090 | orchestrator | 2026-03-25 02:51:01.128101 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-03-25 02:51:01.128111 | orchestrator | Wednesday 25 March 2026 02:50:55 +0000 (0:00:00.133) 0:00:28.560 ******* 2026-03-25 02:51:01.128122 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:51:01.128133 | orchestrator | 2026-03-25 02:51:01.128144 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-03-25 02:51:01.128250 | orchestrator | Wednesday 25 March 2026 02:50:55 +0000 (0:00:00.141) 0:00:28.701 ******* 2026-03-25 02:51:01.128266 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:51:01.128280 | orchestrator | 2026-03-25 02:51:01.128293 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-03-25 02:51:01.128306 | orchestrator | Wednesday 25 March 2026 02:50:55 +0000 (0:00:00.152) 0:00:28.854 ******* 2026-03-25 02:51:01.128320 | orchestrator | ok: [testbed-node-4] 2026-03-25 02:51:01.128334 | orchestrator | 2026-03-25 02:51:01.128345 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-03-25 02:51:01.128356 | orchestrator | Wednesday 25 March 2026 02:50:55 +0000 (0:00:00.150) 0:00:29.004 ******* 2026-03-25 02:51:01.128395 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '82366886-ea97-5dba-b5cd-187414e0593f'}}) 2026-03-25 02:51:01.128407 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'fa1f2bca-96f4-5f59-9dac-c3efdd146138'}}) 2026-03-25 02:51:01.128418 | orchestrator | 2026-03-25 02:51:01.128429 | orchestrator | TASK 
[Generate lvm_volumes structure (block + db)] ***************************** 2026-03-25 02:51:01.128440 | orchestrator | Wednesday 25 March 2026 02:50:56 +0000 (0:00:00.175) 0:00:29.180 ******* 2026-03-25 02:51:01.128451 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '82366886-ea97-5dba-b5cd-187414e0593f'}})  2026-03-25 02:51:01.128465 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'fa1f2bca-96f4-5f59-9dac-c3efdd146138'}})  2026-03-25 02:51:01.128475 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:51:01.128486 | orchestrator | 2026-03-25 02:51:01.128497 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-03-25 02:51:01.128508 | orchestrator | Wednesday 25 March 2026 02:50:56 +0000 (0:00:00.169) 0:00:29.350 ******* 2026-03-25 02:51:01.128519 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '82366886-ea97-5dba-b5cd-187414e0593f'}})  2026-03-25 02:51:01.128530 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'fa1f2bca-96f4-5f59-9dac-c3efdd146138'}})  2026-03-25 02:51:01.128541 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:51:01.128559 | orchestrator | 2026-03-25 02:51:01.128578 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-03-25 02:51:01.128607 | orchestrator | Wednesday 25 March 2026 02:50:56 +0000 (0:00:00.421) 0:00:29.772 ******* 2026-03-25 02:51:01.128627 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '82366886-ea97-5dba-b5cd-187414e0593f'}})  2026-03-25 02:51:01.128645 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'fa1f2bca-96f4-5f59-9dac-c3efdd146138'}})  2026-03-25 02:51:01.128661 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:51:01.128677 | 
orchestrator | 2026-03-25 02:51:01.128695 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-03-25 02:51:01.128711 | orchestrator | Wednesday 25 March 2026 02:50:56 +0000 (0:00:00.171) 0:00:29.944 ******* 2026-03-25 02:51:01.128730 | orchestrator | ok: [testbed-node-4] 2026-03-25 02:51:01.128748 | orchestrator | 2026-03-25 02:51:01.128766 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-03-25 02:51:01.128785 | orchestrator | Wednesday 25 March 2026 02:50:57 +0000 (0:00:00.172) 0:00:30.116 ******* 2026-03-25 02:51:01.128804 | orchestrator | ok: [testbed-node-4] 2026-03-25 02:51:01.128824 | orchestrator | 2026-03-25 02:51:01.128843 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-03-25 02:51:01.128861 | orchestrator | Wednesday 25 March 2026 02:50:57 +0000 (0:00:00.143) 0:00:30.260 ******* 2026-03-25 02:51:01.128906 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:51:01.128920 | orchestrator | 2026-03-25 02:51:01.128931 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-03-25 02:51:01.128942 | orchestrator | Wednesday 25 March 2026 02:50:57 +0000 (0:00:00.142) 0:00:30.402 ******* 2026-03-25 02:51:01.128952 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:51:01.128963 | orchestrator | 2026-03-25 02:51:01.128974 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-03-25 02:51:01.128984 | orchestrator | Wednesday 25 March 2026 02:50:57 +0000 (0:00:00.177) 0:00:30.580 ******* 2026-03-25 02:51:01.129005 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:51:01.129016 | orchestrator | 2026-03-25 02:51:01.129027 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-03-25 02:51:01.129037 | orchestrator | Wednesday 25 March 2026 02:50:57 +0000 
(0:00:00.169) 0:00:30.749 ******* 2026-03-25 02:51:01.129059 | orchestrator | ok: [testbed-node-4] => { 2026-03-25 02:51:01.129070 | orchestrator |  "ceph_osd_devices": { 2026-03-25 02:51:01.129082 | orchestrator |  "sdb": { 2026-03-25 02:51:01.129093 | orchestrator |  "osd_lvm_uuid": "82366886-ea97-5dba-b5cd-187414e0593f" 2026-03-25 02:51:01.129103 | orchestrator |  }, 2026-03-25 02:51:01.129114 | orchestrator |  "sdc": { 2026-03-25 02:51:01.129125 | orchestrator |  "osd_lvm_uuid": "fa1f2bca-96f4-5f59-9dac-c3efdd146138" 2026-03-25 02:51:01.129136 | orchestrator |  } 2026-03-25 02:51:01.129146 | orchestrator |  } 2026-03-25 02:51:01.129190 | orchestrator | } 2026-03-25 02:51:01.129202 | orchestrator | 2026-03-25 02:51:01.129213 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-03-25 02:51:01.129224 | orchestrator | Wednesday 25 March 2026 02:50:57 +0000 (0:00:00.157) 0:00:30.906 ******* 2026-03-25 02:51:01.129235 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:51:01.129246 | orchestrator | 2026-03-25 02:51:01.129256 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-03-25 02:51:01.129267 | orchestrator | Wednesday 25 March 2026 02:50:58 +0000 (0:00:00.153) 0:00:31.060 ******* 2026-03-25 02:51:01.129278 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:51:01.129289 | orchestrator | 2026-03-25 02:51:01.129299 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2026-03-25 02:51:01.129310 | orchestrator | Wednesday 25 March 2026 02:50:58 +0000 (0:00:00.139) 0:00:31.199 ******* 2026-03-25 02:51:01.129321 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:51:01.129331 | orchestrator | 2026-03-25 02:51:01.129342 | orchestrator | TASK [Print configuration data] ************************************************ 2026-03-25 02:51:01.129353 | orchestrator | Wednesday 25 March 2026 02:50:58 +0000 
(0:00:00.158) 0:00:31.357 ******* 2026-03-25 02:51:01.129364 | orchestrator | changed: [testbed-node-4] => { 2026-03-25 02:51:01.129374 | orchestrator |  "_ceph_configure_lvm_config_data": { 2026-03-25 02:51:01.129385 | orchestrator |  "ceph_osd_devices": { 2026-03-25 02:51:01.129396 | orchestrator |  "sdb": { 2026-03-25 02:51:01.129407 | orchestrator |  "osd_lvm_uuid": "82366886-ea97-5dba-b5cd-187414e0593f" 2026-03-25 02:51:01.129418 | orchestrator |  }, 2026-03-25 02:51:01.129429 | orchestrator |  "sdc": { 2026-03-25 02:51:01.129439 | orchestrator |  "osd_lvm_uuid": "fa1f2bca-96f4-5f59-9dac-c3efdd146138" 2026-03-25 02:51:01.129450 | orchestrator |  } 2026-03-25 02:51:01.129461 | orchestrator |  }, 2026-03-25 02:51:01.129471 | orchestrator |  "lvm_volumes": [ 2026-03-25 02:51:01.129482 | orchestrator |  { 2026-03-25 02:51:01.129493 | orchestrator |  "data": "osd-block-82366886-ea97-5dba-b5cd-187414e0593f", 2026-03-25 02:51:01.129504 | orchestrator |  "data_vg": "ceph-82366886-ea97-5dba-b5cd-187414e0593f" 2026-03-25 02:51:01.129514 | orchestrator |  }, 2026-03-25 02:51:01.129525 | orchestrator |  { 2026-03-25 02:51:01.129536 | orchestrator |  "data": "osd-block-fa1f2bca-96f4-5f59-9dac-c3efdd146138", 2026-03-25 02:51:01.129547 | orchestrator |  "data_vg": "ceph-fa1f2bca-96f4-5f59-9dac-c3efdd146138" 2026-03-25 02:51:01.129558 | orchestrator |  } 2026-03-25 02:51:01.129568 | orchestrator |  ] 2026-03-25 02:51:01.129579 | orchestrator |  } 2026-03-25 02:51:01.129593 | orchestrator | } 2026-03-25 02:51:01.129612 | orchestrator | 2026-03-25 02:51:01.129629 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2026-03-25 02:51:01.129648 | orchestrator | Wednesday 25 March 2026 02:50:58 +0000 (0:00:00.523) 0:00:31.881 ******* 2026-03-25 02:51:01.129666 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-03-25 02:51:01.129683 | orchestrator | 2026-03-25 02:51:01.129702 | orchestrator | PLAY [Ceph 
configure LVM] ****************************************************** 2026-03-25 02:51:01.129722 | orchestrator | 2026-03-25 02:51:01.129741 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-03-25 02:51:01.129760 | orchestrator | Wednesday 25 March 2026 02:51:00 +0000 (0:00:01.266) 0:00:33.147 ******* 2026-03-25 02:51:01.129793 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-03-25 02:51:01.129805 | orchestrator | 2026-03-25 02:51:01.129816 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-03-25 02:51:01.129826 | orchestrator | Wednesday 25 March 2026 02:51:00 +0000 (0:00:00.301) 0:00:33.448 ******* 2026-03-25 02:51:01.129837 | orchestrator | ok: [testbed-node-5] 2026-03-25 02:51:01.129850 | orchestrator | 2026-03-25 02:51:01.129869 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-25 02:51:01.129888 | orchestrator | Wednesday 25 March 2026 02:51:00 +0000 (0:00:00.273) 0:00:33.721 ******* 2026-03-25 02:51:01.129906 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2026-03-25 02:51:01.129940 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2026-03-25 02:51:01.129960 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2026-03-25 02:51:01.129977 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2026-03-25 02:51:01.129994 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2026-03-25 02:51:01.130104 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2026-03-25 02:51:11.176090 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2026-03-25 02:51:11.176300 
| orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2026-03-25 02:51:11.176315 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2026-03-25 02:51:11.176344 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2026-03-25 02:51:11.176351 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2026-03-25 02:51:11.176358 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2026-03-25 02:51:11.176364 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2026-03-25 02:51:11.176371 | orchestrator | 2026-03-25 02:51:11.176379 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-25 02:51:11.176386 | orchestrator | Wednesday 25 March 2026 02:51:01 +0000 (0:00:00.437) 0:00:34.159 ******* 2026-03-25 02:51:11.176393 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:51:11.176401 | orchestrator | 2026-03-25 02:51:11.176407 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-25 02:51:11.176413 | orchestrator | Wednesday 25 March 2026 02:51:01 +0000 (0:00:00.238) 0:00:34.397 ******* 2026-03-25 02:51:11.176419 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:51:11.176426 | orchestrator | 2026-03-25 02:51:11.176431 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-25 02:51:11.176437 | orchestrator | Wednesday 25 March 2026 02:51:01 +0000 (0:00:00.238) 0:00:34.635 ******* 2026-03-25 02:51:11.176444 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:51:11.176450 | orchestrator | 2026-03-25 02:51:11.176456 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-25 02:51:11.176462 | 
orchestrator | Wednesday 25 March 2026 02:51:01 +0000 (0:00:00.211) 0:00:34.847 ******* 2026-03-25 02:51:11.176468 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:51:11.176474 | orchestrator | 2026-03-25 02:51:11.176480 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-25 02:51:11.176487 | orchestrator | Wednesday 25 March 2026 02:51:02 +0000 (0:00:00.738) 0:00:35.586 ******* 2026-03-25 02:51:11.176493 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:51:11.176499 | orchestrator | 2026-03-25 02:51:11.176505 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-25 02:51:11.176513 | orchestrator | Wednesday 25 March 2026 02:51:02 +0000 (0:00:00.254) 0:00:35.840 ******* 2026-03-25 02:51:11.176547 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:51:11.176555 | orchestrator | 2026-03-25 02:51:11.176561 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-25 02:51:11.176568 | orchestrator | Wednesday 25 March 2026 02:51:03 +0000 (0:00:00.271) 0:00:36.112 ******* 2026-03-25 02:51:11.176573 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:51:11.176579 | orchestrator | 2026-03-25 02:51:11.176585 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-25 02:51:11.176591 | orchestrator | Wednesday 25 March 2026 02:51:03 +0000 (0:00:00.230) 0:00:36.342 ******* 2026-03-25 02:51:11.176598 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:51:11.176604 | orchestrator | 2026-03-25 02:51:11.176612 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-25 02:51:11.176620 | orchestrator | Wednesday 25 March 2026 02:51:03 +0000 (0:00:00.227) 0:00:36.570 ******* 2026-03-25 02:51:11.176627 | orchestrator | ok: [testbed-node-5] => 
(item=scsi-0QEMU_QEMU_HARDDISK_0ceb4511-88da-4cb0-8dd1-61d4a7cc2ad2) 2026-03-25 02:51:11.176636 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_0ceb4511-88da-4cb0-8dd1-61d4a7cc2ad2) 2026-03-25 02:51:11.176642 | orchestrator | 2026-03-25 02:51:11.176649 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-25 02:51:11.176657 | orchestrator | Wednesday 25 March 2026 02:51:04 +0000 (0:00:00.496) 0:00:37.066 ******* 2026-03-25 02:51:11.176664 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_04cbe055-706b-4644-9107-d77d79be5a29) 2026-03-25 02:51:11.176672 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_04cbe055-706b-4644-9107-d77d79be5a29) 2026-03-25 02:51:11.176678 | orchestrator | 2026-03-25 02:51:11.176685 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-25 02:51:11.176692 | orchestrator | Wednesday 25 March 2026 02:51:04 +0000 (0:00:00.489) 0:00:37.556 ******* 2026-03-25 02:51:11.176698 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_fd5367dc-993e-4d7d-b2a6-757e2a17e9b7) 2026-03-25 02:51:11.176705 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_fd5367dc-993e-4d7d-b2a6-757e2a17e9b7) 2026-03-25 02:51:11.176712 | orchestrator | 2026-03-25 02:51:11.176718 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-25 02:51:11.176724 | orchestrator | Wednesday 25 March 2026 02:51:05 +0000 (0:00:00.553) 0:00:38.110 ******* 2026-03-25 02:51:11.176731 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_82545a3e-e213-461e-98f1-90cf18f03519) 2026-03-25 02:51:11.176740 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_82545a3e-e213-461e-98f1-90cf18f03519) 2026-03-25 02:51:11.176746 | orchestrator | 2026-03-25 02:51:11.176754 | orchestrator | TASK [Add known links to 
the list of available block devices] ****************** 2026-03-25 02:51:11.176761 | orchestrator | Wednesday 25 March 2026 02:51:05 +0000 (0:00:00.533) 0:00:38.643 ******* 2026-03-25 02:51:11.176769 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-03-25 02:51:11.176776 | orchestrator | 2026-03-25 02:51:11.176783 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-25 02:51:11.176815 | orchestrator | Wednesday 25 March 2026 02:51:05 +0000 (0:00:00.383) 0:00:39.026 ******* 2026-03-25 02:51:11.176824 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2026-03-25 02:51:11.176832 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2026-03-25 02:51:11.176839 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2026-03-25 02:51:11.176855 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2026-03-25 02:51:11.176862 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2026-03-25 02:51:11.176869 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2026-03-25 02:51:11.176884 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2026-03-25 02:51:11.176891 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2026-03-25 02:51:11.176898 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2026-03-25 02:51:11.176906 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2026-03-25 02:51:11.176913 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 
2026-03-25 02:51:11.176919 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2026-03-25 02:51:11.176925 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2026-03-25 02:51:11.176932 | orchestrator | 2026-03-25 02:51:11.176938 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-25 02:51:11.176945 | orchestrator | Wednesday 25 March 2026 02:51:06 +0000 (0:00:00.697) 0:00:39.724 ******* 2026-03-25 02:51:11.176951 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:51:11.176958 | orchestrator | 2026-03-25 02:51:11.176965 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-25 02:51:11.176972 | orchestrator | Wednesday 25 March 2026 02:51:06 +0000 (0:00:00.242) 0:00:39.966 ******* 2026-03-25 02:51:11.176979 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:51:11.176985 | orchestrator | 2026-03-25 02:51:11.176992 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-25 02:51:11.176998 | orchestrator | Wednesday 25 March 2026 02:51:07 +0000 (0:00:00.248) 0:00:40.215 ******* 2026-03-25 02:51:11.177005 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:51:11.177012 | orchestrator | 2026-03-25 02:51:11.177018 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-25 02:51:11.177025 | orchestrator | Wednesday 25 March 2026 02:51:07 +0000 (0:00:00.245) 0:00:40.460 ******* 2026-03-25 02:51:11.177031 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:51:11.177037 | orchestrator | 2026-03-25 02:51:11.177044 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-25 02:51:11.177050 | orchestrator | Wednesday 25 March 2026 02:51:07 +0000 (0:00:00.234) 0:00:40.695 ******* 2026-03-25 02:51:11.177055 
| orchestrator | skipping: [testbed-node-5] 2026-03-25 02:51:11.177061 | orchestrator | 2026-03-25 02:51:11.177068 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-25 02:51:11.177074 | orchestrator | Wednesday 25 March 2026 02:51:07 +0000 (0:00:00.221) 0:00:40.916 ******* 2026-03-25 02:51:11.177079 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:51:11.177084 | orchestrator | 2026-03-25 02:51:11.177089 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-25 02:51:11.177097 | orchestrator | Wednesday 25 March 2026 02:51:08 +0000 (0:00:00.232) 0:00:41.149 ******* 2026-03-25 02:51:11.177103 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:51:11.177109 | orchestrator | 2026-03-25 02:51:11.177115 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-25 02:51:11.177121 | orchestrator | Wednesday 25 March 2026 02:51:08 +0000 (0:00:00.221) 0:00:41.370 ******* 2026-03-25 02:51:11.177127 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:51:11.177134 | orchestrator | 2026-03-25 02:51:11.177140 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-25 02:51:11.177147 | orchestrator | Wednesday 25 March 2026 02:51:08 +0000 (0:00:00.244) 0:00:41.615 ******* 2026-03-25 02:51:11.177154 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2026-03-25 02:51:11.177162 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2026-03-25 02:51:11.177168 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2026-03-25 02:51:11.177194 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2026-03-25 02:51:11.177200 | orchestrator | 2026-03-25 02:51:11.177216 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-25 02:51:11.177221 | orchestrator | Wednesday 25 March 2026 02:51:09 +0000 (0:00:01.027) 
0:00:42.642 ******* 2026-03-25 02:51:11.177228 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:51:11.177233 | orchestrator | 2026-03-25 02:51:11.177239 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-25 02:51:11.177245 | orchestrator | Wednesday 25 March 2026 02:51:09 +0000 (0:00:00.219) 0:00:42.862 ******* 2026-03-25 02:51:11.177251 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:51:11.177257 | orchestrator | 2026-03-25 02:51:11.177262 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-25 02:51:11.177267 | orchestrator | Wednesday 25 March 2026 02:51:10 +0000 (0:00:00.263) 0:00:43.126 ******* 2026-03-25 02:51:11.177272 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:51:11.177278 | orchestrator | 2026-03-25 02:51:11.177284 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-25 02:51:11.177290 | orchestrator | Wednesday 25 March 2026 02:51:10 +0000 (0:00:00.845) 0:00:43.972 ******* 2026-03-25 02:51:11.177296 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:51:11.177301 | orchestrator | 2026-03-25 02:51:11.177317 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-03-25 02:51:15.719395 | orchestrator | Wednesday 25 March 2026 02:51:11 +0000 (0:00:00.240) 0:00:44.213 ******* 2026-03-25 02:51:15.719522 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None}) 2026-03-25 02:51:15.719533 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None}) 2026-03-25 02:51:15.719541 | orchestrator | 2026-03-25 02:51:15.719551 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2026-03-25 02:51:15.719583 | orchestrator | Wednesday 25 March 2026 02:51:11 +0000 (0:00:00.209) 0:00:44.422 ******* 2026-03-25 02:51:15.719599 | orchestrator | skipping: 
[testbed-node-5] 2026-03-25 02:51:15.719613 | orchestrator | 2026-03-25 02:51:15.719628 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-03-25 02:51:15.719643 | orchestrator | Wednesday 25 March 2026 02:51:11 +0000 (0:00:00.156) 0:00:44.579 ******* 2026-03-25 02:51:15.719657 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:51:15.719672 | orchestrator | 2026-03-25 02:51:15.719683 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-03-25 02:51:15.719692 | orchestrator | Wednesday 25 March 2026 02:51:11 +0000 (0:00:00.151) 0:00:44.731 ******* 2026-03-25 02:51:15.719700 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:51:15.719708 | orchestrator | 2026-03-25 02:51:15.719715 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-03-25 02:51:15.719723 | orchestrator | Wednesday 25 March 2026 02:51:11 +0000 (0:00:00.131) 0:00:44.862 ******* 2026-03-25 02:51:15.719731 | orchestrator | ok: [testbed-node-5] 2026-03-25 02:51:15.719741 | orchestrator | 2026-03-25 02:51:15.719749 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-03-25 02:51:15.719756 | orchestrator | Wednesday 25 March 2026 02:51:11 +0000 (0:00:00.140) 0:00:45.003 ******* 2026-03-25 02:51:15.719765 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'f303e98e-56ea-50bc-9e1c-3ccda4672060'}}) 2026-03-25 02:51:15.719774 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '8ec576d5-4336-523a-896e-5358117b2269'}}) 2026-03-25 02:51:15.719782 | orchestrator | 2026-03-25 02:51:15.719790 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2026-03-25 02:51:15.719798 | orchestrator | Wednesday 25 March 2026 02:51:12 +0000 (0:00:00.183) 0:00:45.187 ******* 2026-03-25 02:51:15.719806 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'f303e98e-56ea-50bc-9e1c-3ccda4672060'}})  2026-03-25 02:51:15.719817 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '8ec576d5-4336-523a-896e-5358117b2269'}})  2026-03-25 02:51:15.719826 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:51:15.719863 | orchestrator | 2026-03-25 02:51:15.719873 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-03-25 02:51:15.719881 | orchestrator | Wednesday 25 March 2026 02:51:12 +0000 (0:00:00.173) 0:00:45.361 ******* 2026-03-25 02:51:15.719890 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'f303e98e-56ea-50bc-9e1c-3ccda4672060'}})  2026-03-25 02:51:15.719900 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '8ec576d5-4336-523a-896e-5358117b2269'}})  2026-03-25 02:51:15.719909 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:51:15.719917 | orchestrator | 2026-03-25 02:51:15.719926 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-03-25 02:51:15.719935 | orchestrator | Wednesday 25 March 2026 02:51:12 +0000 (0:00:00.194) 0:00:45.555 ******* 2026-03-25 02:51:15.719944 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'f303e98e-56ea-50bc-9e1c-3ccda4672060'}})  2026-03-25 02:51:15.719953 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '8ec576d5-4336-523a-896e-5358117b2269'}})  2026-03-25 02:51:15.719962 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:51:15.719972 | orchestrator | 2026-03-25 02:51:15.719980 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-03-25 02:51:15.719989 | orchestrator | Wednesday 25 March 2026 02:51:12 +0000 
(0:00:00.172) 0:00:45.728 ******* 2026-03-25 02:51:15.719998 | orchestrator | ok: [testbed-node-5] 2026-03-25 02:51:15.720007 | orchestrator | 2026-03-25 02:51:15.720015 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-03-25 02:51:15.720024 | orchestrator | Wednesday 25 March 2026 02:51:12 +0000 (0:00:00.147) 0:00:45.875 ******* 2026-03-25 02:51:15.720033 | orchestrator | ok: [testbed-node-5] 2026-03-25 02:51:15.720042 | orchestrator | 2026-03-25 02:51:15.720051 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-03-25 02:51:15.720059 | orchestrator | Wednesday 25 March 2026 02:51:13 +0000 (0:00:00.410) 0:00:46.285 ******* 2026-03-25 02:51:15.720068 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:51:15.720077 | orchestrator | 2026-03-25 02:51:15.720086 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-03-25 02:51:15.720095 | orchestrator | Wednesday 25 March 2026 02:51:13 +0000 (0:00:00.168) 0:00:46.454 ******* 2026-03-25 02:51:15.720105 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:51:15.720114 | orchestrator | 2026-03-25 02:51:15.720123 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-03-25 02:51:15.720132 | orchestrator | Wednesday 25 March 2026 02:51:13 +0000 (0:00:00.146) 0:00:46.601 ******* 2026-03-25 02:51:15.720139 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:51:15.720147 | orchestrator | 2026-03-25 02:51:15.720155 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-03-25 02:51:15.720163 | orchestrator | Wednesday 25 March 2026 02:51:13 +0000 (0:00:00.149) 0:00:46.750 ******* 2026-03-25 02:51:15.720171 | orchestrator | ok: [testbed-node-5] => { 2026-03-25 02:51:15.720201 | orchestrator |  "ceph_osd_devices": { 2026-03-25 02:51:15.720210 | orchestrator |  "sdb": { 
2026-03-25 02:51:15.720236 | orchestrator |             "osd_lvm_uuid": "f303e98e-56ea-50bc-9e1c-3ccda4672060"
2026-03-25 02:51:15.720245 | orchestrator |         },
2026-03-25 02:51:15.720260 | orchestrator |         "sdc": {
2026-03-25 02:51:15.720273 | orchestrator |             "osd_lvm_uuid": "8ec576d5-4336-523a-896e-5358117b2269"
2026-03-25 02:51:15.720287 | orchestrator |         }
2026-03-25 02:51:15.720302 | orchestrator |     }
2026-03-25 02:51:15.720317 | orchestrator | }
2026-03-25 02:51:15.720331 | orchestrator |
2026-03-25 02:51:15.720351 | orchestrator | TASK [Print WAL devices] *******************************************************
2026-03-25 02:51:15.720359 | orchestrator | Wednesday 25 March 2026 02:51:13 +0000 (0:00:00.153) 0:00:46.904 *******
2026-03-25 02:51:15.720367 | orchestrator | skipping: [testbed-node-5]
2026-03-25 02:51:15.720383 | orchestrator |
2026-03-25 02:51:15.720391 | orchestrator | TASK [Print DB devices] ********************************************************
2026-03-25 02:51:15.720399 | orchestrator | Wednesday 25 March 2026 02:51:14 +0000 (0:00:00.150) 0:00:47.054 *******
2026-03-25 02:51:15.720406 | orchestrator | skipping: [testbed-node-5]
2026-03-25 02:51:15.720414 | orchestrator |
2026-03-25 02:51:15.720422 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-03-25 02:51:15.720429 | orchestrator | Wednesday 25 March 2026 02:51:14 +0000 (0:00:00.157) 0:00:47.212 *******
2026-03-25 02:51:15.720437 | orchestrator | skipping: [testbed-node-5]
2026-03-25 02:51:15.720445 | orchestrator |
2026-03-25 02:51:15.720452 | orchestrator | TASK [Print configuration data] ************************************************
2026-03-25 02:51:15.720460 | orchestrator | Wednesday 25 March 2026 02:51:14 +0000 (0:00:00.149) 0:00:47.362 *******
2026-03-25 02:51:15.720468 | orchestrator | changed: [testbed-node-5] => {
2026-03-25 02:51:15.720476 | orchestrator |     "_ceph_configure_lvm_config_data": {
2026-03-25 02:51:15.720484 | orchestrator |         "ceph_osd_devices": {
2026-03-25 02:51:15.720492 | orchestrator |             "sdb": {
2026-03-25 02:51:15.720500 | orchestrator |                 "osd_lvm_uuid": "f303e98e-56ea-50bc-9e1c-3ccda4672060"
2026-03-25 02:51:15.720508 | orchestrator |             },
2026-03-25 02:51:15.720515 | orchestrator |             "sdc": {
2026-03-25 02:51:15.720523 | orchestrator |                 "osd_lvm_uuid": "8ec576d5-4336-523a-896e-5358117b2269"
2026-03-25 02:51:15.720531 | orchestrator |             }
2026-03-25 02:51:15.720538 | orchestrator |         },
2026-03-25 02:51:15.720546 | orchestrator |         "lvm_volumes": [
2026-03-25 02:51:15.720554 | orchestrator |             {
2026-03-25 02:51:15.720562 | orchestrator |                 "data": "osd-block-f303e98e-56ea-50bc-9e1c-3ccda4672060",
2026-03-25 02:51:15.720570 | orchestrator |                 "data_vg": "ceph-f303e98e-56ea-50bc-9e1c-3ccda4672060"
2026-03-25 02:51:15.720583 | orchestrator |             },
2026-03-25 02:51:15.720596 | orchestrator |             {
2026-03-25 02:51:15.720609 | orchestrator |                 "data": "osd-block-8ec576d5-4336-523a-896e-5358117b2269",
2026-03-25 02:51:15.720624 | orchestrator |                 "data_vg": "ceph-8ec576d5-4336-523a-896e-5358117b2269"
2026-03-25 02:51:15.720637 | orchestrator |             }
2026-03-25 02:51:15.720650 | orchestrator |         ]
2026-03-25 02:51:15.720664 | orchestrator |     }
2026-03-25 02:51:15.720673 | orchestrator | }
2026-03-25 02:51:15.720680 | orchestrator |
2026-03-25 02:51:15.720688 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2026-03-25 02:51:15.720696 | orchestrator | Wednesday 25 March 2026 02:51:14 +0000 (0:00:00.272) 0:00:47.634 *******
2026-03-25 02:51:15.720704 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2026-03-25 02:51:15.720711 | orchestrator |
2026-03-25 02:51:15.720719 | orchestrator | PLAY RECAP *********************************************************************
2026-03-25 02:51:15.720727 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-03-25 02:51:15.720737 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-03-25 02:51:15.720745 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-03-25 02:51:15.720753 | orchestrator |
2026-03-25 02:51:15.720761 | orchestrator |
2026-03-25 02:51:15.720769 | orchestrator |
2026-03-25 02:51:15.720776 | orchestrator | TASKS RECAP ********************************************************************
2026-03-25 02:51:15.720789 | orchestrator | Wednesday 25 March 2026 02:51:15 +0000 (0:00:01.107) 0:00:48.741 *******
2026-03-25 02:51:15.720801 | orchestrator | ===============================================================================
2026-03-25 02:51:15.720820 | orchestrator | Write configuration file ------------------------------------------------ 4.36s
2026-03-25 02:51:15.720844 | orchestrator | Add known partitions to the list of available block devices ------------- 2.12s
2026-03-25 02:51:15.720857 | orchestrator | Add known links to the list of available block devices ------------------ 1.46s
2026-03-25 02:51:15.720869 | orchestrator | Print configuration data ------------------------------------------------ 1.28s
2026-03-25 02:51:15.720881 | orchestrator | Add known partitions to the list of available block devices ------------- 1.20s
2026-03-25 02:51:15.720892 | orchestrator | Add known partitions to the list of available block devices ------------- 1.03s
2026-03-25 02:51:15.720904 | orchestrator | Add known links to the list of available block devices ------------------ 1.00s
2026-03-25 02:51:15.720916 | orchestrator | Add known partitions to the list of available block devices ------------- 0.99s
2026-03-25 02:51:15.720928 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.87s
2026-03-25 02:51:15.720941 | orchestrator | Add known partitions to the list of available block devices ------------- 0.85s
2026-03-25 02:51:15.720954 | orchestrator | Get initial list of available block devices ----------------------------- 0.82s
2026-03-25 02:51:15.720967 | orchestrator | Generate lvm_volumes structure (block + wal) ---------------------------- 0.80s
2026-03-25 02:51:15.720981 | orchestrator | Add known links to the list of available block devices ------------------ 0.77s
2026-03-25 02:51:15.721004 | orchestrator | Generate lvm_volumes structure (block + db) ----------------------------- 0.76s
2026-03-25 02:51:16.228905 | orchestrator | Add known partitions to the list of available block devices ------------- 0.76s
2026-03-25 02:51:16.229031 | orchestrator | Add known links to the list of available block devices ------------------ 0.74s
2026-03-25 02:51:16.229044 | orchestrator | Add known links to the list of available block devices ------------------ 0.74s
2026-03-25 02:51:16.229075 | orchestrator | Add known links to the list of available block devices ------------------ 0.72s
2026-03-25 02:51:16.229082 | orchestrator | Set OSD devices config data --------------------------------------------- 0.72s
2026-03-25 02:51:16.229088 | orchestrator | Add known links to the list of available block devices ------------------ 0.70s
2026-03-25 02:51:39.149699 | orchestrator | 2026-03-25 02:51:39 | INFO  | Task 95c6dc63-785a-474e-8043-5811ee298728 (sync inventory) is running in background. Output coming soon.
2026-03-25 02:52:14.417331 | orchestrator | 2026-03-25 02:51:40 | INFO  | Starting group_vars file reorganization
2026-03-25 02:52:14.417423 | orchestrator | 2026-03-25 02:51:40 | INFO  | Moved 0 file(s) to their respective directories
2026-03-25 02:52:14.417433 | orchestrator | 2026-03-25 02:51:40 | INFO  | Group_vars file reorganization completed
2026-03-25 02:52:14.417439 | orchestrator | 2026-03-25 02:51:45 | INFO  | Starting variable preparation from inventory
2026-03-25 02:52:14.417446 | orchestrator | 2026-03-25 02:51:49 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts
2026-03-25 02:52:14.417452 | orchestrator | 2026-03-25 02:51:49 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons
2026-03-25 02:52:14.417470 | orchestrator | 2026-03-25 02:51:49 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid
2026-03-25 02:52:14.417482 | orchestrator | 2026-03-25 02:51:49 | INFO  | 3 file(s) written, 6 host(s) processed
2026-03-25 02:52:14.417488 | orchestrator | 2026-03-25 02:51:49 | INFO  | Variable preparation completed
2026-03-25 02:52:14.417495 | orchestrator | 2026-03-25 02:51:50 | INFO  | Starting inventory overwrite handling
2026-03-25 02:52:14.417501 | orchestrator | 2026-03-25 02:51:50 | INFO  | Handling group overwrites in 99-overwrite
2026-03-25 02:52:14.417507 | orchestrator | 2026-03-25 02:51:50 | INFO  | Removing group frr:children from 60-generic
2026-03-25 02:52:14.417514 | orchestrator | 2026-03-25 02:51:50 | INFO  | Removing group netbird:children from 50-infrastructure
2026-03-25 02:52:14.417520 | orchestrator | 2026-03-25 02:51:50 | INFO  | Removing group ceph-rgw from 50-ceph
2026-03-25 02:52:14.417550 | orchestrator | 2026-03-25 02:51:50 | INFO  | Removing group ceph-mds from 50-ceph
2026-03-25 02:52:14.417557 | orchestrator | 2026-03-25 02:51:50 | INFO  | Handling group overwrites in 20-roles
2026-03-25 02:52:14.417563 | orchestrator | 2026-03-25 02:51:50 | INFO  | Removing group k3s_node from 50-infrastructure
2026-03-25 02:52:14.417570 | orchestrator | 2026-03-25 02:51:50 | INFO  | Removed 5 group(s) in total
2026-03-25 02:52:14.417576 | orchestrator | 2026-03-25 02:51:50 | INFO  | Inventory overwrite handling completed
2026-03-25 02:52:14.417583 | orchestrator | 2026-03-25 02:51:52 | INFO  | Starting merge of inventory files
2026-03-25 02:52:14.417592 | orchestrator | 2026-03-25 02:51:52 | INFO  | Inventory files merged successfully
2026-03-25 02:52:14.417601 | orchestrator | 2026-03-25 02:51:57 | INFO  | Generating ClusterShell configuration from Ansible inventory
2026-03-25 02:52:14.417610 | orchestrator | 2026-03-25 02:52:12 | INFO  | Successfully wrote ClusterShell configuration
2026-03-25 02:52:14.417617 | orchestrator | [master 0117071] 2026-03-25-02-52
2026-03-25 02:52:14.417625 | orchestrator | 1 file changed, 30 insertions(+), 9 deletions(-)
2026-03-25 02:52:17.189784 | orchestrator | 2026-03-25 02:52:17 | INFO  | Task 90d86412-bdb5-4ea9-af8d-872c34c7d4b8 (ceph-create-lvm-devices) was prepared for execution.
2026-03-25 02:52:17.189872 | orchestrator | 2026-03-25 02:52:17 | INFO  | It takes a moment until task 90d86412-bdb5-4ea9-af8d-872c34c7d4b8 (ceph-create-lvm-devices) has been started and output is visible here.
2026-03-25 02:52:31.269996 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-03-25 02:52:31.270208 | orchestrator | 2.16.14
2026-03-25 02:52:31.270234 | orchestrator |
2026-03-25 02:52:31.270250 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2026-03-25 02:52:31.270264 | orchestrator |
2026-03-25 02:52:31.270278 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-03-25 02:52:31.270291 | orchestrator | Wednesday 25 March 2026 02:52:22 +0000 (0:00:00.365) 0:00:00.365 *******
2026-03-25 02:52:31.270412 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-03-25 02:52:31.270428 | orchestrator |
2026-03-25 02:52:31.270441 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-03-25 02:52:31.270453 | orchestrator | Wednesday 25 March 2026 02:52:23 +0000 (0:00:00.283) 0:00:00.648 *******
2026-03-25 02:52:31.270464 | orchestrator | ok: [testbed-node-3]
2026-03-25 02:52:31.270477 | orchestrator |
2026-03-25 02:52:31.270489 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-25 02:52:31.270502 | orchestrator | Wednesday 25 March 2026 02:52:23 +0000 (0:00:00.234) 0:00:00.883 *******
2026-03-25 02:52:31.270516 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2026-03-25 02:52:31.270529 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2026-03-25 02:52:31.270561 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2026-03-25 02:52:31.270576 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2026-03-25 02:52:31.270589 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2026-03-25 02:52:31.270602 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2026-03-25 02:52:31.270614 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2026-03-25 02:52:31.270626 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2026-03-25 02:52:31.270639 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2026-03-25 02:52:31.270652 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2026-03-25 02:52:31.270694 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2026-03-25 02:52:31.270708 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2026-03-25 02:52:31.270722 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2026-03-25 02:52:31.270735 | orchestrator |
2026-03-25 02:52:31.270748 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-25 02:52:31.270762 | orchestrator | Wednesday 25 March 2026 02:52:23 +0000 (0:00:00.612) 0:00:01.495 *******
2026-03-25 02:52:31.270774 | orchestrator | skipping: [testbed-node-3]
2026-03-25 02:52:31.270788 | orchestrator |
2026-03-25 02:52:31.270802 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-25 02:52:31.270815 | orchestrator | Wednesday 25 March 2026 02:52:24 +0000 (0:00:00.230) 0:00:01.726 *******
2026-03-25 02:52:31.270828 | orchestrator | skipping: [testbed-node-3]
2026-03-25 02:52:31.270841 | orchestrator |
2026-03-25 02:52:31.270855 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-25 02:52:31.270868 | orchestrator | Wednesday 25 March 2026 02:52:24 +0000 (0:00:00.228) 0:00:01.954 *******
2026-03-25 02:52:31.270883 | orchestrator | skipping: [testbed-node-3]
2026-03-25 02:52:31.270896 | orchestrator |
2026-03-25 02:52:31.270908 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-25 02:52:31.270920 | orchestrator | Wednesday 25 March 2026 02:52:24 +0000 (0:00:00.219) 0:00:02.174 *******
2026-03-25 02:52:31.270932 | orchestrator | skipping: [testbed-node-3]
2026-03-25 02:52:31.270945 | orchestrator |
2026-03-25 02:52:31.270959 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-25 02:52:31.270973 | orchestrator | Wednesday 25 March 2026 02:52:24 +0000 (0:00:00.225) 0:00:02.400 *******
2026-03-25 02:52:31.270986 | orchestrator | skipping: [testbed-node-3]
2026-03-25 02:52:31.270997 | orchestrator |
2026-03-25 02:52:31.271010 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-25 02:52:31.271022 | orchestrator | Wednesday 25 March 2026 02:52:25 +0000 (0:00:00.230) 0:00:02.630 *******
2026-03-25 02:52:31.271034 | orchestrator | skipping: [testbed-node-3]
2026-03-25 02:52:31.271047 | orchestrator |
2026-03-25 02:52:31.271060 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-25 02:52:31.271073 | orchestrator | Wednesday 25 March 2026 02:52:25 +0000 (0:00:00.211) 0:00:02.842 *******
2026-03-25 02:52:31.271085 | orchestrator | skipping: [testbed-node-3]
2026-03-25 02:52:31.271099 | orchestrator |
2026-03-25 02:52:31.271110 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-25 02:52:31.271123 | orchestrator | Wednesday 25 March 2026 02:52:25 +0000 (0:00:00.213) 0:00:03.055 *******
2026-03-25 02:52:31.271136 | orchestrator | skipping: [testbed-node-3]
2026-03-25 02:52:31.271149 | orchestrator |
2026-03-25 02:52:31.271164 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-25 02:52:31.271177 | orchestrator | Wednesday 25 March 2026 02:52:25 +0000 (0:00:00.236) 0:00:03.292 *******
2026-03-25 02:52:31.271191 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_5418d243-c22a-425d-8a7d-7c43bd549130)
2026-03-25 02:52:31.271206 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_5418d243-c22a-425d-8a7d-7c43bd549130)
2026-03-25 02:52:31.271217 | orchestrator |
2026-03-25 02:52:31.271225 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-25 02:52:31.271256 | orchestrator | Wednesday 25 March 2026 02:52:26 +0000 (0:00:00.470) 0:00:03.763 *******
2026-03-25 02:52:31.271269 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_e0cf0e31-edea-4833-ac86-8b3021cd24a1)
2026-03-25 02:52:31.271281 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_e0cf0e31-edea-4833-ac86-8b3021cd24a1)
2026-03-25 02:52:31.271322 | orchestrator |
2026-03-25 02:52:31.271337 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-25 02:52:31.271365 | orchestrator | Wednesday 25 March 2026 02:52:26 +0000 (0:00:00.728) 0:00:04.492 *******
2026-03-25 02:52:31.271377 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_eaa5e6a9-2c24-4b33-854e-103871b2e9c6)
2026-03-25 02:52:31.271389 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_eaa5e6a9-2c24-4b33-854e-103871b2e9c6)
2026-03-25 02:52:31.271401 | orchestrator |
2026-03-25 02:52:31.271413 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-25 02:52:31.271426 | orchestrator | Wednesday 25 March 2026 02:52:27 +0000 (0:00:00.800) 0:00:05.292 *******
2026-03-25 02:52:31.271437 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_99e65ea9-8a8c-4114-a95e-6d6b779e8981)
2026-03-25 02:52:31.271460 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_99e65ea9-8a8c-4114-a95e-6d6b779e8981)
2026-03-25 02:52:31.271501 | orchestrator |
2026-03-25 02:52:31.271516 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-25 02:52:31.271541 | orchestrator | Wednesday 25 March 2026 02:52:28 +0000 (0:00:00.993) 0:00:06.286 *******
2026-03-25 02:52:31.271555 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-03-25 02:52:31.271568 | orchestrator |
2026-03-25 02:52:31.271580 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-25 02:52:31.271593 | orchestrator | Wednesday 25 March 2026 02:52:29 +0000 (0:00:00.388) 0:00:06.674 *******
2026-03-25 02:52:31.271602 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2026-03-25 02:52:31.271610 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2026-03-25 02:52:31.271617 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2026-03-25 02:52:31.271626 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2026-03-25 02:52:31.271639 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2026-03-25 02:52:31.271651 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2026-03-25 02:52:31.271676 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2026-03-25 02:52:31.271688 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2026-03-25 02:52:31.271701 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2026-03-25 02:52:31.271715 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2026-03-25 02:52:31.271728 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2026-03-25 02:52:31.271741 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2026-03-25 02:52:31.271754 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2026-03-25 02:52:31.271767 | orchestrator |
2026-03-25 02:52:31.271776 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-25 02:52:31.271783 | orchestrator | Wednesday 25 March 2026 02:52:29 +0000 (0:00:00.460) 0:00:07.134 *******
2026-03-25 02:52:31.271791 | orchestrator | skipping: [testbed-node-3]
2026-03-25 02:52:31.271799 | orchestrator |
2026-03-25 02:52:31.271806 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-25 02:52:31.271814 | orchestrator | Wednesday 25 March 2026 02:52:29 +0000 (0:00:00.248) 0:00:07.383 *******
2026-03-25 02:52:31.271821 | orchestrator | skipping: [testbed-node-3]
2026-03-25 02:52:31.271829 | orchestrator |
2026-03-25 02:52:31.271837 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-25 02:52:31.271844 | orchestrator | Wednesday 25 March 2026 02:52:30 +0000 (0:00:00.310) 0:00:07.693 *******
2026-03-25 02:52:31.271852 | orchestrator | skipping: [testbed-node-3]
2026-03-25 02:52:31.271869 | orchestrator |
2026-03-25 02:52:31.271876 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-25 02:52:31.271884 | orchestrator | Wednesday 25 March 2026 02:52:30 +0000 (0:00:00.233) 0:00:07.927 *******
2026-03-25 02:52:31.271892 | orchestrator | skipping: [testbed-node-3]
2026-03-25 02:52:31.271900 | orchestrator |
2026-03-25 02:52:31.271907 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-25 02:52:31.271915 | orchestrator | Wednesday 25 March 2026 02:52:30 +0000 (0:00:00.232) 0:00:08.160 *******
2026-03-25 02:52:31.271923 | orchestrator | skipping: [testbed-node-3]
2026-03-25 02:52:31.271931 | orchestrator |
2026-03-25 02:52:31.271938 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-25 02:52:31.271946 | orchestrator | Wednesday 25 March 2026 02:52:30 +0000 (0:00:00.233) 0:00:08.394 *******
2026-03-25 02:52:31.271954 | orchestrator | skipping: [testbed-node-3]
2026-03-25 02:52:31.271961 | orchestrator |
2026-03-25 02:52:31.271969 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-25 02:52:31.271977 | orchestrator | Wednesday 25 March 2026 02:52:31 +0000 (0:00:00.208) 0:00:08.603 *******
2026-03-25 02:52:31.271984 | orchestrator | skipping: [testbed-node-3]
2026-03-25 02:52:31.271992 | orchestrator |
2026-03-25 02:52:31.272011 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-25 02:52:40.048675 | orchestrator | Wednesday 25 March 2026 02:52:31 +0000 (0:00:00.236) 0:00:08.839 *******
2026-03-25 02:52:40.048784 | orchestrator | skipping: [testbed-node-3]
2026-03-25 02:52:40.048799 | orchestrator |
2026-03-25 02:52:40.048811 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-25 02:52:40.048821 | orchestrator | Wednesday 25 March 2026 02:52:32 +0000 (0:00:00.743) 0:00:09.582 *******
2026-03-25 02:52:40.048832 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2026-03-25 02:52:40.048842 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2026-03-25 02:52:40.048852 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2026-03-25 02:52:40.048862 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2026-03-25 02:52:40.048871 | orchestrator |
2026-03-25 02:52:40.048881 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-25 02:52:40.048891 | orchestrator | Wednesday 25 March 2026 02:52:32 +0000 (0:00:00.755) 0:00:10.338 *******
2026-03-25 02:52:40.048900 | orchestrator | skipping: [testbed-node-3]
2026-03-25 02:52:40.048910 | orchestrator |
2026-03-25 02:52:40.048919 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-25 02:52:40.048929 | orchestrator | Wednesday 25 March 2026 02:52:32 +0000 (0:00:00.222) 0:00:10.561 *******
2026-03-25 02:52:40.048939 | orchestrator | skipping: [testbed-node-3]
2026-03-25 02:52:40.048948 | orchestrator |
2026-03-25 02:52:40.048975 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-25 02:52:40.048984 | orchestrator | Wednesday 25 March 2026 02:52:33 +0000 (0:00:00.230) 0:00:10.791 *******
2026-03-25 02:52:40.048994 | orchestrator | skipping: [testbed-node-3]
2026-03-25 02:52:40.049003 | orchestrator |
2026-03-25 02:52:40.049012 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-25 02:52:40.049022 | orchestrator | Wednesday 25 March 2026 02:52:33 +0000 (0:00:00.236) 0:00:11.028 *******
2026-03-25 02:52:40.049031 | orchestrator | skipping: [testbed-node-3]
2026-03-25 02:52:40.049040 | orchestrator |
2026-03-25 02:52:40.049050 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2026-03-25 02:52:40.049059 | orchestrator | Wednesday 25 March 2026 02:52:33 +0000 (0:00:00.222) 0:00:11.250 *******
2026-03-25 02:52:40.049069 | orchestrator | skipping: [testbed-node-3]
2026-03-25 02:52:40.049078 | orchestrator |
2026-03-25 02:52:40.049087 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2026-03-25 02:52:40.049097 | orchestrator | Wednesday 25 March 2026 02:52:33 +0000 (0:00:00.144) 0:00:11.395 *******
2026-03-25 02:52:40.049107 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'a7f517e2-016b-5c10-ac21-20c48339115f'}})
2026-03-25 02:52:40.049139 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '2eb637af-fcba-56ed-b416-856a8f376a6e'}})
2026-03-25 02:52:40.049149 | orchestrator |
2026-03-25 02:52:40.049159 | orchestrator | TASK [Create block VGs] ********************************************************
2026-03-25 02:52:40.049169 | orchestrator | Wednesday 25 March 2026 02:52:34 +0000 (0:00:00.230) 0:00:11.625 *******
2026-03-25 02:52:40.049180 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-a7f517e2-016b-5c10-ac21-20c48339115f', 'data_vg': 'ceph-a7f517e2-016b-5c10-ac21-20c48339115f'})
2026-03-25 02:52:40.049192 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-2eb637af-fcba-56ed-b416-856a8f376a6e', 'data_vg': 'ceph-2eb637af-fcba-56ed-b416-856a8f376a6e'})
2026-03-25 02:52:40.049201 | orchestrator |
2026-03-25 02:52:40.049213 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-03-25 02:52:40.049225 | orchestrator | Wednesday 25 March 2026 02:52:36 +0000 (0:00:02.155) 0:00:13.781 *******
2026-03-25 02:52:40.049236 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a7f517e2-016b-5c10-ac21-20c48339115f', 'data_vg': 'ceph-a7f517e2-016b-5c10-ac21-20c48339115f'})
2026-03-25 02:52:40.049250 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2eb637af-fcba-56ed-b416-856a8f376a6e', 'data_vg': 'ceph-2eb637af-fcba-56ed-b416-856a8f376a6e'})
2026-03-25 02:52:40.049261 | orchestrator | skipping: [testbed-node-3]
2026-03-25 02:52:40.049272 | orchestrator |
2026-03-25 02:52:40.049283 | orchestrator | TASK [Create block LVs] ********************************************************
2026-03-25 02:52:40.049294 | orchestrator | Wednesday 25 March 2026 02:52:36 +0000 (0:00:00.164) 0:00:13.945 *******
2026-03-25 02:52:40.049305 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-a7f517e2-016b-5c10-ac21-20c48339115f', 'data_vg': 'ceph-a7f517e2-016b-5c10-ac21-20c48339115f'})
2026-03-25 02:52:40.049343 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-2eb637af-fcba-56ed-b416-856a8f376a6e', 'data_vg': 'ceph-2eb637af-fcba-56ed-b416-856a8f376a6e'})
2026-03-25 02:52:40.049354 | orchestrator |
2026-03-25 02:52:40.049365 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-03-25 02:52:40.049376 | orchestrator | Wednesday 25 March 2026 02:52:37 +0000 (0:00:01.446) 0:00:15.392 *******
2026-03-25 02:52:40.049387 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a7f517e2-016b-5c10-ac21-20c48339115f', 'data_vg': 'ceph-a7f517e2-016b-5c10-ac21-20c48339115f'})
2026-03-25 02:52:40.049399 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2eb637af-fcba-56ed-b416-856a8f376a6e', 'data_vg': 'ceph-2eb637af-fcba-56ed-b416-856a8f376a6e'})
2026-03-25 02:52:40.049410 | orchestrator | skipping: [testbed-node-3]
2026-03-25 02:52:40.049421 | orchestrator |
2026-03-25 02:52:40.049432 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-03-25 02:52:40.049443 | orchestrator | Wednesday 25 March 2026 02:52:37 +0000 (0:00:00.170) 0:00:15.562 *******
2026-03-25 02:52:40.049471 | orchestrator | skipping: [testbed-node-3]
2026-03-25 02:52:40.049484 | orchestrator |
2026-03-25 02:52:40.049494 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-03-25 02:52:40.049506 | orchestrator | Wednesday 25 March 2026 02:52:38 +0000 (0:00:00.390) 0:00:15.953 *******
2026-03-25 02:52:40.049518 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a7f517e2-016b-5c10-ac21-20c48339115f', 'data_vg': 'ceph-a7f517e2-016b-5c10-ac21-20c48339115f'})
2026-03-25 02:52:40.049529 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2eb637af-fcba-56ed-b416-856a8f376a6e', 'data_vg': 'ceph-2eb637af-fcba-56ed-b416-856a8f376a6e'})
2026-03-25 02:52:40.049540 | orchestrator | skipping: [testbed-node-3]
2026-03-25 02:52:40.049551 | orchestrator |
2026-03-25 02:52:40.049561 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-03-25 02:52:40.049570 | orchestrator | Wednesday 25 March 2026 02:52:38 +0000 (0:00:00.167) 0:00:16.120 *******
2026-03-25 02:52:40.049587 | orchestrator | skipping: [testbed-node-3]
2026-03-25 02:52:40.049597 | orchestrator |
2026-03-25 02:52:40.049607 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-03-25 02:52:40.049616 | orchestrator | Wednesday 25 March 2026 02:52:38 +0000 (0:00:00.159) 0:00:16.280 *******
2026-03-25 02:52:40.049631 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a7f517e2-016b-5c10-ac21-20c48339115f', 'data_vg': 'ceph-a7f517e2-016b-5c10-ac21-20c48339115f'})
2026-03-25 02:52:40.049641 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2eb637af-fcba-56ed-b416-856a8f376a6e', 'data_vg': 'ceph-2eb637af-fcba-56ed-b416-856a8f376a6e'})
2026-03-25 02:52:40.049651 | orchestrator | skipping: [testbed-node-3]
2026-03-25 02:52:40.049660 | orchestrator |
2026-03-25 02:52:40.049670 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-03-25 02:52:40.049680 | orchestrator | Wednesday 25 March 2026 02:52:38 +0000 (0:00:00.161) 0:00:16.442 *******
2026-03-25 02:52:40.049689 | orchestrator | skipping: [testbed-node-3]
2026-03-25 02:52:40.049698 | orchestrator |
2026-03-25 02:52:40.049708 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-03-25 02:52:40.049717 | orchestrator | Wednesday 25 March 2026 02:52:39 +0000 (0:00:00.147) 0:00:16.590 *******
2026-03-25 02:52:40.049727 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a7f517e2-016b-5c10-ac21-20c48339115f', 'data_vg': 'ceph-a7f517e2-016b-5c10-ac21-20c48339115f'})
2026-03-25 02:52:40.049736 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2eb637af-fcba-56ed-b416-856a8f376a6e', 'data_vg': 'ceph-2eb637af-fcba-56ed-b416-856a8f376a6e'})
2026-03-25 02:52:40.049746 | orchestrator | skipping: [testbed-node-3]
2026-03-25 02:52:40.049755 | orchestrator |
2026-03-25 02:52:40.049765 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-03-25 02:52:40.049775 | orchestrator | Wednesday 25 March 2026 02:52:39 +0000 (0:00:00.177) 0:00:16.767 *******
2026-03-25 02:52:40.049784 | orchestrator | ok: [testbed-node-3]
2026-03-25 02:52:40.049794 | orchestrator |
2026-03-25 02:52:40.049804 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-03-25 02:52:40.049813 | orchestrator | Wednesday 25 March 2026 02:52:39 +0000 (0:00:00.176) 0:00:16.944 *******
2026-03-25 02:52:40.049823 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a7f517e2-016b-5c10-ac21-20c48339115f', 'data_vg': 'ceph-a7f517e2-016b-5c10-ac21-20c48339115f'})
2026-03-25 02:52:40.049833 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2eb637af-fcba-56ed-b416-856a8f376a6e', 'data_vg': 'ceph-2eb637af-fcba-56ed-b416-856a8f376a6e'})
2026-03-25 02:52:40.049842 | orchestrator | skipping: [testbed-node-3]
2026-03-25 02:52:40.049852 | orchestrator |
2026-03-25 02:52:40.049861 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-03-25 02:52:40.049871 | orchestrator | Wednesday 25 March 2026 02:52:39 +0000 (0:00:00.178) 0:00:17.122 *******
2026-03-25 02:52:40.049881 | orchestrator | skipping: [testbed-node-3] =>
(item={'data': 'osd-block-a7f517e2-016b-5c10-ac21-20c48339115f', 'data_vg': 'ceph-a7f517e2-016b-5c10-ac21-20c48339115f'})  2026-03-25 02:52:40.049891 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2eb637af-fcba-56ed-b416-856a8f376a6e', 'data_vg': 'ceph-2eb637af-fcba-56ed-b416-856a8f376a6e'})  2026-03-25 02:52:40.049900 | orchestrator | skipping: [testbed-node-3] 2026-03-25 02:52:40.049910 | orchestrator | 2026-03-25 02:52:40.049919 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-03-25 02:52:40.049929 | orchestrator | Wednesday 25 March 2026 02:52:39 +0000 (0:00:00.172) 0:00:17.295 ******* 2026-03-25 02:52:40.049939 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a7f517e2-016b-5c10-ac21-20c48339115f', 'data_vg': 'ceph-a7f517e2-016b-5c10-ac21-20c48339115f'})  2026-03-25 02:52:40.049948 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2eb637af-fcba-56ed-b416-856a8f376a6e', 'data_vg': 'ceph-2eb637af-fcba-56ed-b416-856a8f376a6e'})  2026-03-25 02:52:40.049964 | orchestrator | skipping: [testbed-node-3] 2026-03-25 02:52:40.049973 | orchestrator | 2026-03-25 02:52:40.049983 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-03-25 02:52:40.049992 | orchestrator | Wednesday 25 March 2026 02:52:39 +0000 (0:00:00.171) 0:00:17.467 ******* 2026-03-25 02:52:40.050002 | orchestrator | skipping: [testbed-node-3] 2026-03-25 02:52:40.050011 | orchestrator | 2026-03-25 02:52:40.050087 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-03-25 02:52:40.050105 | orchestrator | Wednesday 25 March 2026 02:52:40 +0000 (0:00:00.153) 0:00:17.621 ******* 2026-03-25 02:52:47.099487 | orchestrator | skipping: [testbed-node-3] 2026-03-25 02:52:47.099565 | orchestrator | 2026-03-25 02:52:47.099572 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a 
DB+WAL VG] ***************** 2026-03-25 02:52:47.099577 | orchestrator | Wednesday 25 March 2026 02:52:40 +0000 (0:00:00.164) 0:00:17.785 ******* 2026-03-25 02:52:47.099581 | orchestrator | skipping: [testbed-node-3] 2026-03-25 02:52:47.099586 | orchestrator | 2026-03-25 02:52:47.099590 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2026-03-25 02:52:47.099594 | orchestrator | Wednesday 25 March 2026 02:52:40 +0000 (0:00:00.404) 0:00:18.190 ******* 2026-03-25 02:52:47.099598 | orchestrator | ok: [testbed-node-3] => { 2026-03-25 02:52:47.099603 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2026-03-25 02:52:47.099607 | orchestrator | } 2026-03-25 02:52:47.099611 | orchestrator | 2026-03-25 02:52:47.099615 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2026-03-25 02:52:47.099619 | orchestrator | Wednesday 25 March 2026 02:52:40 +0000 (0:00:00.150) 0:00:18.340 ******* 2026-03-25 02:52:47.099623 | orchestrator | ok: [testbed-node-3] => { 2026-03-25 02:52:47.099626 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-03-25 02:52:47.099630 | orchestrator | } 2026-03-25 02:52:47.099634 | orchestrator | 2026-03-25 02:52:47.099638 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-03-25 02:52:47.099655 | orchestrator | Wednesday 25 March 2026 02:52:40 +0000 (0:00:00.163) 0:00:18.503 ******* 2026-03-25 02:52:47.099659 | orchestrator | ok: [testbed-node-3] => { 2026-03-25 02:52:47.099662 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2026-03-25 02:52:47.099667 | orchestrator | } 2026-03-25 02:52:47.099670 | orchestrator | 2026-03-25 02:52:47.099674 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2026-03-25 02:52:47.099678 | orchestrator | Wednesday 25 March 2026 02:52:41 +0000 (0:00:00.172) 0:00:18.676 ******* 2026-03-25 02:52:47.099682 | orchestrator | ok: 
[testbed-node-3] 2026-03-25 02:52:47.099686 | orchestrator | 2026-03-25 02:52:47.099689 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-03-25 02:52:47.099693 | orchestrator | Wednesday 25 March 2026 02:52:41 +0000 (0:00:00.627) 0:00:19.304 ******* 2026-03-25 02:52:47.099697 | orchestrator | ok: [testbed-node-3] 2026-03-25 02:52:47.099701 | orchestrator | 2026-03-25 02:52:47.099704 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-03-25 02:52:47.099708 | orchestrator | Wednesday 25 March 2026 02:52:42 +0000 (0:00:00.496) 0:00:19.800 ******* 2026-03-25 02:52:47.099712 | orchestrator | ok: [testbed-node-3] 2026-03-25 02:52:47.099715 | orchestrator | 2026-03-25 02:52:47.099719 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2026-03-25 02:52:47.099723 | orchestrator | Wednesday 25 March 2026 02:52:42 +0000 (0:00:00.479) 0:00:20.280 ******* 2026-03-25 02:52:47.099726 | orchestrator | ok: [testbed-node-3] 2026-03-25 02:52:47.099730 | orchestrator | 2026-03-25 02:52:47.099734 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-03-25 02:52:47.099738 | orchestrator | Wednesday 25 March 2026 02:52:42 +0000 (0:00:00.162) 0:00:20.442 ******* 2026-03-25 02:52:47.099741 | orchestrator | skipping: [testbed-node-3] 2026-03-25 02:52:47.099745 | orchestrator | 2026-03-25 02:52:47.099749 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2026-03-25 02:52:47.099789 | orchestrator | Wednesday 25 March 2026 02:52:43 +0000 (0:00:00.140) 0:00:20.583 ******* 2026-03-25 02:52:47.099793 | orchestrator | skipping: [testbed-node-3] 2026-03-25 02:52:47.099797 | orchestrator | 2026-03-25 02:52:47.099801 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-03-25 02:52:47.099805 | orchestrator | 
Wednesday 25 March 2026 02:52:43 +0000 (0:00:00.130) 0:00:20.713 ******* 2026-03-25 02:52:47.099808 | orchestrator | ok: [testbed-node-3] => { 2026-03-25 02:52:47.099812 | orchestrator |  "vgs_report": { 2026-03-25 02:52:47.099816 | orchestrator |  "vg": [] 2026-03-25 02:52:47.099820 | orchestrator |  } 2026-03-25 02:52:47.099824 | orchestrator | } 2026-03-25 02:52:47.099827 | orchestrator | 2026-03-25 02:52:47.099831 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-03-25 02:52:47.099835 | orchestrator | Wednesday 25 March 2026 02:52:43 +0000 (0:00:00.163) 0:00:20.877 ******* 2026-03-25 02:52:47.099839 | orchestrator | skipping: [testbed-node-3] 2026-03-25 02:52:47.099843 | orchestrator | 2026-03-25 02:52:47.099847 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2026-03-25 02:52:47.099850 | orchestrator | Wednesday 25 March 2026 02:52:43 +0000 (0:00:00.179) 0:00:21.056 ******* 2026-03-25 02:52:47.099854 | orchestrator | skipping: [testbed-node-3] 2026-03-25 02:52:47.099858 | orchestrator | 2026-03-25 02:52:47.099862 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-03-25 02:52:47.099866 | orchestrator | Wednesday 25 March 2026 02:52:43 +0000 (0:00:00.437) 0:00:21.493 ******* 2026-03-25 02:52:47.099869 | orchestrator | skipping: [testbed-node-3] 2026-03-25 02:52:47.099873 | orchestrator | 2026-03-25 02:52:47.099877 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2026-03-25 02:52:47.099880 | orchestrator | Wednesday 25 March 2026 02:52:44 +0000 (0:00:00.164) 0:00:21.657 ******* 2026-03-25 02:52:47.099884 | orchestrator | skipping: [testbed-node-3] 2026-03-25 02:52:47.099888 | orchestrator | 2026-03-25 02:52:47.099892 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-03-25 02:52:47.099895 | orchestrator | 
Wednesday 25 March 2026 02:52:44 +0000 (0:00:00.150) 0:00:21.808 ******* 2026-03-25 02:52:47.099899 | orchestrator | skipping: [testbed-node-3] 2026-03-25 02:52:47.099903 | orchestrator | 2026-03-25 02:52:47.099906 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-03-25 02:52:47.099910 | orchestrator | Wednesday 25 March 2026 02:52:44 +0000 (0:00:00.160) 0:00:21.969 ******* 2026-03-25 02:52:47.099914 | orchestrator | skipping: [testbed-node-3] 2026-03-25 02:52:47.099917 | orchestrator | 2026-03-25 02:52:47.099921 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2026-03-25 02:52:47.099925 | orchestrator | Wednesday 25 March 2026 02:52:44 +0000 (0:00:00.141) 0:00:22.110 ******* 2026-03-25 02:52:47.099929 | orchestrator | skipping: [testbed-node-3] 2026-03-25 02:52:47.099932 | orchestrator | 2026-03-25 02:52:47.099936 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-03-25 02:52:47.099940 | orchestrator | Wednesday 25 March 2026 02:52:44 +0000 (0:00:00.132) 0:00:22.242 ******* 2026-03-25 02:52:47.099955 | orchestrator | skipping: [testbed-node-3] 2026-03-25 02:52:47.099960 | orchestrator | 2026-03-25 02:52:47.099964 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-03-25 02:52:47.099968 | orchestrator | Wednesday 25 March 2026 02:52:44 +0000 (0:00:00.151) 0:00:22.394 ******* 2026-03-25 02:52:47.099972 | orchestrator | skipping: [testbed-node-3] 2026-03-25 02:52:47.099977 | orchestrator | 2026-03-25 02:52:47.099981 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2026-03-25 02:52:47.099986 | orchestrator | Wednesday 25 March 2026 02:52:44 +0000 (0:00:00.168) 0:00:22.562 ******* 2026-03-25 02:52:47.099990 | orchestrator | skipping: [testbed-node-3] 2026-03-25 02:52:47.099994 | orchestrator | 2026-03-25 02:52:47.099998 
| orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-03-25 02:52:47.100003 | orchestrator | Wednesday 25 March 2026 02:52:45 +0000 (0:00:00.165) 0:00:22.728 ******* 2026-03-25 02:52:47.100012 | orchestrator | skipping: [testbed-node-3] 2026-03-25 02:52:47.100016 | orchestrator | 2026-03-25 02:52:47.100020 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-03-25 02:52:47.100025 | orchestrator | Wednesday 25 March 2026 02:52:45 +0000 (0:00:00.157) 0:00:22.885 ******* 2026-03-25 02:52:47.100029 | orchestrator | skipping: [testbed-node-3] 2026-03-25 02:52:47.100033 | orchestrator | 2026-03-25 02:52:47.100040 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-03-25 02:52:47.100044 | orchestrator | Wednesday 25 March 2026 02:52:45 +0000 (0:00:00.163) 0:00:23.049 ******* 2026-03-25 02:52:47.100049 | orchestrator | skipping: [testbed-node-3] 2026-03-25 02:52:47.100053 | orchestrator | 2026-03-25 02:52:47.100057 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-03-25 02:52:47.100062 | orchestrator | Wednesday 25 March 2026 02:52:45 +0000 (0:00:00.142) 0:00:23.191 ******* 2026-03-25 02:52:47.100066 | orchestrator | skipping: [testbed-node-3] 2026-03-25 02:52:47.100070 | orchestrator | 2026-03-25 02:52:47.100074 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-03-25 02:52:47.100079 | orchestrator | Wednesday 25 March 2026 02:52:46 +0000 (0:00:00.405) 0:00:23.596 ******* 2026-03-25 02:52:47.100084 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a7f517e2-016b-5c10-ac21-20c48339115f', 'data_vg': 'ceph-a7f517e2-016b-5c10-ac21-20c48339115f'})  2026-03-25 02:52:47.100090 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2eb637af-fcba-56ed-b416-856a8f376a6e', 'data_vg': 
'ceph-2eb637af-fcba-56ed-b416-856a8f376a6e'})  2026-03-25 02:52:47.100095 | orchestrator | skipping: [testbed-node-3] 2026-03-25 02:52:47.100099 | orchestrator | 2026-03-25 02:52:47.100103 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-03-25 02:52:47.100107 | orchestrator | Wednesday 25 March 2026 02:52:46 +0000 (0:00:00.183) 0:00:23.780 ******* 2026-03-25 02:52:47.100113 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a7f517e2-016b-5c10-ac21-20c48339115f', 'data_vg': 'ceph-a7f517e2-016b-5c10-ac21-20c48339115f'})  2026-03-25 02:52:47.100119 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2eb637af-fcba-56ed-b416-856a8f376a6e', 'data_vg': 'ceph-2eb637af-fcba-56ed-b416-856a8f376a6e'})  2026-03-25 02:52:47.100125 | orchestrator | skipping: [testbed-node-3] 2026-03-25 02:52:47.100131 | orchestrator | 2026-03-25 02:52:47.100137 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-03-25 02:52:47.100143 | orchestrator | Wednesday 25 March 2026 02:52:46 +0000 (0:00:00.174) 0:00:23.954 ******* 2026-03-25 02:52:47.100149 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a7f517e2-016b-5c10-ac21-20c48339115f', 'data_vg': 'ceph-a7f517e2-016b-5c10-ac21-20c48339115f'})  2026-03-25 02:52:47.100156 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2eb637af-fcba-56ed-b416-856a8f376a6e', 'data_vg': 'ceph-2eb637af-fcba-56ed-b416-856a8f376a6e'})  2026-03-25 02:52:47.100162 | orchestrator | skipping: [testbed-node-3] 2026-03-25 02:52:47.100168 | orchestrator | 2026-03-25 02:52:47.100174 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2026-03-25 02:52:47.100180 | orchestrator | Wednesday 25 March 2026 02:52:46 +0000 (0:00:00.183) 0:00:24.138 ******* 2026-03-25 02:52:47.100186 | orchestrator | skipping: [testbed-node-3] => (item={'data': 
'osd-block-a7f517e2-016b-5c10-ac21-20c48339115f', 'data_vg': 'ceph-a7f517e2-016b-5c10-ac21-20c48339115f'})  2026-03-25 02:52:47.100192 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2eb637af-fcba-56ed-b416-856a8f376a6e', 'data_vg': 'ceph-2eb637af-fcba-56ed-b416-856a8f376a6e'})  2026-03-25 02:52:47.100198 | orchestrator | skipping: [testbed-node-3] 2026-03-25 02:52:47.100205 | orchestrator | 2026-03-25 02:52:47.100211 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-03-25 02:52:47.100217 | orchestrator | Wednesday 25 March 2026 02:52:46 +0000 (0:00:00.159) 0:00:24.298 ******* 2026-03-25 02:52:47.100230 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a7f517e2-016b-5c10-ac21-20c48339115f', 'data_vg': 'ceph-a7f517e2-016b-5c10-ac21-20c48339115f'})  2026-03-25 02:52:47.100236 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2eb637af-fcba-56ed-b416-856a8f376a6e', 'data_vg': 'ceph-2eb637af-fcba-56ed-b416-856a8f376a6e'})  2026-03-25 02:52:47.100241 | orchestrator | skipping: [testbed-node-3] 2026-03-25 02:52:47.100248 | orchestrator | 2026-03-25 02:52:47.100252 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-03-25 02:52:47.100256 | orchestrator | Wednesday 25 March 2026 02:52:46 +0000 (0:00:00.180) 0:00:24.478 ******* 2026-03-25 02:52:47.100264 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a7f517e2-016b-5c10-ac21-20c48339115f', 'data_vg': 'ceph-a7f517e2-016b-5c10-ac21-20c48339115f'})  2026-03-25 02:52:53.055581 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2eb637af-fcba-56ed-b416-856a8f376a6e', 'data_vg': 'ceph-2eb637af-fcba-56ed-b416-856a8f376a6e'})  2026-03-25 02:52:53.055681 | orchestrator | skipping: [testbed-node-3] 2026-03-25 02:52:53.055696 | orchestrator | 2026-03-25 02:52:53.055705 | orchestrator | TASK [Create DB LVs for 
ceph_db_wal_devices] *********************************** 2026-03-25 02:52:53.055717 | orchestrator | Wednesday 25 March 2026 02:52:47 +0000 (0:00:00.195) 0:00:24.673 ******* 2026-03-25 02:52:53.055726 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a7f517e2-016b-5c10-ac21-20c48339115f', 'data_vg': 'ceph-a7f517e2-016b-5c10-ac21-20c48339115f'})  2026-03-25 02:52:53.055736 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2eb637af-fcba-56ed-b416-856a8f376a6e', 'data_vg': 'ceph-2eb637af-fcba-56ed-b416-856a8f376a6e'})  2026-03-25 02:52:53.055745 | orchestrator | skipping: [testbed-node-3] 2026-03-25 02:52:53.055755 | orchestrator | 2026-03-25 02:52:53.055780 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-03-25 02:52:53.055789 | orchestrator | Wednesday 25 March 2026 02:52:47 +0000 (0:00:00.169) 0:00:24.842 ******* 2026-03-25 02:52:53.055799 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a7f517e2-016b-5c10-ac21-20c48339115f', 'data_vg': 'ceph-a7f517e2-016b-5c10-ac21-20c48339115f'})  2026-03-25 02:52:53.055808 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2eb637af-fcba-56ed-b416-856a8f376a6e', 'data_vg': 'ceph-2eb637af-fcba-56ed-b416-856a8f376a6e'})  2026-03-25 02:52:53.055818 | orchestrator | skipping: [testbed-node-3] 2026-03-25 02:52:53.055839 | orchestrator | 2026-03-25 02:52:53.055857 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-03-25 02:52:53.055866 | orchestrator | Wednesday 25 March 2026 02:52:47 +0000 (0:00:00.162) 0:00:25.005 ******* 2026-03-25 02:52:53.055876 | orchestrator | ok: [testbed-node-3] 2026-03-25 02:52:53.055887 | orchestrator | 2026-03-25 02:52:53.055896 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-03-25 02:52:53.055905 | orchestrator | Wednesday 25 March 2026 02:52:47 +0000 
(0:00:00.522) 0:00:25.527 ******* 2026-03-25 02:52:53.055914 | orchestrator | ok: [testbed-node-3] 2026-03-25 02:52:53.055924 | orchestrator | 2026-03-25 02:52:53.055933 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-03-25 02:52:53.055942 | orchestrator | Wednesday 25 March 2026 02:52:48 +0000 (0:00:00.541) 0:00:26.069 ******* 2026-03-25 02:52:53.055951 | orchestrator | ok: [testbed-node-3] 2026-03-25 02:52:53.055960 | orchestrator | 2026-03-25 02:52:53.055968 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-03-25 02:52:53.055978 | orchestrator | Wednesday 25 March 2026 02:52:48 +0000 (0:00:00.180) 0:00:26.249 ******* 2026-03-25 02:52:53.055986 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-2eb637af-fcba-56ed-b416-856a8f376a6e', 'vg_name': 'ceph-2eb637af-fcba-56ed-b416-856a8f376a6e'}) 2026-03-25 02:52:53.055996 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-a7f517e2-016b-5c10-ac21-20c48339115f', 'vg_name': 'ceph-a7f517e2-016b-5c10-ac21-20c48339115f'}) 2026-03-25 02:52:53.056031 | orchestrator | 2026-03-25 02:52:53.056041 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-03-25 02:52:53.056051 | orchestrator | Wednesday 25 March 2026 02:52:48 +0000 (0:00:00.190) 0:00:26.439 ******* 2026-03-25 02:52:53.056060 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a7f517e2-016b-5c10-ac21-20c48339115f', 'data_vg': 'ceph-a7f517e2-016b-5c10-ac21-20c48339115f'})  2026-03-25 02:52:53.056069 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2eb637af-fcba-56ed-b416-856a8f376a6e', 'data_vg': 'ceph-2eb637af-fcba-56ed-b416-856a8f376a6e'})  2026-03-25 02:52:53.056078 | orchestrator | skipping: [testbed-node-3] 2026-03-25 02:52:53.056086 | orchestrator | 2026-03-25 02:52:53.056091 | orchestrator | TASK [Fail if DB LV defined in 
lvm_volumes is missing] ************************* 2026-03-25 02:52:53.056096 | orchestrator | Wednesday 25 March 2026 02:52:49 +0000 (0:00:00.443) 0:00:26.883 ******* 2026-03-25 02:52:53.056102 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a7f517e2-016b-5c10-ac21-20c48339115f', 'data_vg': 'ceph-a7f517e2-016b-5c10-ac21-20c48339115f'})  2026-03-25 02:52:53.056108 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2eb637af-fcba-56ed-b416-856a8f376a6e', 'data_vg': 'ceph-2eb637af-fcba-56ed-b416-856a8f376a6e'})  2026-03-25 02:52:53.056113 | orchestrator | skipping: [testbed-node-3] 2026-03-25 02:52:53.056119 | orchestrator | 2026-03-25 02:52:53.056126 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-03-25 02:52:53.056132 | orchestrator | Wednesday 25 March 2026 02:52:49 +0000 (0:00:00.171) 0:00:27.054 ******* 2026-03-25 02:52:53.056139 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a7f517e2-016b-5c10-ac21-20c48339115f', 'data_vg': 'ceph-a7f517e2-016b-5c10-ac21-20c48339115f'})  2026-03-25 02:52:53.056145 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2eb637af-fcba-56ed-b416-856a8f376a6e', 'data_vg': 'ceph-2eb637af-fcba-56ed-b416-856a8f376a6e'})  2026-03-25 02:52:53.056152 | orchestrator | skipping: [testbed-node-3] 2026-03-25 02:52:53.056158 | orchestrator | 2026-03-25 02:52:53.056164 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-03-25 02:52:53.056170 | orchestrator | Wednesday 25 March 2026 02:52:49 +0000 (0:00:00.174) 0:00:27.229 ******* 2026-03-25 02:52:53.056191 | orchestrator | ok: [testbed-node-3] => { 2026-03-25 02:52:53.056197 | orchestrator |  "lvm_report": { 2026-03-25 02:52:53.056204 | orchestrator |  "lv": [ 2026-03-25 02:52:53.056210 | orchestrator |  { 2026-03-25 02:52:53.056216 | orchestrator |  "lv_name": 
"osd-block-2eb637af-fcba-56ed-b416-856a8f376a6e", 2026-03-25 02:52:53.056223 | orchestrator |  "vg_name": "ceph-2eb637af-fcba-56ed-b416-856a8f376a6e" 2026-03-25 02:52:53.056230 | orchestrator |  }, 2026-03-25 02:52:53.056236 | orchestrator |  { 2026-03-25 02:52:53.056242 | orchestrator |  "lv_name": "osd-block-a7f517e2-016b-5c10-ac21-20c48339115f", 2026-03-25 02:52:53.056248 | orchestrator |  "vg_name": "ceph-a7f517e2-016b-5c10-ac21-20c48339115f" 2026-03-25 02:52:53.056255 | orchestrator |  } 2026-03-25 02:52:53.056261 | orchestrator |  ], 2026-03-25 02:52:53.056267 | orchestrator |  "pv": [ 2026-03-25 02:52:53.056273 | orchestrator |  { 2026-03-25 02:52:53.056279 | orchestrator |  "pv_name": "/dev/sdb", 2026-03-25 02:52:53.056285 | orchestrator |  "vg_name": "ceph-a7f517e2-016b-5c10-ac21-20c48339115f" 2026-03-25 02:52:53.056292 | orchestrator |  }, 2026-03-25 02:52:53.056298 | orchestrator |  { 2026-03-25 02:52:53.056308 | orchestrator |  "pv_name": "/dev/sdc", 2026-03-25 02:52:53.056315 | orchestrator |  "vg_name": "ceph-2eb637af-fcba-56ed-b416-856a8f376a6e" 2026-03-25 02:52:53.056321 | orchestrator |  } 2026-03-25 02:52:53.056327 | orchestrator |  ] 2026-03-25 02:52:53.056362 | orchestrator |  } 2026-03-25 02:52:53.056374 | orchestrator | } 2026-03-25 02:52:53.056386 | orchestrator | 2026-03-25 02:52:53.056393 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-03-25 02:52:53.056399 | orchestrator | 2026-03-25 02:52:53.056406 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-03-25 02:52:53.056413 | orchestrator | Wednesday 25 March 2026 02:52:50 +0000 (0:00:00.352) 0:00:27.581 ******* 2026-03-25 02:52:53.056419 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-03-25 02:52:53.056426 | orchestrator | 2026-03-25 02:52:53.056431 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-03-25 
02:52:53.056437 | orchestrator | Wednesday 25 March 2026 02:52:50 +0000 (0:00:00.318) 0:00:27.899 ******* 2026-03-25 02:52:53.056442 | orchestrator | ok: [testbed-node-4] 2026-03-25 02:52:53.056448 | orchestrator | 2026-03-25 02:52:53.056453 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-25 02:52:53.056459 | orchestrator | Wednesday 25 March 2026 02:52:50 +0000 (0:00:00.274) 0:00:28.173 ******* 2026-03-25 02:52:53.056464 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-03-25 02:52:53.056470 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-03-25 02:52:53.056475 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-03-25 02:52:53.056480 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-03-25 02:52:53.056486 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-03-25 02:52:53.056491 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-03-25 02:52:53.056497 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-03-25 02:52:53.056502 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-03-25 02:52:53.056508 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-03-25 02:52:53.056513 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-03-25 02:52:53.056519 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-03-25 02:52:53.056524 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-03-25 02:52:53.056529 | 
orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2026-03-25 02:52:53.056535 | orchestrator | 2026-03-25 02:52:53.056540 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-25 02:52:53.056546 | orchestrator | Wednesday 25 March 2026 02:52:51 +0000 (0:00:00.490) 0:00:28.664 ******* 2026-03-25 02:52:53.056551 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:52:53.056556 | orchestrator | 2026-03-25 02:52:53.056562 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-25 02:52:53.056567 | orchestrator | Wednesday 25 March 2026 02:52:51 +0000 (0:00:00.254) 0:00:28.918 ******* 2026-03-25 02:52:53.056573 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:52:53.056578 | orchestrator | 2026-03-25 02:52:53.056583 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-25 02:52:53.056589 | orchestrator | Wednesday 25 March 2026 02:52:52 +0000 (0:00:00.769) 0:00:29.688 ******* 2026-03-25 02:52:53.056594 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:52:53.056599 | orchestrator | 2026-03-25 02:52:53.056604 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-25 02:52:53.056609 | orchestrator | Wednesday 25 March 2026 02:52:52 +0000 (0:00:00.238) 0:00:29.927 ******* 2026-03-25 02:52:53.056613 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:52:53.056618 | orchestrator | 2026-03-25 02:52:53.056623 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-25 02:52:53.056628 | orchestrator | Wednesday 25 March 2026 02:52:52 +0000 (0:00:00.247) 0:00:30.175 ******* 2026-03-25 02:52:53.056636 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:52:53.056641 | orchestrator | 2026-03-25 02:52:53.056646 | orchestrator | TASK [Add known links to the 
list of available block devices] ****************** 2026-03-25 02:52:53.056651 | orchestrator | Wednesday 25 March 2026 02:52:52 +0000 (0:00:00.227) 0:00:30.403 ******* 2026-03-25 02:52:53.056656 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:52:53.056661 | orchestrator | 2026-03-25 02:52:53.056670 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-25 02:53:05.263028 | orchestrator | Wednesday 25 March 2026 02:52:53 +0000 (0:00:00.225) 0:00:30.629 ******* 2026-03-25 02:53:05.263123 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:53:05.263129 | orchestrator | 2026-03-25 02:53:05.263134 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-25 02:53:05.263138 | orchestrator | Wednesday 25 March 2026 02:52:53 +0000 (0:00:00.223) 0:00:30.852 ******* 2026-03-25 02:53:05.263142 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:53:05.263146 | orchestrator | 2026-03-25 02:53:05.263151 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-25 02:53:05.263155 | orchestrator | Wednesday 25 March 2026 02:52:53 +0000 (0:00:00.253) 0:00:31.105 ******* 2026-03-25 02:53:05.263158 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_6cb51c54-ae34-41ee-aa7a-55f1cdeeb529) 2026-03-25 02:53:05.263163 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_6cb51c54-ae34-41ee-aa7a-55f1cdeeb529) 2026-03-25 02:53:05.263167 | orchestrator | 2026-03-25 02:53:05.263184 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-25 02:53:05.263188 | orchestrator | Wednesday 25 March 2026 02:52:54 +0000 (0:00:00.520) 0:00:31.626 ******* 2026-03-25 02:53:05.263191 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_10d736b4-dcf8-42aa-aae6-a1381d72468f) 2026-03-25 02:53:05.263195 | orchestrator | ok: 
[testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_10d736b4-dcf8-42aa-aae6-a1381d72468f) 2026-03-25 02:53:05.263199 | orchestrator | 2026-03-25 02:53:05.263203 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-25 02:53:05.263206 | orchestrator | Wednesday 25 March 2026 02:52:54 +0000 (0:00:00.486) 0:00:32.112 ******* 2026-03-25 02:53:05.263210 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_37f05188-2a00-44e2-a0b8-7549f9da5347) 2026-03-25 02:53:05.263214 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_37f05188-2a00-44e2-a0b8-7549f9da5347) 2026-03-25 02:53:05.263218 | orchestrator | 2026-03-25 02:53:05.263221 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-25 02:53:05.263225 | orchestrator | Wednesday 25 March 2026 02:52:55 +0000 (0:00:00.791) 0:00:32.903 ******* 2026-03-25 02:53:05.263229 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_3e1f7d9f-c106-4693-b0da-d762a5de4a11) 2026-03-25 02:53:05.263233 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_3e1f7d9f-c106-4693-b0da-d762a5de4a11) 2026-03-25 02:53:05.263237 | orchestrator | 2026-03-25 02:53:05.263241 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-25 02:53:05.263244 | orchestrator | Wednesday 25 March 2026 02:52:56 +0000 (0:00:01.077) 0:00:33.981 ******* 2026-03-25 02:53:05.263248 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-03-25 02:53:05.263252 | orchestrator | 2026-03-25 02:53:05.263256 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-25 02:53:05.263259 | orchestrator | Wednesday 25 March 2026 02:52:56 +0000 (0:00:00.408) 0:00:34.389 ******* 2026-03-25 02:53:05.263263 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => 
(item=loop0) 2026-03-25 02:53:05.263268 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2026-03-25 02:53:05.263272 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2026-03-25 02:53:05.263291 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2026-03-25 02:53:05.263295 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2026-03-25 02:53:05.263298 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2026-03-25 02:53:05.263302 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2026-03-25 02:53:05.263306 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2026-03-25 02:53:05.263310 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2026-03-25 02:53:05.263313 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2026-03-25 02:53:05.263317 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2026-03-25 02:53:05.263321 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2026-03-25 02:53:05.263324 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2026-03-25 02:53:05.263328 | orchestrator | 2026-03-25 02:53:05.263332 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-25 02:53:05.263336 | orchestrator | Wednesday 25 March 2026 02:52:57 +0000 (0:00:00.481) 0:00:34.871 ******* 2026-03-25 02:53:05.263339 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:53:05.263343 | orchestrator | 2026-03-25 
02:53:05.263395 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-25 02:53:05.263400 | orchestrator | Wednesday 25 March 2026 02:52:57 +0000 (0:00:00.257) 0:00:35.129 ******* 2026-03-25 02:53:05.263404 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:53:05.263408 | orchestrator | 2026-03-25 02:53:05.263412 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-25 02:53:05.263415 | orchestrator | Wednesday 25 March 2026 02:52:57 +0000 (0:00:00.231) 0:00:35.360 ******* 2026-03-25 02:53:05.263419 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:53:05.263423 | orchestrator | 2026-03-25 02:53:05.263437 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-25 02:53:05.263442 | orchestrator | Wednesday 25 March 2026 02:52:58 +0000 (0:00:00.254) 0:00:35.615 ******* 2026-03-25 02:53:05.263445 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:53:05.263449 | orchestrator | 2026-03-25 02:53:05.263453 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-25 02:53:05.263457 | orchestrator | Wednesday 25 March 2026 02:52:58 +0000 (0:00:00.235) 0:00:35.850 ******* 2026-03-25 02:53:05.263460 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:53:05.263464 | orchestrator | 2026-03-25 02:53:05.263468 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-25 02:53:05.263472 | orchestrator | Wednesday 25 March 2026 02:52:58 +0000 (0:00:00.228) 0:00:36.079 ******* 2026-03-25 02:53:05.263476 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:53:05.263480 | orchestrator | 2026-03-25 02:53:05.263484 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-25 02:53:05.263487 | orchestrator | Wednesday 25 March 2026 02:52:58 +0000 (0:00:00.239) 
0:00:36.318 ******* 2026-03-25 02:53:05.263494 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:53:05.263498 | orchestrator | 2026-03-25 02:53:05.263502 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-25 02:53:05.263506 | orchestrator | Wednesday 25 March 2026 02:52:58 +0000 (0:00:00.235) 0:00:36.554 ******* 2026-03-25 02:53:05.263509 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:53:05.263513 | orchestrator | 2026-03-25 02:53:05.263517 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-25 02:53:05.263521 | orchestrator | Wednesday 25 March 2026 02:52:59 +0000 (0:00:00.770) 0:00:37.324 ******* 2026-03-25 02:53:05.263525 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2026-03-25 02:53:05.263533 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2026-03-25 02:53:05.263537 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2026-03-25 02:53:05.263541 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2026-03-25 02:53:05.263545 | orchestrator | 2026-03-25 02:53:05.263548 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-25 02:53:05.263552 | orchestrator | Wednesday 25 March 2026 02:53:00 +0000 (0:00:00.790) 0:00:38.114 ******* 2026-03-25 02:53:05.263556 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:53:05.263560 | orchestrator | 2026-03-25 02:53:05.263563 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-25 02:53:05.263567 | orchestrator | Wednesday 25 March 2026 02:53:00 +0000 (0:00:00.236) 0:00:38.351 ******* 2026-03-25 02:53:05.263571 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:53:05.263575 | orchestrator | 2026-03-25 02:53:05.263578 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-25 02:53:05.263582 | orchestrator | Wednesday 25 
March 2026 02:53:01 +0000 (0:00:00.266) 0:00:38.617 ******* 2026-03-25 02:53:05.263587 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:53:05.263591 | orchestrator | 2026-03-25 02:53:05.263595 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-25 02:53:05.263599 | orchestrator | Wednesday 25 March 2026 02:53:01 +0000 (0:00:00.232) 0:00:38.849 ******* 2026-03-25 02:53:05.263604 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:53:05.263608 | orchestrator | 2026-03-25 02:53:05.263613 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2026-03-25 02:53:05.263617 | orchestrator | Wednesday 25 March 2026 02:53:01 +0000 (0:00:00.239) 0:00:39.089 ******* 2026-03-25 02:53:05.263621 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:53:05.263626 | orchestrator | 2026-03-25 02:53:05.263630 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-03-25 02:53:05.263635 | orchestrator | Wednesday 25 March 2026 02:53:01 +0000 (0:00:00.172) 0:00:39.261 ******* 2026-03-25 02:53:05.263639 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '82366886-ea97-5dba-b5cd-187414e0593f'}}) 2026-03-25 02:53:05.263644 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'fa1f2bca-96f4-5f59-9dac-c3efdd146138'}}) 2026-03-25 02:53:05.263649 | orchestrator | 2026-03-25 02:53:05.263653 | orchestrator | TASK [Create block VGs] ******************************************************** 2026-03-25 02:53:05.263657 | orchestrator | Wednesday 25 March 2026 02:53:01 +0000 (0:00:00.217) 0:00:39.478 ******* 2026-03-25 02:53:05.263663 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-82366886-ea97-5dba-b5cd-187414e0593f', 'data_vg': 'ceph-82366886-ea97-5dba-b5cd-187414e0593f'}) 2026-03-25 02:53:05.263669 | orchestrator | changed: [testbed-node-4] => 
(item={'data': 'osd-block-fa1f2bca-96f4-5f59-9dac-c3efdd146138', 'data_vg': 'ceph-fa1f2bca-96f4-5f59-9dac-c3efdd146138'}) 2026-03-25 02:53:05.263673 | orchestrator | 2026-03-25 02:53:05.263677 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2026-03-25 02:53:05.263682 | orchestrator | Wednesday 25 March 2026 02:53:03 +0000 (0:00:01.815) 0:00:41.294 ******* 2026-03-25 02:53:05.263686 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-82366886-ea97-5dba-b5cd-187414e0593f', 'data_vg': 'ceph-82366886-ea97-5dba-b5cd-187414e0593f'})  2026-03-25 02:53:05.263692 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-fa1f2bca-96f4-5f59-9dac-c3efdd146138', 'data_vg': 'ceph-fa1f2bca-96f4-5f59-9dac-c3efdd146138'})  2026-03-25 02:53:05.263697 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:53:05.263701 | orchestrator | 2026-03-25 02:53:05.263705 | orchestrator | TASK [Create block LVs] ******************************************************** 2026-03-25 02:53:05.263710 | orchestrator | Wednesday 25 March 2026 02:53:03 +0000 (0:00:00.193) 0:00:41.488 ******* 2026-03-25 02:53:05.263714 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-82366886-ea97-5dba-b5cd-187414e0593f', 'data_vg': 'ceph-82366886-ea97-5dba-b5cd-187414e0593f'}) 2026-03-25 02:53:05.263725 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-fa1f2bca-96f4-5f59-9dac-c3efdd146138', 'data_vg': 'ceph-fa1f2bca-96f4-5f59-9dac-c3efdd146138'}) 2026-03-25 02:53:11.689902 | orchestrator | 2026-03-25 02:53:11.690003 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2026-03-25 02:53:11.690088 | orchestrator | Wednesday 25 March 2026 02:53:05 +0000 (0:00:01.343) 0:00:42.831 ******* 2026-03-25 02:53:11.690102 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-82366886-ea97-5dba-b5cd-187414e0593f', 'data_vg': 
'ceph-82366886-ea97-5dba-b5cd-187414e0593f'})  2026-03-25 02:53:11.690114 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-fa1f2bca-96f4-5f59-9dac-c3efdd146138', 'data_vg': 'ceph-fa1f2bca-96f4-5f59-9dac-c3efdd146138'})  2026-03-25 02:53:11.690124 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:53:11.690135 | orchestrator | 2026-03-25 02:53:11.690162 | orchestrator | TASK [Create DB VGs] *********************************************************** 2026-03-25 02:53:11.690172 | orchestrator | Wednesday 25 March 2026 02:53:05 +0000 (0:00:00.417) 0:00:43.249 ******* 2026-03-25 02:53:11.690182 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:53:11.690192 | orchestrator | 2026-03-25 02:53:11.690201 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2026-03-25 02:53:11.690211 | orchestrator | Wednesday 25 March 2026 02:53:05 +0000 (0:00:00.156) 0:00:43.405 ******* 2026-03-25 02:53:11.690221 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-82366886-ea97-5dba-b5cd-187414e0593f', 'data_vg': 'ceph-82366886-ea97-5dba-b5cd-187414e0593f'})  2026-03-25 02:53:11.690231 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-fa1f2bca-96f4-5f59-9dac-c3efdd146138', 'data_vg': 'ceph-fa1f2bca-96f4-5f59-9dac-c3efdd146138'})  2026-03-25 02:53:11.690240 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:53:11.690250 | orchestrator | 2026-03-25 02:53:11.690260 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2026-03-25 02:53:11.690269 | orchestrator | Wednesday 25 March 2026 02:53:06 +0000 (0:00:00.193) 0:00:43.599 ******* 2026-03-25 02:53:11.690279 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:53:11.690288 | orchestrator | 2026-03-25 02:53:11.690298 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2026-03-25 02:53:11.690307 | orchestrator | 
Wednesday 25 March 2026 02:53:06 +0000 (0:00:00.155) 0:00:43.755 ******* 2026-03-25 02:53:11.690317 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-82366886-ea97-5dba-b5cd-187414e0593f', 'data_vg': 'ceph-82366886-ea97-5dba-b5cd-187414e0593f'})  2026-03-25 02:53:11.690327 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-fa1f2bca-96f4-5f59-9dac-c3efdd146138', 'data_vg': 'ceph-fa1f2bca-96f4-5f59-9dac-c3efdd146138'})  2026-03-25 02:53:11.690336 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:53:11.690347 | orchestrator | 2026-03-25 02:53:11.690446 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2026-03-25 02:53:11.690461 | orchestrator | Wednesday 25 March 2026 02:53:06 +0000 (0:00:00.188) 0:00:43.943 ******* 2026-03-25 02:53:11.690472 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:53:11.690484 | orchestrator | 2026-03-25 02:53:11.690495 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2026-03-25 02:53:11.690505 | orchestrator | Wednesday 25 March 2026 02:53:06 +0000 (0:00:00.149) 0:00:44.093 ******* 2026-03-25 02:53:11.690516 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-82366886-ea97-5dba-b5cd-187414e0593f', 'data_vg': 'ceph-82366886-ea97-5dba-b5cd-187414e0593f'})  2026-03-25 02:53:11.690527 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-fa1f2bca-96f4-5f59-9dac-c3efdd146138', 'data_vg': 'ceph-fa1f2bca-96f4-5f59-9dac-c3efdd146138'})  2026-03-25 02:53:11.690538 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:53:11.690548 | orchestrator | 2026-03-25 02:53:11.690559 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2026-03-25 02:53:11.690618 | orchestrator | Wednesday 25 March 2026 02:53:06 +0000 (0:00:00.171) 0:00:44.264 ******* 2026-03-25 02:53:11.690630 | orchestrator | ok: [testbed-node-4] 
2026-03-25 02:53:11.690642 | orchestrator | 2026-03-25 02:53:11.690653 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2026-03-25 02:53:11.690663 | orchestrator | Wednesday 25 March 2026 02:53:06 +0000 (0:00:00.151) 0:00:44.416 ******* 2026-03-25 02:53:11.690673 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-82366886-ea97-5dba-b5cd-187414e0593f', 'data_vg': 'ceph-82366886-ea97-5dba-b5cd-187414e0593f'})  2026-03-25 02:53:11.690682 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-fa1f2bca-96f4-5f59-9dac-c3efdd146138', 'data_vg': 'ceph-fa1f2bca-96f4-5f59-9dac-c3efdd146138'})  2026-03-25 02:53:11.690692 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:53:11.690701 | orchestrator | 2026-03-25 02:53:11.690711 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2026-03-25 02:53:11.690721 | orchestrator | Wednesday 25 March 2026 02:53:07 +0000 (0:00:00.182) 0:00:44.599 ******* 2026-03-25 02:53:11.690730 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-82366886-ea97-5dba-b5cd-187414e0593f', 'data_vg': 'ceph-82366886-ea97-5dba-b5cd-187414e0593f'})  2026-03-25 02:53:11.690740 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-fa1f2bca-96f4-5f59-9dac-c3efdd146138', 'data_vg': 'ceph-fa1f2bca-96f4-5f59-9dac-c3efdd146138'})  2026-03-25 02:53:11.690749 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:53:11.690759 | orchestrator | 2026-03-25 02:53:11.690769 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-03-25 02:53:11.690797 | orchestrator | Wednesday 25 March 2026 02:53:07 +0000 (0:00:00.204) 0:00:44.803 ******* 2026-03-25 02:53:11.690808 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-82366886-ea97-5dba-b5cd-187414e0593f', 'data_vg': 'ceph-82366886-ea97-5dba-b5cd-187414e0593f'})  2026-03-25 
02:53:11.690818 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-fa1f2bca-96f4-5f59-9dac-c3efdd146138', 'data_vg': 'ceph-fa1f2bca-96f4-5f59-9dac-c3efdd146138'})  2026-03-25 02:53:11.690827 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:53:11.690837 | orchestrator | 2026-03-25 02:53:11.690847 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-03-25 02:53:11.690856 | orchestrator | Wednesday 25 March 2026 02:53:07 +0000 (0:00:00.177) 0:00:44.980 ******* 2026-03-25 02:53:11.690872 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:53:11.690882 | orchestrator | 2026-03-25 02:53:11.690891 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-03-25 02:53:11.690901 | orchestrator | Wednesday 25 March 2026 02:53:07 +0000 (0:00:00.395) 0:00:45.376 ******* 2026-03-25 02:53:11.690910 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:53:11.690920 | orchestrator | 2026-03-25 02:53:11.690929 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2026-03-25 02:53:11.690939 | orchestrator | Wednesday 25 March 2026 02:53:07 +0000 (0:00:00.141) 0:00:45.517 ******* 2026-03-25 02:53:11.690948 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:53:11.690957 | orchestrator | 2026-03-25 02:53:11.690967 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2026-03-25 02:53:11.690976 | orchestrator | Wednesday 25 March 2026 02:53:08 +0000 (0:00:00.152) 0:00:45.670 ******* 2026-03-25 02:53:11.690986 | orchestrator | ok: [testbed-node-4] => { 2026-03-25 02:53:11.690996 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2026-03-25 02:53:11.691005 | orchestrator | } 2026-03-25 02:53:11.691015 | orchestrator | 2026-03-25 02:53:11.691025 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2026-03-25 
02:53:11.691035 | orchestrator | Wednesday 25 March 2026 02:53:08 +0000 (0:00:00.168) 0:00:45.839 ******* 2026-03-25 02:53:11.691044 | orchestrator | ok: [testbed-node-4] => { 2026-03-25 02:53:11.691054 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-03-25 02:53:11.691070 | orchestrator | } 2026-03-25 02:53:11.691080 | orchestrator | 2026-03-25 02:53:11.691090 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-03-25 02:53:11.691099 | orchestrator | Wednesday 25 March 2026 02:53:08 +0000 (0:00:00.152) 0:00:45.992 ******* 2026-03-25 02:53:11.691109 | orchestrator | ok: [testbed-node-4] => { 2026-03-25 02:53:11.691118 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2026-03-25 02:53:11.691128 | orchestrator | } 2026-03-25 02:53:11.691138 | orchestrator | 2026-03-25 02:53:11.691147 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2026-03-25 02:53:11.691157 | orchestrator | Wednesday 25 March 2026 02:53:08 +0000 (0:00:00.187) 0:00:46.179 ******* 2026-03-25 02:53:11.691166 | orchestrator | ok: [testbed-node-4] 2026-03-25 02:53:11.691176 | orchestrator | 2026-03-25 02:53:11.691185 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-03-25 02:53:11.691194 | orchestrator | Wednesday 25 March 2026 02:53:09 +0000 (0:00:00.547) 0:00:46.726 ******* 2026-03-25 02:53:11.691204 | orchestrator | ok: [testbed-node-4] 2026-03-25 02:53:11.691213 | orchestrator | 2026-03-25 02:53:11.691230 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-03-25 02:53:11.691246 | orchestrator | Wednesday 25 March 2026 02:53:09 +0000 (0:00:00.516) 0:00:47.243 ******* 2026-03-25 02:53:11.691261 | orchestrator | ok: [testbed-node-4] 2026-03-25 02:53:11.691276 | orchestrator | 2026-03-25 02:53:11.691290 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] 
************************* 2026-03-25 02:53:11.691305 | orchestrator | Wednesday 25 March 2026 02:53:10 +0000 (0:00:00.531) 0:00:47.775 ******* 2026-03-25 02:53:11.691320 | orchestrator | ok: [testbed-node-4] 2026-03-25 02:53:11.691336 | orchestrator | 2026-03-25 02:53:11.691351 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-03-25 02:53:11.691391 | orchestrator | Wednesday 25 March 2026 02:53:10 +0000 (0:00:00.180) 0:00:47.955 ******* 2026-03-25 02:53:11.691406 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:53:11.691420 | orchestrator | 2026-03-25 02:53:11.691435 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2026-03-25 02:53:11.691450 | orchestrator | Wednesday 25 March 2026 02:53:10 +0000 (0:00:00.132) 0:00:48.087 ******* 2026-03-25 02:53:11.691465 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:53:11.691480 | orchestrator | 2026-03-25 02:53:11.691495 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-03-25 02:53:11.691511 | orchestrator | Wednesday 25 March 2026 02:53:10 +0000 (0:00:00.363) 0:00:48.451 ******* 2026-03-25 02:53:11.691526 | orchestrator | ok: [testbed-node-4] => { 2026-03-25 02:53:11.691541 | orchestrator |  "vgs_report": { 2026-03-25 02:53:11.691557 | orchestrator |  "vg": [] 2026-03-25 02:53:11.691573 | orchestrator |  } 2026-03-25 02:53:11.691590 | orchestrator | } 2026-03-25 02:53:11.691605 | orchestrator | 2026-03-25 02:53:11.691621 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-03-25 02:53:11.691637 | orchestrator | Wednesday 25 March 2026 02:53:11 +0000 (0:00:00.163) 0:00:48.615 ******* 2026-03-25 02:53:11.691652 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:53:11.691669 | orchestrator | 2026-03-25 02:53:11.691686 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] 
************************ 2026-03-25 02:53:11.691702 | orchestrator | Wednesday 25 March 2026 02:53:11 +0000 (0:00:00.166) 0:00:48.781 ******* 2026-03-25 02:53:11.691718 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:53:11.691734 | orchestrator | 2026-03-25 02:53:11.691779 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-03-25 02:53:11.691796 | orchestrator | Wednesday 25 March 2026 02:53:11 +0000 (0:00:00.165) 0:00:48.947 ******* 2026-03-25 02:53:11.691812 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:53:11.691827 | orchestrator | 2026-03-25 02:53:11.691844 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2026-03-25 02:53:11.691860 | orchestrator | Wednesday 25 March 2026 02:53:11 +0000 (0:00:00.157) 0:00:49.105 ******* 2026-03-25 02:53:11.691890 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:53:11.691907 | orchestrator | 2026-03-25 02:53:11.691938 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-03-25 02:53:16.949918 | orchestrator | Wednesday 25 March 2026 02:53:11 +0000 (0:00:00.155) 0:00:49.260 ******* 2026-03-25 02:53:16.950006 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:53:16.950193 | orchestrator | 2026-03-25 02:53:16.950202 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-03-25 02:53:16.950209 | orchestrator | Wednesday 25 March 2026 02:53:11 +0000 (0:00:00.145) 0:00:49.405 ******* 2026-03-25 02:53:16.950215 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:53:16.950221 | orchestrator | 2026-03-25 02:53:16.950228 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2026-03-25 02:53:16.950234 | orchestrator | Wednesday 25 March 2026 02:53:11 +0000 (0:00:00.167) 0:00:49.573 ******* 2026-03-25 02:53:16.950241 | orchestrator | skipping: [testbed-node-4] 
2026-03-25 02:53:16.950247 | orchestrator | 2026-03-25 02:53:16.950268 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-03-25 02:53:16.950275 | orchestrator | Wednesday 25 March 2026 02:53:12 +0000 (0:00:00.151) 0:00:49.725 ******* 2026-03-25 02:53:16.950281 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:53:16.950287 | orchestrator | 2026-03-25 02:53:16.950293 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-03-25 02:53:16.950299 | orchestrator | Wednesday 25 March 2026 02:53:12 +0000 (0:00:00.160) 0:00:49.885 ******* 2026-03-25 02:53:16.950305 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:53:16.950311 | orchestrator | 2026-03-25 02:53:16.950317 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2026-03-25 02:53:16.950323 | orchestrator | Wednesday 25 March 2026 02:53:12 +0000 (0:00:00.169) 0:00:50.055 ******* 2026-03-25 02:53:16.950329 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:53:16.950336 | orchestrator | 2026-03-25 02:53:16.950342 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-03-25 02:53:16.950348 | orchestrator | Wednesday 25 March 2026 02:53:12 +0000 (0:00:00.393) 0:00:50.448 ******* 2026-03-25 02:53:16.950354 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:53:16.950361 | orchestrator | 2026-03-25 02:53:16.950391 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-03-25 02:53:16.950407 | orchestrator | Wednesday 25 March 2026 02:53:13 +0000 (0:00:00.144) 0:00:50.593 ******* 2026-03-25 02:53:16.950418 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:53:16.950428 | orchestrator | 2026-03-25 02:53:16.950439 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-03-25 02:53:16.950450 | orchestrator | 
Wednesday 25 March 2026 02:53:13 +0000 (0:00:00.162) 0:00:50.755 ******* 2026-03-25 02:53:16.950461 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:53:16.950470 | orchestrator | 2026-03-25 02:53:16.950480 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-03-25 02:53:16.950491 | orchestrator | Wednesday 25 March 2026 02:53:13 +0000 (0:00:00.156) 0:00:50.912 ******* 2026-03-25 02:53:16.950501 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:53:16.950512 | orchestrator | 2026-03-25 02:53:16.950523 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-03-25 02:53:16.950534 | orchestrator | Wednesday 25 March 2026 02:53:13 +0000 (0:00:00.147) 0:00:51.059 ******* 2026-03-25 02:53:16.950546 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-82366886-ea97-5dba-b5cd-187414e0593f', 'data_vg': 'ceph-82366886-ea97-5dba-b5cd-187414e0593f'})  2026-03-25 02:53:16.950556 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-fa1f2bca-96f4-5f59-9dac-c3efdd146138', 'data_vg': 'ceph-fa1f2bca-96f4-5f59-9dac-c3efdd146138'})  2026-03-25 02:53:16.950563 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:53:16.950570 | orchestrator | 2026-03-25 02:53:16.950577 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-03-25 02:53:16.950605 | orchestrator | Wednesday 25 March 2026 02:53:13 +0000 (0:00:00.183) 0:00:51.242 ******* 2026-03-25 02:53:16.950614 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-82366886-ea97-5dba-b5cd-187414e0593f', 'data_vg': 'ceph-82366886-ea97-5dba-b5cd-187414e0593f'})  2026-03-25 02:53:16.950625 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-fa1f2bca-96f4-5f59-9dac-c3efdd146138', 'data_vg': 'ceph-fa1f2bca-96f4-5f59-9dac-c3efdd146138'})  2026-03-25 02:53:16.950636 | orchestrator | skipping: 
[testbed-node-4] 2026-03-25 02:53:16.950645 | orchestrator | 2026-03-25 02:53:16.950655 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-03-25 02:53:16.950666 | orchestrator | Wednesday 25 March 2026 02:53:13 +0000 (0:00:00.184) 0:00:51.427 ******* 2026-03-25 02:53:16.950675 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-82366886-ea97-5dba-b5cd-187414e0593f', 'data_vg': 'ceph-82366886-ea97-5dba-b5cd-187414e0593f'})  2026-03-25 02:53:16.950685 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-fa1f2bca-96f4-5f59-9dac-c3efdd146138', 'data_vg': 'ceph-fa1f2bca-96f4-5f59-9dac-c3efdd146138'})  2026-03-25 02:53:16.950694 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:53:16.950704 | orchestrator | 2026-03-25 02:53:16.950713 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2026-03-25 02:53:16.950723 | orchestrator | Wednesday 25 March 2026 02:53:14 +0000 (0:00:00.194) 0:00:51.622 ******* 2026-03-25 02:53:16.950733 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-82366886-ea97-5dba-b5cd-187414e0593f', 'data_vg': 'ceph-82366886-ea97-5dba-b5cd-187414e0593f'})  2026-03-25 02:53:16.950743 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-fa1f2bca-96f4-5f59-9dac-c3efdd146138', 'data_vg': 'ceph-fa1f2bca-96f4-5f59-9dac-c3efdd146138'})  2026-03-25 02:53:16.950751 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:53:16.950761 | orchestrator | 2026-03-25 02:53:16.950792 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-03-25 02:53:16.950803 | orchestrator | Wednesday 25 March 2026 02:53:14 +0000 (0:00:00.187) 0:00:51.809 ******* 2026-03-25 02:53:16.950814 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-82366886-ea97-5dba-b5cd-187414e0593f', 'data_vg': 
'ceph-82366886-ea97-5dba-b5cd-187414e0593f'})  2026-03-25 02:53:16.950824 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-fa1f2bca-96f4-5f59-9dac-c3efdd146138', 'data_vg': 'ceph-fa1f2bca-96f4-5f59-9dac-c3efdd146138'})  2026-03-25 02:53:16.950834 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:53:16.950842 | orchestrator | 2026-03-25 02:53:16.950861 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-03-25 02:53:16.950870 | orchestrator | Wednesday 25 March 2026 02:53:14 +0000 (0:00:00.180) 0:00:51.990 ******* 2026-03-25 02:53:16.950879 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-82366886-ea97-5dba-b5cd-187414e0593f', 'data_vg': 'ceph-82366886-ea97-5dba-b5cd-187414e0593f'})  2026-03-25 02:53:16.950889 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-fa1f2bca-96f4-5f59-9dac-c3efdd146138', 'data_vg': 'ceph-fa1f2bca-96f4-5f59-9dac-c3efdd146138'})  2026-03-25 02:53:16.950898 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:53:16.950908 | orchestrator | 2026-03-25 02:53:16.950918 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2026-03-25 02:53:16.950929 | orchestrator | Wednesday 25 March 2026 02:53:14 +0000 (0:00:00.190) 0:00:52.180 ******* 2026-03-25 02:53:16.950939 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-82366886-ea97-5dba-b5cd-187414e0593f', 'data_vg': 'ceph-82366886-ea97-5dba-b5cd-187414e0593f'})  2026-03-25 02:53:16.950948 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-fa1f2bca-96f4-5f59-9dac-c3efdd146138', 'data_vg': 'ceph-fa1f2bca-96f4-5f59-9dac-c3efdd146138'})  2026-03-25 02:53:16.950958 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:53:16.950979 | orchestrator | 2026-03-25 02:53:16.950990 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-03-25 
02:53:16.951000 | orchestrator | Wednesday 25 March 2026 02:53:15 +0000 (0:00:00.432) 0:00:52.613 ******* 2026-03-25 02:53:16.951009 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-82366886-ea97-5dba-b5cd-187414e0593f', 'data_vg': 'ceph-82366886-ea97-5dba-b5cd-187414e0593f'})  2026-03-25 02:53:16.951017 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-fa1f2bca-96f4-5f59-9dac-c3efdd146138', 'data_vg': 'ceph-fa1f2bca-96f4-5f59-9dac-c3efdd146138'})  2026-03-25 02:53:16.951026 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:53:16.951037 | orchestrator | 2026-03-25 02:53:16.951046 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-03-25 02:53:16.951055 | orchestrator | Wednesday 25 March 2026 02:53:15 +0000 (0:00:00.173) 0:00:52.787 ******* 2026-03-25 02:53:16.951064 | orchestrator | ok: [testbed-node-4] 2026-03-25 02:53:16.951076 | orchestrator | 2026-03-25 02:53:16.951086 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-03-25 02:53:16.951096 | orchestrator | Wednesday 25 March 2026 02:53:15 +0000 (0:00:00.500) 0:00:53.288 ******* 2026-03-25 02:53:16.951107 | orchestrator | ok: [testbed-node-4] 2026-03-25 02:53:16.951118 | orchestrator | 2026-03-25 02:53:16.951129 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-03-25 02:53:16.951139 | orchestrator | Wednesday 25 March 2026 02:53:16 +0000 (0:00:00.518) 0:00:53.806 ******* 2026-03-25 02:53:16.951149 | orchestrator | ok: [testbed-node-4] 2026-03-25 02:53:16.951159 | orchestrator | 2026-03-25 02:53:16.951170 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-03-25 02:53:16.951180 | orchestrator | Wednesday 25 March 2026 02:53:16 +0000 (0:00:00.162) 0:00:53.969 ******* 2026-03-25 02:53:16.951190 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 
'osd-block-82366886-ea97-5dba-b5cd-187414e0593f', 'vg_name': 'ceph-82366886-ea97-5dba-b5cd-187414e0593f'}) 2026-03-25 02:53:16.951202 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-fa1f2bca-96f4-5f59-9dac-c3efdd146138', 'vg_name': 'ceph-fa1f2bca-96f4-5f59-9dac-c3efdd146138'}) 2026-03-25 02:53:16.951214 | orchestrator | 2026-03-25 02:53:16.951225 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-03-25 02:53:16.951236 | orchestrator | Wednesday 25 March 2026 02:53:16 +0000 (0:00:00.212) 0:00:54.181 ******* 2026-03-25 02:53:16.951247 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-82366886-ea97-5dba-b5cd-187414e0593f', 'data_vg': 'ceph-82366886-ea97-5dba-b5cd-187414e0593f'})  2026-03-25 02:53:16.951258 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-fa1f2bca-96f4-5f59-9dac-c3efdd146138', 'data_vg': 'ceph-fa1f2bca-96f4-5f59-9dac-c3efdd146138'})  2026-03-25 02:53:16.951270 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:53:16.951281 | orchestrator | 2026-03-25 02:53:16.951293 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2026-03-25 02:53:16.951305 | orchestrator | Wednesday 25 March 2026 02:53:16 +0000 (0:00:00.171) 0:00:54.353 ******* 2026-03-25 02:53:16.951317 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-82366886-ea97-5dba-b5cd-187414e0593f', 'data_vg': 'ceph-82366886-ea97-5dba-b5cd-187414e0593f'})  2026-03-25 02:53:16.951341 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-fa1f2bca-96f4-5f59-9dac-c3efdd146138', 'data_vg': 'ceph-fa1f2bca-96f4-5f59-9dac-c3efdd146138'})  2026-03-25 02:53:24.383070 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:53:24.383153 | orchestrator | 2026-03-25 02:53:24.383162 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-03-25 02:53:24.383169 | 
orchestrator | Wednesday 25 March 2026 02:53:16 +0000 (0:00:00.170) 0:00:54.523 ******* 2026-03-25 02:53:24.383174 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-82366886-ea97-5dba-b5cd-187414e0593f', 'data_vg': 'ceph-82366886-ea97-5dba-b5cd-187414e0593f'})  2026-03-25 02:53:24.383210 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-fa1f2bca-96f4-5f59-9dac-c3efdd146138', 'data_vg': 'ceph-fa1f2bca-96f4-5f59-9dac-c3efdd146138'})  2026-03-25 02:53:24.383215 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:53:24.383220 | orchestrator | 2026-03-25 02:53:24.383225 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-03-25 02:53:24.383229 | orchestrator | Wednesday 25 March 2026 02:53:17 +0000 (0:00:00.172) 0:00:54.696 ******* 2026-03-25 02:53:24.383234 | orchestrator | ok: [testbed-node-4] => { 2026-03-25 02:53:24.383238 | orchestrator |  "lvm_report": { 2026-03-25 02:53:24.383244 | orchestrator |  "lv": [ 2026-03-25 02:53:24.383249 | orchestrator |  { 2026-03-25 02:53:24.383254 | orchestrator |  "lv_name": "osd-block-82366886-ea97-5dba-b5cd-187414e0593f", 2026-03-25 02:53:24.383259 | orchestrator |  "vg_name": "ceph-82366886-ea97-5dba-b5cd-187414e0593f" 2026-03-25 02:53:24.383264 | orchestrator |  }, 2026-03-25 02:53:24.383268 | orchestrator |  { 2026-03-25 02:53:24.383273 | orchestrator |  "lv_name": "osd-block-fa1f2bca-96f4-5f59-9dac-c3efdd146138", 2026-03-25 02:53:24.383277 | orchestrator |  "vg_name": "ceph-fa1f2bca-96f4-5f59-9dac-c3efdd146138" 2026-03-25 02:53:24.383282 | orchestrator |  } 2026-03-25 02:53:24.383286 | orchestrator |  ], 2026-03-25 02:53:24.383291 | orchestrator |  "pv": [ 2026-03-25 02:53:24.383295 | orchestrator |  { 2026-03-25 02:53:24.383300 | orchestrator |  "pv_name": "/dev/sdb", 2026-03-25 02:53:24.383304 | orchestrator |  "vg_name": "ceph-82366886-ea97-5dba-b5cd-187414e0593f" 2026-03-25 02:53:24.383309 | orchestrator |  }, 2026-03-25 
02:53:24.383314 | orchestrator |  { 2026-03-25 02:53:24.383319 | orchestrator |  "pv_name": "/dev/sdc", 2026-03-25 02:53:24.383323 | orchestrator |  "vg_name": "ceph-fa1f2bca-96f4-5f59-9dac-c3efdd146138" 2026-03-25 02:53:24.383328 | orchestrator |  } 2026-03-25 02:53:24.383332 | orchestrator |  ] 2026-03-25 02:53:24.383336 | orchestrator |  } 2026-03-25 02:53:24.383341 | orchestrator | } 2026-03-25 02:53:24.383346 | orchestrator | 2026-03-25 02:53:24.383351 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-03-25 02:53:24.383355 | orchestrator | 2026-03-25 02:53:24.383360 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-03-25 02:53:24.383364 | orchestrator | Wednesday 25 March 2026 02:53:17 +0000 (0:00:00.323) 0:00:55.020 ******* 2026-03-25 02:53:24.383369 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-03-25 02:53:24.383428 | orchestrator | 2026-03-25 02:53:24.383434 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-03-25 02:53:24.383439 | orchestrator | Wednesday 25 March 2026 02:53:18 +0000 (0:00:00.828) 0:00:55.848 ******* 2026-03-25 02:53:24.383444 | orchestrator | ok: [testbed-node-5] 2026-03-25 02:53:24.383448 | orchestrator | 2026-03-25 02:53:24.383453 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-25 02:53:24.383457 | orchestrator | Wednesday 25 March 2026 02:53:18 +0000 (0:00:00.271) 0:00:56.119 ******* 2026-03-25 02:53:24.383462 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2026-03-25 02:53:24.383467 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2026-03-25 02:53:24.383471 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2026-03-25 02:53:24.383476 | 
orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2026-03-25 02:53:24.383480 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2026-03-25 02:53:24.383485 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2026-03-25 02:53:24.383489 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2026-03-25 02:53:24.383500 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2026-03-25 02:53:24.383505 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2026-03-25 02:53:24.383509 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2026-03-25 02:53:24.383514 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2026-03-25 02:53:24.383518 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2026-03-25 02:53:24.383523 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2026-03-25 02:53:24.383527 | orchestrator | 2026-03-25 02:53:24.383531 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-25 02:53:24.383536 | orchestrator | Wednesday 25 March 2026 02:53:19 +0000 (0:00:00.501) 0:00:56.621 ******* 2026-03-25 02:53:24.383540 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:53:24.383545 | orchestrator | 2026-03-25 02:53:24.383549 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-25 02:53:24.383554 | orchestrator | Wednesday 25 March 2026 02:53:19 +0000 (0:00:00.247) 0:00:56.869 ******* 2026-03-25 02:53:24.383558 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:53:24.383562 | orchestrator | 2026-03-25 
02:53:24.383567 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-25 02:53:24.383583 | orchestrator | Wednesday 25 March 2026 02:53:19 +0000 (0:00:00.231) 0:00:57.100 ******* 2026-03-25 02:53:24.383588 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:53:24.383592 | orchestrator | 2026-03-25 02:53:24.383597 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-25 02:53:24.383602 | orchestrator | Wednesday 25 March 2026 02:53:19 +0000 (0:00:00.230) 0:00:57.330 ******* 2026-03-25 02:53:24.383607 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:53:24.383612 | orchestrator | 2026-03-25 02:53:24.383617 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-25 02:53:24.383623 | orchestrator | Wednesday 25 March 2026 02:53:19 +0000 (0:00:00.237) 0:00:57.568 ******* 2026-03-25 02:53:24.383628 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:53:24.383633 | orchestrator | 2026-03-25 02:53:24.383638 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-25 02:53:24.383643 | orchestrator | Wednesday 25 March 2026 02:53:20 +0000 (0:00:00.233) 0:00:57.802 ******* 2026-03-25 02:53:24.383649 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:53:24.383654 | orchestrator | 2026-03-25 02:53:24.383659 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-25 02:53:24.383664 | orchestrator | Wednesday 25 March 2026 02:53:20 +0000 (0:00:00.235) 0:00:58.038 ******* 2026-03-25 02:53:24.383669 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:53:24.383674 | orchestrator | 2026-03-25 02:53:24.383679 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-25 02:53:24.383684 | orchestrator | Wednesday 25 March 2026 02:53:20 +0000 (0:00:00.232) 
0:00:58.270 ******* 2026-03-25 02:53:24.383689 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:53:24.383694 | orchestrator | 2026-03-25 02:53:24.383700 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-25 02:53:24.383705 | orchestrator | Wednesday 25 March 2026 02:53:21 +0000 (0:00:00.762) 0:00:59.033 ******* 2026-03-25 02:53:24.383710 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_0ceb4511-88da-4cb0-8dd1-61d4a7cc2ad2) 2026-03-25 02:53:24.383716 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_0ceb4511-88da-4cb0-8dd1-61d4a7cc2ad2) 2026-03-25 02:53:24.383721 | orchestrator | 2026-03-25 02:53:24.383727 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-25 02:53:24.383732 | orchestrator | Wednesday 25 March 2026 02:53:21 +0000 (0:00:00.494) 0:00:59.528 ******* 2026-03-25 02:53:24.383790 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_04cbe055-706b-4644-9107-d77d79be5a29) 2026-03-25 02:53:24.383804 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_04cbe055-706b-4644-9107-d77d79be5a29) 2026-03-25 02:53:24.383810 | orchestrator | 2026-03-25 02:53:24.383815 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-25 02:53:24.383820 | orchestrator | Wednesday 25 March 2026 02:53:22 +0000 (0:00:00.495) 0:01:00.023 ******* 2026-03-25 02:53:24.383825 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_fd5367dc-993e-4d7d-b2a6-757e2a17e9b7) 2026-03-25 02:53:24.383831 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_fd5367dc-993e-4d7d-b2a6-757e2a17e9b7) 2026-03-25 02:53:24.383836 | orchestrator | 2026-03-25 02:53:24.383842 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-25 02:53:24.383846 | orchestrator | Wednesday 25 
March 2026 02:53:22 +0000 (0:00:00.531) 0:01:00.555 ******* 2026-03-25 02:53:24.383851 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_82545a3e-e213-461e-98f1-90cf18f03519) 2026-03-25 02:53:24.383855 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_82545a3e-e213-461e-98f1-90cf18f03519) 2026-03-25 02:53:24.383860 | orchestrator | 2026-03-25 02:53:24.383865 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-25 02:53:24.383869 | orchestrator | Wednesday 25 March 2026 02:53:23 +0000 (0:00:00.479) 0:01:01.035 ******* 2026-03-25 02:53:24.383874 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-03-25 02:53:24.383878 | orchestrator | 2026-03-25 02:53:24.383883 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-25 02:53:24.383888 | orchestrator | Wednesday 25 March 2026 02:53:23 +0000 (0:00:00.389) 0:01:01.424 ******* 2026-03-25 02:53:24.383892 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2026-03-25 02:53:24.383897 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2026-03-25 02:53:24.383901 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2026-03-25 02:53:24.383906 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2026-03-25 02:53:24.383910 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2026-03-25 02:53:24.383915 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2026-03-25 02:53:24.383919 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2026-03-25 02:53:24.383924 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2026-03-25 02:53:24.383928 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2026-03-25 02:53:24.383933 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2026-03-25 02:53:24.383937 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2026-03-25 02:53:24.383947 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2026-03-25 02:53:34.019656 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2026-03-25 02:53:34.019737 | orchestrator | 2026-03-25 02:53:34.019744 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-25 02:53:34.019750 | orchestrator | Wednesday 25 March 2026 02:53:24 +0000 (0:00:00.525) 0:01:01.950 ******* 2026-03-25 02:53:34.019754 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:53:34.019759 | orchestrator | 2026-03-25 02:53:34.019763 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-25 02:53:34.019779 | orchestrator | Wednesday 25 March 2026 02:53:24 +0000 (0:00:00.228) 0:01:02.179 ******* 2026-03-25 02:53:34.019783 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:53:34.019803 | orchestrator | 2026-03-25 02:53:34.019807 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-25 02:53:34.019811 | orchestrator | Wednesday 25 March 2026 02:53:24 +0000 (0:00:00.267) 0:01:02.446 ******* 2026-03-25 02:53:34.019815 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:53:34.019819 | orchestrator | 2026-03-25 02:53:34.019823 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-25 02:53:34.019827 | 
orchestrator | Wednesday 25 March 2026 02:53:25 +0000 (0:00:00.250) 0:01:02.697 ******* 2026-03-25 02:53:34.019831 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:53:34.019835 | orchestrator | 2026-03-25 02:53:34.019838 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-25 02:53:34.019842 | orchestrator | Wednesday 25 March 2026 02:53:25 +0000 (0:00:00.223) 0:01:02.921 ******* 2026-03-25 02:53:34.019846 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:53:34.019850 | orchestrator | 2026-03-25 02:53:34.019854 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-25 02:53:34.019858 | orchestrator | Wednesday 25 March 2026 02:53:26 +0000 (0:00:00.760) 0:01:03.682 ******* 2026-03-25 02:53:34.019862 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:53:34.019866 | orchestrator | 2026-03-25 02:53:34.019869 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-25 02:53:34.019873 | orchestrator | Wednesday 25 March 2026 02:53:26 +0000 (0:00:00.240) 0:01:03.922 ******* 2026-03-25 02:53:34.019877 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:53:34.019881 | orchestrator | 2026-03-25 02:53:34.019885 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-25 02:53:34.019889 | orchestrator | Wednesday 25 March 2026 02:53:26 +0000 (0:00:00.240) 0:01:04.163 ******* 2026-03-25 02:53:34.019893 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:53:34.019897 | orchestrator | 2026-03-25 02:53:34.019901 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-25 02:53:34.019905 | orchestrator | Wednesday 25 March 2026 02:53:26 +0000 (0:00:00.238) 0:01:04.401 ******* 2026-03-25 02:53:34.019909 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2026-03-25 02:53:34.019914 | orchestrator | 
ok: [testbed-node-5] => (item=sda14) 2026-03-25 02:53:34.019918 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2026-03-25 02:53:34.019922 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2026-03-25 02:53:34.019926 | orchestrator | 2026-03-25 02:53:34.019930 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-25 02:53:34.019934 | orchestrator | Wednesday 25 March 2026 02:53:27 +0000 (0:00:00.763) 0:01:05.164 ******* 2026-03-25 02:53:34.019937 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:53:34.019941 | orchestrator | 2026-03-25 02:53:34.019945 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-25 02:53:34.019949 | orchestrator | Wednesday 25 March 2026 02:53:27 +0000 (0:00:00.238) 0:01:05.402 ******* 2026-03-25 02:53:34.019953 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:53:34.019957 | orchestrator | 2026-03-25 02:53:34.019961 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-25 02:53:34.019964 | orchestrator | Wednesday 25 March 2026 02:53:28 +0000 (0:00:00.230) 0:01:05.633 ******* 2026-03-25 02:53:34.019968 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:53:34.019972 | orchestrator | 2026-03-25 02:53:34.019976 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-25 02:53:34.019980 | orchestrator | Wednesday 25 March 2026 02:53:28 +0000 (0:00:00.246) 0:01:05.880 ******* 2026-03-25 02:53:34.019984 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:53:34.019988 | orchestrator | 2026-03-25 02:53:34.019992 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2026-03-25 02:53:34.019995 | orchestrator | Wednesday 25 March 2026 02:53:28 +0000 (0:00:00.231) 0:01:06.111 ******* 2026-03-25 02:53:34.019999 | orchestrator | skipping: [testbed-node-5] 2026-03-25 
02:53:34.020003 | orchestrator | 2026-03-25 02:53:34.020011 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-03-25 02:53:34.020015 | orchestrator | Wednesday 25 March 2026 02:53:28 +0000 (0:00:00.155) 0:01:06.266 ******* 2026-03-25 02:53:34.020020 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'f303e98e-56ea-50bc-9e1c-3ccda4672060'}}) 2026-03-25 02:53:34.020025 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '8ec576d5-4336-523a-896e-5358117b2269'}}) 2026-03-25 02:53:34.020029 | orchestrator | 2026-03-25 02:53:34.020033 | orchestrator | TASK [Create block VGs] ******************************************************** 2026-03-25 02:53:34.020037 | orchestrator | Wednesday 25 March 2026 02:53:28 +0000 (0:00:00.218) 0:01:06.484 ******* 2026-03-25 02:53:34.020042 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-f303e98e-56ea-50bc-9e1c-3ccda4672060', 'data_vg': 'ceph-f303e98e-56ea-50bc-9e1c-3ccda4672060'}) 2026-03-25 02:53:34.020047 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-8ec576d5-4336-523a-896e-5358117b2269', 'data_vg': 'ceph-8ec576d5-4336-523a-896e-5358117b2269'}) 2026-03-25 02:53:34.020051 | orchestrator | 2026-03-25 02:53:34.020055 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2026-03-25 02:53:34.020070 | orchestrator | Wednesday 25 March 2026 02:53:30 +0000 (0:00:01.883) 0:01:08.368 ******* 2026-03-25 02:53:34.020074 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f303e98e-56ea-50bc-9e1c-3ccda4672060', 'data_vg': 'ceph-f303e98e-56ea-50bc-9e1c-3ccda4672060'})  2026-03-25 02:53:34.020079 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8ec576d5-4336-523a-896e-5358117b2269', 'data_vg': 'ceph-8ec576d5-4336-523a-896e-5358117b2269'})  2026-03-25 02:53:34.020083 | orchestrator | skipping: 
[testbed-node-5] 2026-03-25 02:53:34.020087 | orchestrator | 2026-03-25 02:53:34.020094 | orchestrator | TASK [Create block LVs] ******************************************************** 2026-03-25 02:53:34.020098 | orchestrator | Wednesday 25 March 2026 02:53:31 +0000 (0:00:00.424) 0:01:08.792 ******* 2026-03-25 02:53:34.020102 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-f303e98e-56ea-50bc-9e1c-3ccda4672060', 'data_vg': 'ceph-f303e98e-56ea-50bc-9e1c-3ccda4672060'}) 2026-03-25 02:53:34.020106 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-8ec576d5-4336-523a-896e-5358117b2269', 'data_vg': 'ceph-8ec576d5-4336-523a-896e-5358117b2269'}) 2026-03-25 02:53:34.020110 | orchestrator | 2026-03-25 02:53:34.020114 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2026-03-25 02:53:34.020126 | orchestrator | Wednesday 25 March 2026 02:53:32 +0000 (0:00:01.334) 0:01:10.127 ******* 2026-03-25 02:53:34.020130 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f303e98e-56ea-50bc-9e1c-3ccda4672060', 'data_vg': 'ceph-f303e98e-56ea-50bc-9e1c-3ccda4672060'})  2026-03-25 02:53:34.020135 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8ec576d5-4336-523a-896e-5358117b2269', 'data_vg': 'ceph-8ec576d5-4336-523a-896e-5358117b2269'})  2026-03-25 02:53:34.020139 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:53:34.020142 | orchestrator | 2026-03-25 02:53:34.020146 | orchestrator | TASK [Create DB VGs] *********************************************************** 2026-03-25 02:53:34.020150 | orchestrator | Wednesday 25 March 2026 02:53:32 +0000 (0:00:00.179) 0:01:10.307 ******* 2026-03-25 02:53:34.020154 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:53:34.020158 | orchestrator | 2026-03-25 02:53:34.020162 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2026-03-25 02:53:34.020166 | 
orchestrator | Wednesday 25 March 2026 02:53:32 +0000 (0:00:00.148) 0:01:10.455 ******* 2026-03-25 02:53:34.020170 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f303e98e-56ea-50bc-9e1c-3ccda4672060', 'data_vg': 'ceph-f303e98e-56ea-50bc-9e1c-3ccda4672060'})  2026-03-25 02:53:34.020174 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8ec576d5-4336-523a-896e-5358117b2269', 'data_vg': 'ceph-8ec576d5-4336-523a-896e-5358117b2269'})  2026-03-25 02:53:34.020181 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:53:34.020185 | orchestrator | 2026-03-25 02:53:34.020189 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2026-03-25 02:53:34.020193 | orchestrator | Wednesday 25 March 2026 02:53:33 +0000 (0:00:00.173) 0:01:10.629 ******* 2026-03-25 02:53:34.020197 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:53:34.020201 | orchestrator | 2026-03-25 02:53:34.020205 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2026-03-25 02:53:34.020209 | orchestrator | Wednesday 25 March 2026 02:53:33 +0000 (0:00:00.138) 0:01:10.767 ******* 2026-03-25 02:53:34.020213 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f303e98e-56ea-50bc-9e1c-3ccda4672060', 'data_vg': 'ceph-f303e98e-56ea-50bc-9e1c-3ccda4672060'})  2026-03-25 02:53:34.020217 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8ec576d5-4336-523a-896e-5358117b2269', 'data_vg': 'ceph-8ec576d5-4336-523a-896e-5358117b2269'})  2026-03-25 02:53:34.020221 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:53:34.020225 | orchestrator | 2026-03-25 02:53:34.020229 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2026-03-25 02:53:34.020233 | orchestrator | Wednesday 25 March 2026 02:53:33 +0000 (0:00:00.170) 0:01:10.938 ******* 2026-03-25 02:53:34.020236 | orchestrator | 
skipping: [testbed-node-5] 2026-03-25 02:53:34.020240 | orchestrator | 2026-03-25 02:53:34.020245 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2026-03-25 02:53:34.020251 | orchestrator | Wednesday 25 March 2026 02:53:33 +0000 (0:00:00.153) 0:01:11.092 ******* 2026-03-25 02:53:34.020258 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f303e98e-56ea-50bc-9e1c-3ccda4672060', 'data_vg': 'ceph-f303e98e-56ea-50bc-9e1c-3ccda4672060'})  2026-03-25 02:53:34.020264 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8ec576d5-4336-523a-896e-5358117b2269', 'data_vg': 'ceph-8ec576d5-4336-523a-896e-5358117b2269'})  2026-03-25 02:53:34.020271 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:53:34.020280 | orchestrator | 2026-03-25 02:53:34.020288 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2026-03-25 02:53:34.020294 | orchestrator | Wednesday 25 March 2026 02:53:33 +0000 (0:00:00.192) 0:01:11.284 ******* 2026-03-25 02:53:34.020300 | orchestrator | ok: [testbed-node-5] 2026-03-25 02:53:34.020306 | orchestrator | 2026-03-25 02:53:34.020312 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2026-03-25 02:53:34.020319 | orchestrator | Wednesday 25 March 2026 02:53:33 +0000 (0:00:00.146) 0:01:11.431 ******* 2026-03-25 02:53:34.020330 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f303e98e-56ea-50bc-9e1c-3ccda4672060', 'data_vg': 'ceph-f303e98e-56ea-50bc-9e1c-3ccda4672060'})  2026-03-25 02:53:41.051331 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8ec576d5-4336-523a-896e-5358117b2269', 'data_vg': 'ceph-8ec576d5-4336-523a-896e-5358117b2269'})  2026-03-25 02:53:41.051486 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:53:41.051499 | orchestrator | 2026-03-25 02:53:41.051506 | orchestrator | TASK [Count OSDs put on 
ceph_wal_devices defined in lvm_volumes] *************** 2026-03-25 02:53:41.051513 | orchestrator | Wednesday 25 March 2026 02:53:34 +0000 (0:00:00.163) 0:01:11.594 ******* 2026-03-25 02:53:41.051532 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f303e98e-56ea-50bc-9e1c-3ccda4672060', 'data_vg': 'ceph-f303e98e-56ea-50bc-9e1c-3ccda4672060'})  2026-03-25 02:53:41.051539 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8ec576d5-4336-523a-896e-5358117b2269', 'data_vg': 'ceph-8ec576d5-4336-523a-896e-5358117b2269'})  2026-03-25 02:53:41.051548 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:53:41.051557 | orchestrator | 2026-03-25 02:53:41.051564 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-03-25 02:53:41.051572 | orchestrator | Wednesday 25 March 2026 02:53:34 +0000 (0:00:00.162) 0:01:11.756 ******* 2026-03-25 02:53:41.051604 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f303e98e-56ea-50bc-9e1c-3ccda4672060', 'data_vg': 'ceph-f303e98e-56ea-50bc-9e1c-3ccda4672060'})  2026-03-25 02:53:41.051609 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8ec576d5-4336-523a-896e-5358117b2269', 'data_vg': 'ceph-8ec576d5-4336-523a-896e-5358117b2269'})  2026-03-25 02:53:41.051615 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:53:41.051620 | orchestrator | 2026-03-25 02:53:41.051625 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-03-25 02:53:41.051630 | orchestrator | Wednesday 25 March 2026 02:53:34 +0000 (0:00:00.487) 0:01:12.244 ******* 2026-03-25 02:53:41.051635 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:53:41.051640 | orchestrator | 2026-03-25 02:53:41.051645 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-03-25 02:53:41.051650 | orchestrator | Wednesday 25 March 2026 02:53:34 +0000 
(0:00:00.163) 0:01:12.408 ******* 2026-03-25 02:53:41.051655 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:53:41.051661 | orchestrator | 2026-03-25 02:53:41.051667 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2026-03-25 02:53:41.051672 | orchestrator | Wednesday 25 March 2026 02:53:34 +0000 (0:00:00.156) 0:01:12.565 ******* 2026-03-25 02:53:41.051677 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:53:41.051682 | orchestrator | 2026-03-25 02:53:41.051687 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2026-03-25 02:53:41.051692 | orchestrator | Wednesday 25 March 2026 02:53:35 +0000 (0:00:00.151) 0:01:12.717 ******* 2026-03-25 02:53:41.051697 | orchestrator | ok: [testbed-node-5] => { 2026-03-25 02:53:41.051703 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2026-03-25 02:53:41.051708 | orchestrator | } 2026-03-25 02:53:41.051714 | orchestrator | 2026-03-25 02:53:41.051719 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2026-03-25 02:53:41.051724 | orchestrator | Wednesday 25 March 2026 02:53:35 +0000 (0:00:00.174) 0:01:12.891 ******* 2026-03-25 02:53:41.051729 | orchestrator | ok: [testbed-node-5] => { 2026-03-25 02:53:41.051734 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-03-25 02:53:41.051739 | orchestrator | } 2026-03-25 02:53:41.051744 | orchestrator | 2026-03-25 02:53:41.051749 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-03-25 02:53:41.051754 | orchestrator | Wednesday 25 March 2026 02:53:35 +0000 (0:00:00.148) 0:01:13.040 ******* 2026-03-25 02:53:41.051759 | orchestrator | ok: [testbed-node-5] => { 2026-03-25 02:53:41.051764 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2026-03-25 02:53:41.051770 | orchestrator | } 2026-03-25 02:53:41.051775 | orchestrator | 2026-03-25 02:53:41.051780 | orchestrator | TASK 
[Gather DB VGs with total and available size in bytes] ******************** 2026-03-25 02:53:41.051785 | orchestrator | Wednesday 25 March 2026 02:53:35 +0000 (0:00:00.161) 0:01:13.202 ******* 2026-03-25 02:53:41.051790 | orchestrator | ok: [testbed-node-5] 2026-03-25 02:53:41.051795 | orchestrator | 2026-03-25 02:53:41.051800 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-03-25 02:53:41.051805 | orchestrator | Wednesday 25 March 2026 02:53:36 +0000 (0:00:00.503) 0:01:13.705 ******* 2026-03-25 02:53:41.051810 | orchestrator | ok: [testbed-node-5] 2026-03-25 02:53:41.051818 | orchestrator | 2026-03-25 02:53:41.051826 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-03-25 02:53:41.051834 | orchestrator | Wednesday 25 March 2026 02:53:36 +0000 (0:00:00.550) 0:01:14.256 ******* 2026-03-25 02:53:41.051843 | orchestrator | ok: [testbed-node-5] 2026-03-25 02:53:41.051850 | orchestrator | 2026-03-25 02:53:41.051858 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2026-03-25 02:53:41.051866 | orchestrator | Wednesday 25 March 2026 02:53:37 +0000 (0:00:00.563) 0:01:14.819 ******* 2026-03-25 02:53:41.051874 | orchestrator | ok: [testbed-node-5] 2026-03-25 02:53:41.051883 | orchestrator | 2026-03-25 02:53:41.051890 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-03-25 02:53:41.051915 | orchestrator | Wednesday 25 March 2026 02:53:37 +0000 (0:00:00.171) 0:01:14.990 ******* 2026-03-25 02:53:41.051923 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:53:41.051932 | orchestrator | 2026-03-25 02:53:41.051941 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2026-03-25 02:53:41.051950 | orchestrator | Wednesday 25 March 2026 02:53:37 +0000 (0:00:00.131) 0:01:15.122 ******* 2026-03-25 02:53:41.051958 | orchestrator | 
skipping: [testbed-node-5] 2026-03-25 02:53:41.051966 | orchestrator | 2026-03-25 02:53:41.051975 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-03-25 02:53:41.051983 | orchestrator | Wednesday 25 March 2026 02:53:37 +0000 (0:00:00.401) 0:01:15.523 ******* 2026-03-25 02:53:41.051991 | orchestrator | ok: [testbed-node-5] => { 2026-03-25 02:53:41.052000 | orchestrator |  "vgs_report": { 2026-03-25 02:53:41.052009 | orchestrator |  "vg": [] 2026-03-25 02:53:41.052038 | orchestrator |  } 2026-03-25 02:53:41.052047 | orchestrator | } 2026-03-25 02:53:41.052056 | orchestrator | 2026-03-25 02:53:41.052065 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-03-25 02:53:41.052073 | orchestrator | Wednesday 25 March 2026 02:53:38 +0000 (0:00:00.173) 0:01:15.697 ******* 2026-03-25 02:53:41.052082 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:53:41.052090 | orchestrator | 2026-03-25 02:53:41.052096 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2026-03-25 02:53:41.052102 | orchestrator | Wednesday 25 March 2026 02:53:38 +0000 (0:00:00.150) 0:01:15.847 ******* 2026-03-25 02:53:41.052114 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:53:41.052120 | orchestrator | 2026-03-25 02:53:41.052126 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-03-25 02:53:41.052132 | orchestrator | Wednesday 25 March 2026 02:53:38 +0000 (0:00:00.142) 0:01:15.990 ******* 2026-03-25 02:53:41.052138 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:53:41.052144 | orchestrator | 2026-03-25 02:53:41.052152 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2026-03-25 02:53:41.052160 | orchestrator | Wednesday 25 March 2026 02:53:38 +0000 (0:00:00.152) 0:01:16.142 ******* 2026-03-25 02:53:41.052169 | orchestrator | 
skipping: [testbed-node-5] 2026-03-25 02:53:41.052178 | orchestrator | 2026-03-25 02:53:41.052186 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-03-25 02:53:41.052195 | orchestrator | Wednesday 25 March 2026 02:53:38 +0000 (0:00:00.169) 0:01:16.312 ******* 2026-03-25 02:53:41.052204 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:53:41.052213 | orchestrator | 2026-03-25 02:53:41.052222 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-03-25 02:53:41.052229 | orchestrator | Wednesday 25 March 2026 02:53:38 +0000 (0:00:00.148) 0:01:16.461 ******* 2026-03-25 02:53:41.052235 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:53:41.052241 | orchestrator | 2026-03-25 02:53:41.052247 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2026-03-25 02:53:41.052252 | orchestrator | Wednesday 25 March 2026 02:53:39 +0000 (0:00:00.152) 0:01:16.614 ******* 2026-03-25 02:53:41.052257 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:53:41.052262 | orchestrator | 2026-03-25 02:53:41.052267 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-03-25 02:53:41.052272 | orchestrator | Wednesday 25 March 2026 02:53:39 +0000 (0:00:00.137) 0:01:16.751 ******* 2026-03-25 02:53:41.052277 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:53:41.052282 | orchestrator | 2026-03-25 02:53:41.052288 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-03-25 02:53:41.052294 | orchestrator | Wednesday 25 March 2026 02:53:39 +0000 (0:00:00.150) 0:01:16.902 ******* 2026-03-25 02:53:41.052299 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:53:41.052305 | orchestrator | 2026-03-25 02:53:41.052311 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2026-03-25 
02:53:41.052316 | orchestrator | Wednesday 25 March 2026 02:53:39 +0000 (0:00:00.140) 0:01:17.042 ******* 2026-03-25 02:53:41.052329 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:53:41.052334 | orchestrator | 2026-03-25 02:53:41.052340 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-03-25 02:53:41.052346 | orchestrator | Wednesday 25 March 2026 02:53:39 +0000 (0:00:00.140) 0:01:17.182 ******* 2026-03-25 02:53:41.052352 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:53:41.052358 | orchestrator | 2026-03-25 02:53:41.052368 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-03-25 02:53:41.052376 | orchestrator | Wednesday 25 March 2026 02:53:40 +0000 (0:00:00.410) 0:01:17.593 ******* 2026-03-25 02:53:41.052386 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:53:41.052394 | orchestrator | 2026-03-25 02:53:41.052426 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-03-25 02:53:41.052434 | orchestrator | Wednesday 25 March 2026 02:53:40 +0000 (0:00:00.172) 0:01:17.765 ******* 2026-03-25 02:53:41.052444 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:53:41.052453 | orchestrator | 2026-03-25 02:53:41.052462 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-03-25 02:53:41.052472 | orchestrator | Wednesday 25 March 2026 02:53:40 +0000 (0:00:00.159) 0:01:17.925 ******* 2026-03-25 02:53:41.052481 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:53:41.052491 | orchestrator | 2026-03-25 02:53:41.052500 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-03-25 02:53:41.052509 | orchestrator | Wednesday 25 March 2026 02:53:40 +0000 (0:00:00.162) 0:01:18.088 ******* 2026-03-25 02:53:41.052519 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-f303e98e-56ea-50bc-9e1c-3ccda4672060', 'data_vg': 'ceph-f303e98e-56ea-50bc-9e1c-3ccda4672060'})  2026-03-25 02:53:41.052530 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8ec576d5-4336-523a-896e-5358117b2269', 'data_vg': 'ceph-8ec576d5-4336-523a-896e-5358117b2269'})  2026-03-25 02:53:41.052540 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:53:41.052551 | orchestrator | 2026-03-25 02:53:41.052561 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-03-25 02:53:41.052569 | orchestrator | Wednesday 25 March 2026 02:53:40 +0000 (0:00:00.170) 0:01:18.258 ******* 2026-03-25 02:53:41.052589 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f303e98e-56ea-50bc-9e1c-3ccda4672060', 'data_vg': 'ceph-f303e98e-56ea-50bc-9e1c-3ccda4672060'})  2026-03-25 02:53:41.052599 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8ec576d5-4336-523a-896e-5358117b2269', 'data_vg': 'ceph-8ec576d5-4336-523a-896e-5358117b2269'})  2026-03-25 02:53:41.052609 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:53:41.052616 | orchestrator | 2026-03-25 02:53:41.052622 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-03-25 02:53:41.052627 | orchestrator | Wednesday 25 March 2026 02:53:40 +0000 (0:00:00.174) 0:01:18.433 ******* 2026-03-25 02:53:41.052643 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f303e98e-56ea-50bc-9e1c-3ccda4672060', 'data_vg': 'ceph-f303e98e-56ea-50bc-9e1c-3ccda4672060'})  2026-03-25 02:53:44.318110 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8ec576d5-4336-523a-896e-5358117b2269', 'data_vg': 'ceph-8ec576d5-4336-523a-896e-5358117b2269'})  2026-03-25 02:53:44.318197 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:53:44.318208 | orchestrator | 2026-03-25 02:53:44.318231 | orchestrator | TASK [Print 'Create WAL LVs for 
ceph_wal_devices'] ***************************** 2026-03-25 02:53:44.318240 | orchestrator | Wednesday 25 March 2026 02:53:41 +0000 (0:00:00.189) 0:01:18.623 ******* 2026-03-25 02:53:44.318246 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f303e98e-56ea-50bc-9e1c-3ccda4672060', 'data_vg': 'ceph-f303e98e-56ea-50bc-9e1c-3ccda4672060'})  2026-03-25 02:53:44.318253 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8ec576d5-4336-523a-896e-5358117b2269', 'data_vg': 'ceph-8ec576d5-4336-523a-896e-5358117b2269'})  2026-03-25 02:53:44.318281 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:53:44.318289 | orchestrator | 2026-03-25 02:53:44.318295 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-03-25 02:53:44.318302 | orchestrator | Wednesday 25 March 2026 02:53:41 +0000 (0:00:00.155) 0:01:18.778 ******* 2026-03-25 02:53:44.318308 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f303e98e-56ea-50bc-9e1c-3ccda4672060', 'data_vg': 'ceph-f303e98e-56ea-50bc-9e1c-3ccda4672060'})  2026-03-25 02:53:44.318315 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8ec576d5-4336-523a-896e-5358117b2269', 'data_vg': 'ceph-8ec576d5-4336-523a-896e-5358117b2269'})  2026-03-25 02:53:44.318321 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:53:44.318327 | orchestrator | 2026-03-25 02:53:44.318333 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-03-25 02:53:44.318339 | orchestrator | Wednesday 25 March 2026 02:53:41 +0000 (0:00:00.179) 0:01:18.958 ******* 2026-03-25 02:53:44.318345 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f303e98e-56ea-50bc-9e1c-3ccda4672060', 'data_vg': 'ceph-f303e98e-56ea-50bc-9e1c-3ccda4672060'})  2026-03-25 02:53:44.318351 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-8ec576d5-4336-523a-896e-5358117b2269', 'data_vg': 'ceph-8ec576d5-4336-523a-896e-5358117b2269'})  2026-03-25 02:53:44.318358 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:53:44.318364 | orchestrator | 2026-03-25 02:53:44.318370 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2026-03-25 02:53:44.318376 | orchestrator | Wednesday 25 March 2026 02:53:41 +0000 (0:00:00.169) 0:01:19.127 ******* 2026-03-25 02:53:44.318382 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f303e98e-56ea-50bc-9e1c-3ccda4672060', 'data_vg': 'ceph-f303e98e-56ea-50bc-9e1c-3ccda4672060'})  2026-03-25 02:53:44.318389 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8ec576d5-4336-523a-896e-5358117b2269', 'data_vg': 'ceph-8ec576d5-4336-523a-896e-5358117b2269'})  2026-03-25 02:53:44.318395 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:53:44.318415 | orchestrator | 2026-03-25 02:53:44.318422 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-03-25 02:53:44.318428 | orchestrator | Wednesday 25 March 2026 02:53:41 +0000 (0:00:00.161) 0:01:19.289 ******* 2026-03-25 02:53:44.318434 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f303e98e-56ea-50bc-9e1c-3ccda4672060', 'data_vg': 'ceph-f303e98e-56ea-50bc-9e1c-3ccda4672060'})  2026-03-25 02:53:44.318440 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8ec576d5-4336-523a-896e-5358117b2269', 'data_vg': 'ceph-8ec576d5-4336-523a-896e-5358117b2269'})  2026-03-25 02:53:44.318446 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:53:44.318453 | orchestrator | 2026-03-25 02:53:44.318459 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-03-25 02:53:44.318465 | orchestrator | Wednesday 25 March 2026 02:53:41 +0000 (0:00:00.198) 0:01:19.488 ******* 2026-03-25 02:53:44.318472 | 
orchestrator | ok: [testbed-node-5] 2026-03-25 02:53:44.318478 | orchestrator | 2026-03-25 02:53:44.318485 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-03-25 02:53:44.318491 | orchestrator | Wednesday 25 March 2026 02:53:42 +0000 (0:00:00.782) 0:01:20.270 ******* 2026-03-25 02:53:44.318497 | orchestrator | ok: [testbed-node-5] 2026-03-25 02:53:44.318503 | orchestrator | 2026-03-25 02:53:44.318509 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-03-25 02:53:44.318515 | orchestrator | Wednesday 25 March 2026 02:53:43 +0000 (0:00:00.525) 0:01:20.796 ******* 2026-03-25 02:53:44.318522 | orchestrator | ok: [testbed-node-5] 2026-03-25 02:53:44.318528 | orchestrator | 2026-03-25 02:53:44.318534 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-03-25 02:53:44.318540 | orchestrator | Wednesday 25 March 2026 02:53:43 +0000 (0:00:00.152) 0:01:20.949 ******* 2026-03-25 02:53:44.318552 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-8ec576d5-4336-523a-896e-5358117b2269', 'vg_name': 'ceph-8ec576d5-4336-523a-896e-5358117b2269'}) 2026-03-25 02:53:44.318559 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-f303e98e-56ea-50bc-9e1c-3ccda4672060', 'vg_name': 'ceph-f303e98e-56ea-50bc-9e1c-3ccda4672060'}) 2026-03-25 02:53:44.318565 | orchestrator | 2026-03-25 02:53:44.318572 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-03-25 02:53:44.318578 | orchestrator | Wednesday 25 March 2026 02:53:43 +0000 (0:00:00.184) 0:01:21.134 ******* 2026-03-25 02:53:44.318598 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f303e98e-56ea-50bc-9e1c-3ccda4672060', 'data_vg': 'ceph-f303e98e-56ea-50bc-9e1c-3ccda4672060'})  2026-03-25 02:53:44.318608 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-8ec576d5-4336-523a-896e-5358117b2269', 'data_vg': 'ceph-8ec576d5-4336-523a-896e-5358117b2269'})  2026-03-25 02:53:44.318614 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:53:44.318621 | orchestrator | 2026-03-25 02:53:44.318628 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2026-03-25 02:53:44.318643 | orchestrator | Wednesday 25 March 2026 02:53:43 +0000 (0:00:00.186) 0:01:21.320 ******* 2026-03-25 02:53:44.318649 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f303e98e-56ea-50bc-9e1c-3ccda4672060', 'data_vg': 'ceph-f303e98e-56ea-50bc-9e1c-3ccda4672060'})  2026-03-25 02:53:44.318656 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8ec576d5-4336-523a-896e-5358117b2269', 'data_vg': 'ceph-8ec576d5-4336-523a-896e-5358117b2269'})  2026-03-25 02:53:44.318670 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:53:44.318677 | orchestrator | 2026-03-25 02:53:44.318683 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-03-25 02:53:44.318690 | orchestrator | Wednesday 25 March 2026 02:53:43 +0000 (0:00:00.191) 0:01:21.512 ******* 2026-03-25 02:53:44.318696 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f303e98e-56ea-50bc-9e1c-3ccda4672060', 'data_vg': 'ceph-f303e98e-56ea-50bc-9e1c-3ccda4672060'})  2026-03-25 02:53:44.318703 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8ec576d5-4336-523a-896e-5358117b2269', 'data_vg': 'ceph-8ec576d5-4336-523a-896e-5358117b2269'})  2026-03-25 02:53:44.318711 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:53:44.318718 | orchestrator | 2026-03-25 02:53:44.318725 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-03-25 02:53:44.318732 | orchestrator | Wednesday 25 March 2026 02:53:44 +0000 (0:00:00.169) 0:01:21.682 ******* 2026-03-25 02:53:44.318739 | 
orchestrator | ok: [testbed-node-5] => { 2026-03-25 02:53:44.318745 | orchestrator |  "lvm_report": { 2026-03-25 02:53:44.318752 | orchestrator |  "lv": [ 2026-03-25 02:53:44.318759 | orchestrator |  { 2026-03-25 02:53:44.318766 | orchestrator |  "lv_name": "osd-block-8ec576d5-4336-523a-896e-5358117b2269", 2026-03-25 02:53:44.318774 | orchestrator |  "vg_name": "ceph-8ec576d5-4336-523a-896e-5358117b2269" 2026-03-25 02:53:44.318781 | orchestrator |  }, 2026-03-25 02:53:44.318787 | orchestrator |  { 2026-03-25 02:53:44.318794 | orchestrator |  "lv_name": "osd-block-f303e98e-56ea-50bc-9e1c-3ccda4672060", 2026-03-25 02:53:44.318801 | orchestrator |  "vg_name": "ceph-f303e98e-56ea-50bc-9e1c-3ccda4672060" 2026-03-25 02:53:44.318808 | orchestrator |  } 2026-03-25 02:53:44.318815 | orchestrator |  ], 2026-03-25 02:53:44.318822 | orchestrator |  "pv": [ 2026-03-25 02:53:44.318829 | orchestrator |  { 2026-03-25 02:53:44.318836 | orchestrator |  "pv_name": "/dev/sdb", 2026-03-25 02:53:44.318843 | orchestrator |  "vg_name": "ceph-f303e98e-56ea-50bc-9e1c-3ccda4672060" 2026-03-25 02:53:44.318849 | orchestrator |  }, 2026-03-25 02:53:44.318856 | orchestrator |  { 2026-03-25 02:53:44.318863 | orchestrator |  "pv_name": "/dev/sdc", 2026-03-25 02:53:44.318878 | orchestrator |  "vg_name": "ceph-8ec576d5-4336-523a-896e-5358117b2269" 2026-03-25 02:53:44.318884 | orchestrator |  } 2026-03-25 02:53:44.318890 | orchestrator |  ] 2026-03-25 02:53:44.318896 | orchestrator |  } 2026-03-25 02:53:44.318902 | orchestrator | } 2026-03-25 02:53:44.318909 | orchestrator | 2026-03-25 02:53:44.318916 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-25 02:53:44.318922 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-03-25 02:53:44.318929 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-03-25 02:53:44.318935 | 
orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-03-25 02:53:44.318942 | orchestrator | 2026-03-25 02:53:44.318949 | orchestrator | 2026-03-25 02:53:44.318955 | orchestrator | 2026-03-25 02:53:44.318961 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-25 02:53:44.318967 | orchestrator | Wednesday 25 March 2026 02:53:44 +0000 (0:00:00.185) 0:01:21.867 ******* 2026-03-25 02:53:44.318970 | orchestrator | =============================================================================== 2026-03-25 02:53:44.318974 | orchestrator | Create block VGs -------------------------------------------------------- 5.85s 2026-03-25 02:53:44.318978 | orchestrator | Create block LVs -------------------------------------------------------- 4.12s 2026-03-25 02:53:44.318981 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.81s 2026-03-25 02:53:44.318985 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.68s 2026-03-25 02:53:44.318989 | orchestrator | Add known links to the list of available block devices ------------------ 1.60s 2026-03-25 02:53:44.318992 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.59s 2026-03-25 02:53:44.318996 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.58s 2026-03-25 02:53:44.319000 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.56s 2026-03-25 02:53:44.319008 | orchestrator | Add known partitions to the list of available block devices ------------- 1.47s 2026-03-25 02:53:44.771583 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 1.43s 2026-03-25 02:53:44.771702 | orchestrator | Add known links to the list of available block devices ------------------ 1.08s 2026-03-25 02:53:44.771724 | 
orchestrator | Add known links to the list of available block devices ------------------ 0.99s 2026-03-25 02:53:44.771764 | orchestrator | Calculate VG sizes (with buffer) ---------------------------------------- 0.90s 2026-03-25 02:53:44.771779 | orchestrator | Print LVM report data --------------------------------------------------- 0.86s 2026-03-25 02:53:44.771788 | orchestrator | Count OSDs put on ceph_db_wal_devices defined in lvm_volumes ------------ 0.84s 2026-03-25 02:53:44.771796 | orchestrator | Fail if block LV defined in lvm_volumes is missing ---------------------- 0.80s 2026-03-25 02:53:44.771805 | orchestrator | Add known links to the list of available block devices ------------------ 0.80s 2026-03-25 02:53:44.771813 | orchestrator | Add known links to the list of available block devices ------------------ 0.79s 2026-03-25 02:53:44.771822 | orchestrator | Add known partitions to the list of available block devices ------------- 0.79s 2026-03-25 02:53:44.771831 | orchestrator | Print 'Create block VGs' ------------------------------------------------ 0.78s 2026-03-25 02:53:57.563650 | orchestrator | 2026-03-25 02:53:57 | INFO  | Task e975d05e-88c3-491d-8a59-668b01501bc9 (facts) was prepared for execution. 2026-03-25 02:53:57.563805 | orchestrator | 2026-03-25 02:53:57 | INFO  | It takes a moment until task e975d05e-88c3-491d-8a59-668b01501bc9 (facts) has been started and output is visible here. 
2026-03-25 02:54:12.525350 | orchestrator | 2026-03-25 02:54:12.525552 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-03-25 02:54:12.525601 | orchestrator | 2026-03-25 02:54:12.525615 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-03-25 02:54:12.525627 | orchestrator | Wednesday 25 March 2026 02:54:02 +0000 (0:00:00.329) 0:00:00.329 ******* 2026-03-25 02:54:12.525638 | orchestrator | ok: [testbed-node-0] 2026-03-25 02:54:12.525649 | orchestrator | ok: [testbed-manager] 2026-03-25 02:54:12.525660 | orchestrator | ok: [testbed-node-1] 2026-03-25 02:54:12.525670 | orchestrator | ok: [testbed-node-2] 2026-03-25 02:54:12.525681 | orchestrator | ok: [testbed-node-3] 2026-03-25 02:54:12.525692 | orchestrator | ok: [testbed-node-4] 2026-03-25 02:54:12.525702 | orchestrator | ok: [testbed-node-5] 2026-03-25 02:54:12.525713 | orchestrator | 2026-03-25 02:54:12.525724 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-03-25 02:54:12.525735 | orchestrator | Wednesday 25 March 2026 02:54:03 +0000 (0:00:01.237) 0:00:01.567 ******* 2026-03-25 02:54:12.525745 | orchestrator | skipping: [testbed-manager] 2026-03-25 02:54:12.525757 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:54:12.525768 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:54:12.525778 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:54:12.525789 | orchestrator | skipping: [testbed-node-3] 2026-03-25 02:54:12.525799 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:54:12.525810 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:54:12.525821 | orchestrator | 2026-03-25 02:54:12.525831 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-03-25 02:54:12.525842 | orchestrator | 2026-03-25 02:54:12.525853 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2026-03-25 02:54:12.525864 | orchestrator | Wednesday 25 March 2026 02:54:05 +0000 (0:00:01.808) 0:00:03.375 ******* 2026-03-25 02:54:12.525874 | orchestrator | ok: [testbed-node-2] 2026-03-25 02:54:12.525887 | orchestrator | ok: [testbed-node-0] 2026-03-25 02:54:12.525899 | orchestrator | ok: [testbed-node-1] 2026-03-25 02:54:12.525911 | orchestrator | ok: [testbed-manager] 2026-03-25 02:54:12.525923 | orchestrator | ok: [testbed-node-4] 2026-03-25 02:54:12.525935 | orchestrator | ok: [testbed-node-5] 2026-03-25 02:54:12.525947 | orchestrator | ok: [testbed-node-3] 2026-03-25 02:54:12.525959 | orchestrator | 2026-03-25 02:54:12.525972 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-03-25 02:54:12.525984 | orchestrator | 2026-03-25 02:54:12.525997 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-03-25 02:54:12.526009 | orchestrator | Wednesday 25 March 2026 02:54:11 +0000 (0:00:05.816) 0:00:09.192 ******* 2026-03-25 02:54:12.526120 | orchestrator | skipping: [testbed-manager] 2026-03-25 02:54:12.526133 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:54:12.526146 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:54:12.526158 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:54:12.526170 | orchestrator | skipping: [testbed-node-3] 2026-03-25 02:54:12.526182 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:54:12.526195 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:54:12.526207 | orchestrator | 2026-03-25 02:54:12.526219 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-25 02:54:12.526232 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-25 02:54:12.526246 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-03-25 02:54:12.526258 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-25 02:54:12.526270 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-25 02:54:12.526281 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-25 02:54:12.526302 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-25 02:54:12.526314 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-25 02:54:12.526324 | orchestrator | 2026-03-25 02:54:12.526335 | orchestrator | 2026-03-25 02:54:12.526346 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-25 02:54:12.526375 | orchestrator | Wednesday 25 March 2026 02:54:11 +0000 (0:00:00.633) 0:00:09.826 ******* 2026-03-25 02:54:12.526386 | orchestrator | =============================================================================== 2026-03-25 02:54:12.526397 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.82s 2026-03-25 02:54:12.526407 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.81s 2026-03-25 02:54:12.526418 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.24s 2026-03-25 02:54:12.526429 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.63s 2026-03-25 02:54:15.406077 | orchestrator | 2026-03-25 02:54:15 | INFO  | Task 58966610-9336-44b8-8275-10cd9a1e5da2 (ceph) was prepared for execution. 2026-03-25 02:54:15.406156 | orchestrator | 2026-03-25 02:54:15 | INFO  | It takes a moment until task 58966610-9336-44b8-8275-10cd9a1e5da2 (ceph) has been started and output is visible here. 
2026-03-25 02:54:36.091338 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-03-25 02:54:36.091397 | orchestrator | 2.16.14 2026-03-25 02:54:36.091406 | orchestrator | 2026-03-25 02:54:36.091412 | orchestrator | PLAY [Prepare deployment of Ceph services] ************************************* 2026-03-25 02:54:36.091418 | orchestrator | 2026-03-25 02:54:36.091424 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-03-25 02:54:36.091429 | orchestrator | Wednesday 25 March 2026 02:54:21 +0000 (0:00:00.914) 0:00:00.914 ******* 2026-03-25 02:54:36.091435 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-25 02:54:36.091441 | orchestrator | 2026-03-25 02:54:36.091446 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-03-25 02:54:36.091452 | orchestrator | Wednesday 25 March 2026 02:54:22 +0000 (0:00:01.326) 0:00:02.240 ******* 2026-03-25 02:54:36.091457 | orchestrator | ok: [testbed-node-4] 2026-03-25 02:54:36.091496 | orchestrator | ok: [testbed-node-3] 2026-03-25 02:54:36.091506 | orchestrator | ok: [testbed-node-5] 2026-03-25 02:54:36.091516 | orchestrator | ok: [testbed-node-0] 2026-03-25 02:54:36.091525 | orchestrator | ok: [testbed-node-1] 2026-03-25 02:54:36.091533 | orchestrator | ok: [testbed-node-2] 2026-03-25 02:54:36.091539 | orchestrator | 2026-03-25 02:54:36.091545 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-03-25 02:54:36.091550 | orchestrator | Wednesday 25 March 2026 02:54:24 +0000 (0:00:01.339) 0:00:03.580 ******* 2026-03-25 02:54:36.091556 | orchestrator | ok: [testbed-node-3] 2026-03-25 02:54:36.091561 | orchestrator | ok: [testbed-node-4] 2026-03-25 02:54:36.091566 | orchestrator | ok: [testbed-node-5] 2026-03-25 
02:54:36.091572 | orchestrator | ok: [testbed-node-0] 2026-03-25 02:54:36.091577 | orchestrator | ok: [testbed-node-1] 2026-03-25 02:54:36.091582 | orchestrator | ok: [testbed-node-2] 2026-03-25 02:54:36.091588 | orchestrator | 2026-03-25 02:54:36.091593 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-03-25 02:54:36.091599 | orchestrator | Wednesday 25 March 2026 02:54:25 +0000 (0:00:00.916) 0:00:04.497 ******* 2026-03-25 02:54:36.091604 | orchestrator | ok: [testbed-node-3] 2026-03-25 02:54:36.091609 | orchestrator | ok: [testbed-node-4] 2026-03-25 02:54:36.091614 | orchestrator | ok: [testbed-node-5] 2026-03-25 02:54:36.091620 | orchestrator | ok: [testbed-node-0] 2026-03-25 02:54:36.091641 | orchestrator | ok: [testbed-node-1] 2026-03-25 02:54:36.091646 | orchestrator | ok: [testbed-node-2] 2026-03-25 02:54:36.091652 | orchestrator | 2026-03-25 02:54:36.091657 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-03-25 02:54:36.091662 | orchestrator | Wednesday 25 March 2026 02:54:26 +0000 (0:00:00.991) 0:00:05.489 ******* 2026-03-25 02:54:36.091668 | orchestrator | ok: [testbed-node-3] 2026-03-25 02:54:36.091673 | orchestrator | ok: [testbed-node-4] 2026-03-25 02:54:36.091678 | orchestrator | ok: [testbed-node-5] 2026-03-25 02:54:36.091683 | orchestrator | ok: [testbed-node-0] 2026-03-25 02:54:36.091689 | orchestrator | ok: [testbed-node-1] 2026-03-25 02:54:36.091694 | orchestrator | ok: [testbed-node-2] 2026-03-25 02:54:36.091701 | orchestrator | 2026-03-25 02:54:36.091710 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-03-25 02:54:36.091722 | orchestrator | Wednesday 25 March 2026 02:54:26 +0000 (0:00:00.926) 0:00:06.415 ******* 2026-03-25 02:54:36.091733 | orchestrator | ok: [testbed-node-3] 2026-03-25 02:54:36.091742 | orchestrator | ok: [testbed-node-4] 2026-03-25 02:54:36.091751 | orchestrator | ok: 
[testbed-node-5] 2026-03-25 02:54:36.091759 | orchestrator | ok: [testbed-node-0] 2026-03-25 02:54:36.091768 | orchestrator | ok: [testbed-node-1] 2026-03-25 02:54:36.091777 | orchestrator | ok: [testbed-node-2] 2026-03-25 02:54:36.091786 | orchestrator | 2026-03-25 02:54:36.091795 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-03-25 02:54:36.091804 | orchestrator | Wednesday 25 March 2026 02:54:27 +0000 (0:00:00.698) 0:00:07.113 ******* 2026-03-25 02:54:36.091813 | orchestrator | ok: [testbed-node-3] 2026-03-25 02:54:36.091822 | orchestrator | ok: [testbed-node-4] 2026-03-25 02:54:36.091831 | orchestrator | ok: [testbed-node-5] 2026-03-25 02:54:36.091840 | orchestrator | ok: [testbed-node-0] 2026-03-25 02:54:36.091848 | orchestrator | ok: [testbed-node-1] 2026-03-25 02:54:36.091857 | orchestrator | ok: [testbed-node-2] 2026-03-25 02:54:36.091865 | orchestrator | 2026-03-25 02:54:36.091874 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-03-25 02:54:36.091912 | orchestrator | Wednesday 25 March 2026 02:54:28 +0000 (0:00:01.057) 0:00:08.170 ******* 2026-03-25 02:54:36.091923 | orchestrator | skipping: [testbed-node-3] 2026-03-25 02:54:36.091934 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:54:36.091944 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:54:36.091953 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:54:36.091963 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:54:36.091973 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:54:36.091983 | orchestrator | 2026-03-25 02:54:36.091994 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-03-25 02:54:36.092005 | orchestrator | Wednesday 25 March 2026 02:54:29 +0000 (0:00:00.763) 0:00:08.934 ******* 2026-03-25 02:54:36.092014 | orchestrator | ok: [testbed-node-3] 2026-03-25 02:54:36.092020 | orchestrator | 
ok: [testbed-node-4] 2026-03-25 02:54:36.092026 | orchestrator | ok: [testbed-node-5] 2026-03-25 02:54:36.092031 | orchestrator | ok: [testbed-node-0] 2026-03-25 02:54:36.092037 | orchestrator | ok: [testbed-node-1] 2026-03-25 02:54:36.092051 | orchestrator | ok: [testbed-node-2] 2026-03-25 02:54:36.092057 | orchestrator | 2026-03-25 02:54:36.092062 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-03-25 02:54:36.092068 | orchestrator | Wednesday 25 March 2026 02:54:30 +0000 (0:00:00.958) 0:00:09.893 ******* 2026-03-25 02:54:36.092073 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-25 02:54:36.092079 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-25 02:54:36.092084 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-25 02:54:36.092090 | orchestrator | 2026-03-25 02:54:36.092095 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-03-25 02:54:36.092100 | orchestrator | Wednesday 25 March 2026 02:54:31 +0000 (0:00:00.779) 0:00:10.672 ******* 2026-03-25 02:54:36.092113 | orchestrator | ok: [testbed-node-3] 2026-03-25 02:54:36.092119 | orchestrator | ok: [testbed-node-4] 2026-03-25 02:54:36.092124 | orchestrator | ok: [testbed-node-5] 2026-03-25 02:54:36.092141 | orchestrator | ok: [testbed-node-0] 2026-03-25 02:54:36.092147 | orchestrator | ok: [testbed-node-1] 2026-03-25 02:54:36.092152 | orchestrator | ok: [testbed-node-2] 2026-03-25 02:54:36.092157 | orchestrator | 2026-03-25 02:54:36.092163 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-03-25 02:54:36.092168 | orchestrator | Wednesday 25 March 2026 02:54:32 +0000 (0:00:00.887) 0:00:11.560 ******* 2026-03-25 02:54:36.092174 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => 
(item=testbed-node-0) 2026-03-25 02:54:36.092179 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-25 02:54:36.092187 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-25 02:54:36.092196 | orchestrator | 2026-03-25 02:54:36.092212 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-03-25 02:54:36.092220 | orchestrator | Wednesday 25 March 2026 02:54:34 +0000 (0:00:02.553) 0:00:14.113 ******* 2026-03-25 02:54:36.092228 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-03-25 02:54:36.092237 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-03-25 02:54:36.092246 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-03-25 02:54:36.092254 | orchestrator | skipping: [testbed-node-3] 2026-03-25 02:54:36.092262 | orchestrator | 2026-03-25 02:54:36.092271 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-03-25 02:54:36.092279 | orchestrator | Wednesday 25 March 2026 02:54:35 +0000 (0:00:00.434) 0:00:14.548 ******* 2026-03-25 02:54:36.092289 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-03-25 02:54:36.092299 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-03-25 02:54:36.092305 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 
'testbed-node-2', 'ansible_loop_var': 'item'})  2026-03-25 02:54:36.092311 | orchestrator | skipping: [testbed-node-3] 2026-03-25 02:54:36.092316 | orchestrator | 2026-03-25 02:54:36.092322 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-03-25 02:54:36.092327 | orchestrator | Wednesday 25 March 2026 02:54:35 +0000 (0:00:00.669) 0:00:15.217 ******* 2026-03-25 02:54:36.092333 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-25 02:54:36.092340 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-25 02:54:36.092346 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-25 02:54:36.092357 | orchestrator | skipping: [testbed-node-3] 2026-03-25 02:54:36.092362 | orchestrator | 2026-03-25 02:54:36.092371 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] 
*************************** 2026-03-25 02:54:36.092377 | orchestrator | Wednesday 25 March 2026 02:54:35 +0000 (0:00:00.137) 0:00:15.356 ******* 2026-03-25 02:54:36.092390 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-25 02:54:33.161302', 'end': '2026-03-25 02:54:33.202828', 'delta': '0:00:00.041526', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-03-25 02:54:47.679032 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-25 02:54:33.743868', 'end': '2026-03-25 02:54:33.796760', 'delta': '0:00:00.052892', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-03-25 02:54:47.679154 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-25 02:54:34.327575', 'end': '2026-03-25 02:54:34.371259', 'delta': 
'0:00:00.043684', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-03-25 02:54:47.679175 | orchestrator | skipping: [testbed-node-3] 2026-03-25 02:54:47.679192 | orchestrator | 2026-03-25 02:54:47.679206 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-03-25 02:54:47.679222 | orchestrator | Wednesday 25 March 2026 02:54:36 +0000 (0:00:00.181) 0:00:15.537 ******* 2026-03-25 02:54:47.679235 | orchestrator | ok: [testbed-node-3] 2026-03-25 02:54:47.679249 | orchestrator | ok: [testbed-node-4] 2026-03-25 02:54:47.679263 | orchestrator | ok: [testbed-node-5] 2026-03-25 02:54:47.679276 | orchestrator | ok: [testbed-node-0] 2026-03-25 02:54:47.679290 | orchestrator | ok: [testbed-node-1] 2026-03-25 02:54:47.679303 | orchestrator | ok: [testbed-node-2] 2026-03-25 02:54:47.679316 | orchestrator | 2026-03-25 02:54:47.679329 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-03-25 02:54:47.679342 | orchestrator | Wednesday 25 March 2026 02:54:36 +0000 (0:00:00.815) 0:00:16.352 ******* 2026-03-25 02:54:47.679355 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-25 02:54:47.679368 | orchestrator | 2026-03-25 02:54:47.679380 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-03-25 02:54:47.679392 | orchestrator | Wednesday 25 March 2026 02:54:37 +0000 (0:00:00.937) 0:00:17.290 ******* 2026-03-25 02:54:47.679436 | orchestrator | skipping: [testbed-node-3] 2026-03-25 02:54:47.679449 | 
orchestrator | skipping: [testbed-node-4] 2026-03-25 02:54:47.679461 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:54:47.679507 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:54:47.679522 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:54:47.679536 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:54:47.679550 | orchestrator | 2026-03-25 02:54:47.679564 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-03-25 02:54:47.679579 | orchestrator | Wednesday 25 March 2026 02:54:38 +0000 (0:00:00.994) 0:00:18.284 ******* 2026-03-25 02:54:47.679593 | orchestrator | skipping: [testbed-node-3] 2026-03-25 02:54:47.679607 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:54:47.679621 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:54:47.679634 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:54:47.679647 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:54:47.679661 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:54:47.679674 | orchestrator | 2026-03-25 02:54:47.679689 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-25 02:54:47.679703 | orchestrator | Wednesday 25 March 2026 02:54:40 +0000 (0:00:01.468) 0:00:19.753 ******* 2026-03-25 02:54:47.679718 | orchestrator | skipping: [testbed-node-3] 2026-03-25 02:54:47.679732 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:54:47.679746 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:54:47.679760 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:54:47.679773 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:54:47.679805 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:54:47.679819 | orchestrator | 2026-03-25 02:54:47.679832 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-03-25 02:54:47.679845 | orchestrator | Wednesday 25 March 2026 02:54:41 
+0000 (0:00:00.747) 0:00:20.501 ******* 2026-03-25 02:54:47.679857 | orchestrator | skipping: [testbed-node-3] 2026-03-25 02:54:47.679871 | orchestrator | 2026-03-25 02:54:47.679885 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-03-25 02:54:47.679899 | orchestrator | Wednesday 25 March 2026 02:54:41 +0000 (0:00:00.153) 0:00:20.655 ******* 2026-03-25 02:54:47.679912 | orchestrator | skipping: [testbed-node-3] 2026-03-25 02:54:47.679926 | orchestrator | 2026-03-25 02:54:47.679939 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-25 02:54:47.679952 | orchestrator | Wednesday 25 March 2026 02:54:41 +0000 (0:00:00.258) 0:00:20.913 ******* 2026-03-25 02:54:47.679965 | orchestrator | skipping: [testbed-node-3] 2026-03-25 02:54:47.679978 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:54:47.679990 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:54:47.680003 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:54:47.680016 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:54:47.680028 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:54:47.680042 | orchestrator | 2026-03-25 02:54:47.680081 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-03-25 02:54:47.680095 | orchestrator | Wednesday 25 March 2026 02:54:42 +0000 (0:00:00.924) 0:00:21.838 ******* 2026-03-25 02:54:47.680108 | orchestrator | skipping: [testbed-node-3] 2026-03-25 02:54:47.680122 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:54:47.680135 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:54:47.680147 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:54:47.680161 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:54:47.680174 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:54:47.680188 | orchestrator | 2026-03-25 02:54:47.680200 | orchestrator | TASK [ceph-facts : 
Set_fact build devices from resolved symlinks] ************** 2026-03-25 02:54:47.680213 | orchestrator | Wednesday 25 March 2026 02:54:43 +0000 (0:00:00.731) 0:00:22.570 ******* 2026-03-25 02:54:47.680225 | orchestrator | skipping: [testbed-node-3] 2026-03-25 02:54:47.680239 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:54:47.680251 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:54:47.680279 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:54:47.680292 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:54:47.680305 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:54:47.680318 | orchestrator | 2026-03-25 02:54:47.680332 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-03-25 02:54:47.680345 | orchestrator | Wednesday 25 March 2026 02:54:44 +0000 (0:00:00.991) 0:00:23.561 ******* 2026-03-25 02:54:47.680358 | orchestrator | skipping: [testbed-node-3] 2026-03-25 02:54:47.680370 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:54:47.680384 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:54:47.680397 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:54:47.680409 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:54:47.680422 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:54:47.680437 | orchestrator | 2026-03-25 02:54:47.680450 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-03-25 02:54:47.680463 | orchestrator | Wednesday 25 March 2026 02:54:44 +0000 (0:00:00.717) 0:00:24.278 ******* 2026-03-25 02:54:47.680600 | orchestrator | skipping: [testbed-node-3] 2026-03-25 02:54:47.680620 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:54:47.680633 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:54:47.680643 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:54:47.680654 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:54:47.680665 | orchestrator 
| skipping: [testbed-node-2] 2026-03-25 02:54:47.680675 | orchestrator | 2026-03-25 02:54:47.680686 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-03-25 02:54:47.680697 | orchestrator | Wednesday 25 March 2026 02:54:45 +0000 (0:00:01.012) 0:00:25.291 ******* 2026-03-25 02:54:47.680709 | orchestrator | skipping: [testbed-node-3] 2026-03-25 02:54:47.680720 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:54:47.680731 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:54:47.680741 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:54:47.680751 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:54:47.680761 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:54:47.680771 | orchestrator | 2026-03-25 02:54:47.680782 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-03-25 02:54:47.680793 | orchestrator | Wednesday 25 March 2026 02:54:46 +0000 (0:00:00.718) 0:00:26.010 ******* 2026-03-25 02:54:47.680804 | orchestrator | skipping: [testbed-node-3] 2026-03-25 02:54:47.680815 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:54:47.680825 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:54:47.680835 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:54:47.680859 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:54:47.680878 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:54:47.680888 | orchestrator | 2026-03-25 02:54:47.680898 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-03-25 02:54:47.680909 | orchestrator | Wednesday 25 March 2026 02:54:47 +0000 (0:00:00.960) 0:00:26.970 ******* 2026-03-25 02:54:47.680923 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--a7f517e2--016b--5c10--ac21--20c48339115f-osd--block--a7f517e2--016b--5c10--ac21--20c48339115f', 'dm-uuid-LVM-ppL9nqq4Eft0DXjzsCdcW3axPqGhidIo63eFyg4nkEr3IXy7pO0UwAAWeQ8GeZyo'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-25 02:54:47.680947 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--2eb637af--fcba--56ed--b416--856a8f376a6e-osd--block--2eb637af--fcba--56ed--b416--856a8f376a6e', 'dm-uuid-LVM-I4brnFGe2wqMxfNLTgnFWAlpGdDDIQ6ufudluz5gbOp2W0Ru1BAN3Lof8sluy2g8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-25 02:54:47.680982 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-25 02:54:47.836576 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 
Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-25 02:54:47.836662 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-25 02:54:47.836672 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-25 02:54:47.836680 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-25 02:54:47.836687 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-25 02:54:47.836693 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': 
{'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-25 02:54:47.836699 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-25 02:54:47.836738 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5418d243-c22a-425d-8a7d-7c43bd549130', 'scsi-SQEMU_QEMU_HARDDISK_5418d243-c22a-425d-8a7d-7c43bd549130'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5418d243-c22a-425d-8a7d-7c43bd549130-part1', 'scsi-SQEMU_QEMU_HARDDISK_5418d243-c22a-425d-8a7d-7c43bd549130-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5418d243-c22a-425d-8a7d-7c43bd549130-part14', 'scsi-SQEMU_QEMU_HARDDISK_5418d243-c22a-425d-8a7d-7c43bd549130-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_5418d243-c22a-425d-8a7d-7c43bd549130-part15', 'scsi-SQEMU_QEMU_HARDDISK_5418d243-c22a-425d-8a7d-7c43bd549130-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5418d243-c22a-425d-8a7d-7c43bd549130-part16', 'scsi-SQEMU_QEMU_HARDDISK_5418d243-c22a-425d-8a7d-7c43bd549130-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-25 02:54:47.836765 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--a7f517e2--016b--5c10--ac21--20c48339115f-osd--block--a7f517e2--016b--5c10--ac21--20c48339115f'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-I510NI-gVOy-fVrn-Rpok-wKnF-L9wv-pxblpK', 'scsi-0QEMU_QEMU_HARDDISK_e0cf0e31-edea-4833-ac86-8b3021cd24a1', 'scsi-SQEMU_QEMU_HARDDISK_e0cf0e31-edea-4833-ac86-8b3021cd24a1'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-25 02:54:47.836774 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--2eb637af--fcba--56ed--b416--856a8f376a6e-osd--block--2eb637af--fcba--56ed--b416--856a8f376a6e'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-ot6f5w-cwBB-rMe8-ml4g-P1Wb-D3d5-I1RZ9d', 'scsi-0QEMU_QEMU_HARDDISK_eaa5e6a9-2c24-4b33-854e-103871b2e9c6', 'scsi-SQEMU_QEMU_HARDDISK_eaa5e6a9-2c24-4b33-854e-103871b2e9c6'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-25 02:54:47.836781 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_99e65ea9-8a8c-4114-a95e-6d6b779e8981', 'scsi-SQEMU_QEMU_HARDDISK_99e65ea9-8a8c-4114-a95e-6d6b779e8981'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-25 02:54:47.836797 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--82366886--ea97--5dba--b5cd--187414e0593f-osd--block--82366886--ea97--5dba--b5cd--187414e0593f', 'dm-uuid-LVM-1B6VDGPSmmjj7HLdTGtTln0UtIEd11ZxX0sqLUd6idXl2rnpkfAOrMye3Xxtdnqp'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-25 02:54:47.836811 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-25-01-42-59-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-25 02:54:47.975850 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--fa1f2bca--96f4--5f59--9dac--c3efdd146138-osd--block--fa1f2bca--96f4--5f59--9dac--c3efdd146138', 
'dm-uuid-LVM-qi80GQE6Tcg1H1Qaou1HQKIw0Y18K2MMiRtObCOmMljlX3NyraHv57elKkc4U5Oq'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-25 02:54:47.975939 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-25 02:54:47.975952 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-25 02:54:47.975961 | orchestrator | skipping: [testbed-node-3] 2026-03-25 02:54:47.975970 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-25 02:54:47.975978 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-25 02:54:47.975986 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-25 02:54:47.976029 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-25 02:54:47.976037 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-25 02:54:47.976045 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 
'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-25 02:54:47.976073 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6cb51c54-ae34-41ee-aa7a-55f1cdeeb529', 'scsi-SQEMU_QEMU_HARDDISK_6cb51c54-ae34-41ee-aa7a-55f1cdeeb529'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6cb51c54-ae34-41ee-aa7a-55f1cdeeb529-part1', 'scsi-SQEMU_QEMU_HARDDISK_6cb51c54-ae34-41ee-aa7a-55f1cdeeb529-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6cb51c54-ae34-41ee-aa7a-55f1cdeeb529-part14', 'scsi-SQEMU_QEMU_HARDDISK_6cb51c54-ae34-41ee-aa7a-55f1cdeeb529-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6cb51c54-ae34-41ee-aa7a-55f1cdeeb529-part15', 'scsi-SQEMU_QEMU_HARDDISK_6cb51c54-ae34-41ee-aa7a-55f1cdeeb529-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6cb51c54-ae34-41ee-aa7a-55f1cdeeb529-part16', 'scsi-SQEMU_QEMU_HARDDISK_6cb51c54-ae34-41ee-aa7a-55f1cdeeb529-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': 
'09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-25 02:54:47.976082 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f303e98e--56ea--50bc--9e1c--3ccda4672060-osd--block--f303e98e--56ea--50bc--9e1c--3ccda4672060', 'dm-uuid-LVM-UU9fet4LjPs1QLROYR3DS61lWfbcudTJUiFeyHJNagHuqxrmYCAPg3v2ocgFP63X'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-25 02:54:47.976099 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--82366886--ea97--5dba--b5cd--187414e0593f-osd--block--82366886--ea97--5dba--b5cd--187414e0593f'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-CIqKvA-lt1d-4qQz-KNts-krwk-yQ0u-1PHslV', 'scsi-0QEMU_QEMU_HARDDISK_10d736b4-dcf8-42aa-aae6-a1381d72468f', 'scsi-SQEMU_QEMU_HARDDISK_10d736b4-dcf8-42aa-aae6-a1381d72468f'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-25 02:54:47.976107 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8ec576d5--4336--523a--896e--5358117b2269-osd--block--8ec576d5--4336--523a--896e--5358117b2269', 'dm-uuid-LVM-AjTepPC9YBwKeu38Jf1R7NGMBGxHD64b1bYlOV1jbrUHbIYS3hAMWkKb5QrnOpnI'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-25 02:54:47.976120 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--fa1f2bca--96f4--5f59--9dac--c3efdd146138-osd--block--fa1f2bca--96f4--5f59--9dac--c3efdd146138'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-d5kG3K-9osj-2aIh-xjKb-72Hm-d5Wn-f2zH7s', 'scsi-0QEMU_QEMU_HARDDISK_37f05188-2a00-44e2-a0b8-7549f9da5347', 'scsi-SQEMU_QEMU_HARDDISK_37f05188-2a00-44e2-a0b8-7549f9da5347'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-25 02:54:48.166104 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-25 02:54:48.166191 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3e1f7d9f-c106-4693-b0da-d762a5de4a11', 'scsi-SQEMU_QEMU_HARDDISK_3e1f7d9f-c106-4693-b0da-d762a5de4a11'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-25 02:54:48.166200 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-25 02:54:48.166205 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-25-01-43-06-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-25 02:54:48.166242 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2026-03-25 02:54:48.166247 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-25 02:54:48.166251 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-25 02:54:48.166255 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-25 02:54:48.166274 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-25 02:54:48.166281 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:54:48.166287 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-25 02:54:48.166305 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0ceb4511-88da-4cb0-8dd1-61d4a7cc2ad2', 'scsi-SQEMU_QEMU_HARDDISK_0ceb4511-88da-4cb0-8dd1-61d4a7cc2ad2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0ceb4511-88da-4cb0-8dd1-61d4a7cc2ad2-part1', 'scsi-SQEMU_QEMU_HARDDISK_0ceb4511-88da-4cb0-8dd1-61d4a7cc2ad2-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0ceb4511-88da-4cb0-8dd1-61d4a7cc2ad2-part14', 'scsi-SQEMU_QEMU_HARDDISK_0ceb4511-88da-4cb0-8dd1-61d4a7cc2ad2-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0ceb4511-88da-4cb0-8dd1-61d4a7cc2ad2-part15', 'scsi-SQEMU_QEMU_HARDDISK_0ceb4511-88da-4cb0-8dd1-61d4a7cc2ad2-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0ceb4511-88da-4cb0-8dd1-61d4a7cc2ad2-part16', 
'scsi-SQEMU_QEMU_HARDDISK_0ceb4511-88da-4cb0-8dd1-61d4a7cc2ad2-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-25 02:54:48.166320 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--f303e98e--56ea--50bc--9e1c--3ccda4672060-osd--block--f303e98e--56ea--50bc--9e1c--3ccda4672060'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-to62r3-CyRH-TR4y-N8rR-DKBC-8SUV-NrvEkE', 'scsi-0QEMU_QEMU_HARDDISK_04cbe055-706b-4644-9107-d77d79be5a29', 'scsi-SQEMU_QEMU_HARDDISK_04cbe055-706b-4644-9107-d77d79be5a29'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-25 02:54:48.166332 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--8ec576d5--4336--523a--896e--5358117b2269-osd--block--8ec576d5--4336--523a--896e--5358117b2269'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-FUT1Bq-riIG-e3wV-m2Zc-DHH8-HB53-ximoP3', 'scsi-0QEMU_QEMU_HARDDISK_fd5367dc-993e-4d7d-b2a6-757e2a17e9b7', 'scsi-SQEMU_QEMU_HARDDISK_fd5367dc-993e-4d7d-b2a6-757e2a17e9b7'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-25 02:54:48.429688 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_82545a3e-e213-461e-98f1-90cf18f03519', 'scsi-SQEMU_QEMU_HARDDISK_82545a3e-e213-461e-98f1-90cf18f03519'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-25 02:54:48.429784 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-25 02:54:48.429811 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-25-01-43-03-00']}, 'model': 'QEMU 
DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-25 02:54:48.429819 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-25 02:54:48.429835 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-25 02:54:48.429841 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-25 02:54:48.429891 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-25 02:54:48.429897 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-25 02:54:48.429917 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-25 02:54:48.429922 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-25 02:54:48.429934 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_225bc811-b117-4ab1-9890-e393d3b780be', 'scsi-SQEMU_QEMU_HARDDISK_225bc811-b117-4ab1-9890-e393d3b780be'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_225bc811-b117-4ab1-9890-e393d3b780be-part1', 'scsi-SQEMU_QEMU_HARDDISK_225bc811-b117-4ab1-9890-e393d3b780be-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_225bc811-b117-4ab1-9890-e393d3b780be-part14', 'scsi-SQEMU_QEMU_HARDDISK_225bc811-b117-4ab1-9890-e393d3b780be-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_225bc811-b117-4ab1-9890-e393d3b780be-part15', 'scsi-SQEMU_QEMU_HARDDISK_225bc811-b117-4ab1-9890-e393d3b780be-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_225bc811-b117-4ab1-9890-e393d3b780be-part16', 'scsi-SQEMU_QEMU_HARDDISK_225bc811-b117-4ab1-9890-e393d3b780be-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-25 02:54:48.429945 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-25-01-43-00-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-25 02:54:48.429951 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:54:48.429956 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-25 02:54:48.429961 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-25 02:54:48.429969 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 
'virtual': 1}})  2026-03-25 02:54:48.685615 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-25 02:54:48.685724 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-25 02:54:48.685733 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-25 02:54:48.685737 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-25 02:54:48.685752 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 
'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-25 02:54:48.685774 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2a85f599-c628-4cff-bf05-087f83983aef', 'scsi-SQEMU_QEMU_HARDDISK_2a85f599-c628-4cff-bf05-087f83983aef'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2a85f599-c628-4cff-bf05-087f83983aef-part1', 'scsi-SQEMU_QEMU_HARDDISK_2a85f599-c628-4cff-bf05-087f83983aef-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2a85f599-c628-4cff-bf05-087f83983aef-part14', 'scsi-SQEMU_QEMU_HARDDISK_2a85f599-c628-4cff-bf05-087f83983aef-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2a85f599-c628-4cff-bf05-087f83983aef-part15', 'scsi-SQEMU_QEMU_HARDDISK_2a85f599-c628-4cff-bf05-087f83983aef-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2a85f599-c628-4cff-bf05-087f83983aef-part16', 'scsi-SQEMU_QEMU_HARDDISK_2a85f599-c628-4cff-bf05-087f83983aef-part16'], 'labels': 
['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-25 02:54:48.685785 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-25-01-43-05-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-25 02:54:48.685790 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:54:48.685795 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:54:48.685799 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-25 02:54:48.685803 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-25 02:54:48.685810 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-25 02:54:48.685814 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-25 02:54:48.685818 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-25 02:54:48.685822 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-25 02:54:48.685825 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-25 02:54:48.685833 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-25 02:54:48.930824 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_46c5fc1c-53d3-41e9-9a97-2ae0d3d9eeb2', 'scsi-SQEMU_QEMU_HARDDISK_46c5fc1c-53d3-41e9-9a97-2ae0d3d9eeb2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_46c5fc1c-53d3-41e9-9a97-2ae0d3d9eeb2-part1', 'scsi-SQEMU_QEMU_HARDDISK_46c5fc1c-53d3-41e9-9a97-2ae0d3d9eeb2-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_46c5fc1c-53d3-41e9-9a97-2ae0d3d9eeb2-part14', 'scsi-SQEMU_QEMU_HARDDISK_46c5fc1c-53d3-41e9-9a97-2ae0d3d9eeb2-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_46c5fc1c-53d3-41e9-9a97-2ae0d3d9eeb2-part15', 'scsi-SQEMU_QEMU_HARDDISK_46c5fc1c-53d3-41e9-9a97-2ae0d3d9eeb2-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_46c5fc1c-53d3-41e9-9a97-2ae0d3d9eeb2-part16', 'scsi-SQEMU_QEMU_HARDDISK_46c5fc1c-53d3-41e9-9a97-2ae0d3d9eeb2-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-25 02:54:48.930901 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-25-01-43-02-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-25 02:54:48.930909 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:54:48.930914 | orchestrator | 2026-03-25 02:54:48.930919 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-03-25 02:54:48.930924 | orchestrator | Wednesday 25 March 2026 02:54:48 +0000 (0:00:01.161) 0:00:28.132 ******* 2026-03-25 02:54:48.930930 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--a7f517e2--016b--5c10--ac21--20c48339115f-osd--block--a7f517e2--016b--5c10--ac21--20c48339115f', 'dm-uuid-LVM-ppL9nqq4Eft0DXjzsCdcW3axPqGhidIo63eFyg4nkEr3IXy7pO0UwAAWeQ8GeZyo'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-25 02:54:48.930967 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 
'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--2eb637af--fcba--56ed--b416--856a8f376a6e-osd--block--2eb637af--fcba--56ed--b416--856a8f376a6e', 'dm-uuid-LVM-I4brnFGe2wqMxfNLTgnFWAlpGdDDIQ6ufudluz5gbOp2W0Ru1BAN3Lof8sluy2g8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-25 02:54:48.930974 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-25 02:54:48.930981 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-25 02:54:48.930993 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': 
True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-25 02:54:48.930998 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-25 02:54:48.931004 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-25 02:54:48.931021 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-25 02:54:48.931032 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-25 02:54:48.964744 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--82366886--ea97--5dba--b5cd--187414e0593f-osd--block--82366886--ea97--5dba--b5cd--187414e0593f', 'dm-uuid-LVM-1B6VDGPSmmjj7HLdTGtTln0UtIEd11ZxX0sqLUd6idXl2rnpkfAOrMye3Xxtdnqp'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-25 02:54:48.964849 | orchestrator | skipping: 
[testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-25 02:54:48.964862 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--fa1f2bca--96f4--5f59--9dac--c3efdd146138-osd--block--fa1f2bca--96f4--5f59--9dac--c3efdd146138', 'dm-uuid-LVM-qi80GQE6Tcg1H1Qaou1HQKIw0Y18K2MMiRtObCOmMljlX3NyraHv57elKkc4U5Oq'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-25 02:54:48.964887 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5418d243-c22a-425d-8a7d-7c43bd549130', 'scsi-SQEMU_QEMU_HARDDISK_5418d243-c22a-425d-8a7d-7c43bd549130'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5418d243-c22a-425d-8a7d-7c43bd549130-part1', 'scsi-SQEMU_QEMU_HARDDISK_5418d243-c22a-425d-8a7d-7c43bd549130-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5418d243-c22a-425d-8a7d-7c43bd549130-part14', 'scsi-SQEMU_QEMU_HARDDISK_5418d243-c22a-425d-8a7d-7c43bd549130-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5418d243-c22a-425d-8a7d-7c43bd549130-part15', 'scsi-SQEMU_QEMU_HARDDISK_5418d243-c22a-425d-8a7d-7c43bd549130-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5418d243-c22a-425d-8a7d-7c43bd549130-part16', 'scsi-SQEMU_QEMU_HARDDISK_5418d243-c22a-425d-8a7d-7c43bd549130-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-03-25 02:54:48.964913 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-25 02:54:48.964925 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--a7f517e2--016b--5c10--ac21--20c48339115f-osd--block--a7f517e2--016b--5c10--ac21--20c48339115f'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-I510NI-gVOy-fVrn-Rpok-wKnF-L9wv-pxblpK', 'scsi-0QEMU_QEMU_HARDDISK_e0cf0e31-edea-4833-ac86-8b3021cd24a1', 'scsi-SQEMU_QEMU_HARDDISK_e0cf0e31-edea-4833-ac86-8b3021cd24a1'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-25 02:54:48.964930 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--2eb637af--fcba--56ed--b416--856a8f376a6e-osd--block--2eb637af--fcba--56ed--b416--856a8f376a6e'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-ot6f5w-cwBB-rMe8-ml4g-P1Wb-D3d5-I1RZ9d', 'scsi-0QEMU_QEMU_HARDDISK_eaa5e6a9-2c24-4b33-854e-103871b2e9c6', 'scsi-SQEMU_QEMU_HARDDISK_eaa5e6a9-2c24-4b33-854e-103871b2e9c6'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-25 02:54:48.964937 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-25 02:54:48.964946 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_99e65ea9-8a8c-4114-a95e-6d6b779e8981', 'scsi-SQEMU_QEMU_HARDDISK_99e65ea9-8a8c-4114-a95e-6d6b779e8981'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-25 02:54:49.354759 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-25 02:54:49.354850 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: 
Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-25-01-42-59-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-25 02:54:49.354858 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-25 02:54:49.354863 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-25 02:54:49.354883 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 
'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-25 02:54:49.354888 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-25 02:54:49.354905 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-25 02:54:49.354917 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage 
controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6cb51c54-ae34-41ee-aa7a-55f1cdeeb529', 'scsi-SQEMU_QEMU_HARDDISK_6cb51c54-ae34-41ee-aa7a-55f1cdeeb529'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6cb51c54-ae34-41ee-aa7a-55f1cdeeb529-part1', 'scsi-SQEMU_QEMU_HARDDISK_6cb51c54-ae34-41ee-aa7a-55f1cdeeb529-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6cb51c54-ae34-41ee-aa7a-55f1cdeeb529-part14', 'scsi-SQEMU_QEMU_HARDDISK_6cb51c54-ae34-41ee-aa7a-55f1cdeeb529-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6cb51c54-ae34-41ee-aa7a-55f1cdeeb529-part15', 'scsi-SQEMU_QEMU_HARDDISK_6cb51c54-ae34-41ee-aa7a-55f1cdeeb529-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6cb51c54-ae34-41ee-aa7a-55f1cdeeb529-part16', 'scsi-SQEMU_QEMU_HARDDISK_6cb51c54-ae34-41ee-aa7a-55f1cdeeb529-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 
'ansible_loop_var': 'item'})  2026-03-25 02:54:49.354928 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--82366886--ea97--5dba--b5cd--187414e0593f-osd--block--82366886--ea97--5dba--b5cd--187414e0593f'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-CIqKvA-lt1d-4qQz-KNts-krwk-yQ0u-1PHslV', 'scsi-0QEMU_QEMU_HARDDISK_10d736b4-dcf8-42aa-aae6-a1381d72468f', 'scsi-SQEMU_QEMU_HARDDISK_10d736b4-dcf8-42aa-aae6-a1381d72468f'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-25 02:54:49.354937 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--fa1f2bca--96f4--5f59--9dac--c3efdd146138-osd--block--fa1f2bca--96f4--5f59--9dac--c3efdd146138'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-d5kG3K-9osj-2aIh-xjKb-72Hm-d5Wn-f2zH7s', 'scsi-0QEMU_QEMU_HARDDISK_37f05188-2a00-44e2-a0b8-7549f9da5347', 'scsi-SQEMU_QEMU_HARDDISK_37f05188-2a00-44e2-a0b8-7549f9da5347'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-25 02:54:49.558901 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3e1f7d9f-c106-4693-b0da-d762a5de4a11', 'scsi-SQEMU_QEMU_HARDDISK_3e1f7d9f-c106-4693-b0da-d762a5de4a11'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-25 02:54:49.558989 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-25-01-43-06-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-25 02:54:49.559019 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f303e98e--56ea--50bc--9e1c--3ccda4672060-osd--block--f303e98e--56ea--50bc--9e1c--3ccda4672060', 'dm-uuid-LVM-UU9fet4LjPs1QLROYR3DS61lWfbcudTJUiFeyHJNagHuqxrmYCAPg3v2ocgFP63X'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-25 02:54:49.559027 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8ec576d5--4336--523a--896e--5358117b2269-osd--block--8ec576d5--4336--523a--896e--5358117b2269', 'dm-uuid-LVM-AjTepPC9YBwKeu38Jf1R7NGMBGxHD64b1bYlOV1jbrUHbIYS3hAMWkKb5QrnOpnI'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-25 02:54:49.559034 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 
'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-25 02:54:49.559058 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-25 02:54:49.559068 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-25 02:54:49.559075 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-25 02:54:49.559088 | orchestrator | skipping: [testbed-node-3] 2026-03-25 02:54:49.559095 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-25 02:54:49.559102 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-25 02:54:49.559108 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-25 02:54:49.559115 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-25 02:54:49.559127 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-25 02:54:49.653009 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 
'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0ceb4511-88da-4cb0-8dd1-61d4a7cc2ad2', 'scsi-SQEMU_QEMU_HARDDISK_0ceb4511-88da-4cb0-8dd1-61d4a7cc2ad2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0ceb4511-88da-4cb0-8dd1-61d4a7cc2ad2-part1', 'scsi-SQEMU_QEMU_HARDDISK_0ceb4511-88da-4cb0-8dd1-61d4a7cc2ad2-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0ceb4511-88da-4cb0-8dd1-61d4a7cc2ad2-part14', 'scsi-SQEMU_QEMU_HARDDISK_0ceb4511-88da-4cb0-8dd1-61d4a7cc2ad2-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0ceb4511-88da-4cb0-8dd1-61d4a7cc2ad2-part15', 'scsi-SQEMU_QEMU_HARDDISK_0ceb4511-88da-4cb0-8dd1-61d4a7cc2ad2-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0ceb4511-88da-4cb0-8dd1-61d4a7cc2ad2-part16', 'scsi-SQEMU_QEMU_HARDDISK_0ceb4511-88da-4cb0-8dd1-61d4a7cc2ad2-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 
'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-25 02:54:49.653122 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-25 02:54:49.653135 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-25 02:54:49.653160 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--f303e98e--56ea--50bc--9e1c--3ccda4672060-osd--block--f303e98e--56ea--50bc--9e1c--3ccda4672060'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-to62r3-CyRH-TR4y-N8rR-DKBC-8SUV-NrvEkE', 'scsi-0QEMU_QEMU_HARDDISK_04cbe055-706b-4644-9107-d77d79be5a29', 'scsi-SQEMU_QEMU_HARDDISK_04cbe055-706b-4644-9107-d77d79be5a29'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-25 02:54:49.653169 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-25 02:54:49.653189 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-25 02:54:49.653197 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 
'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--8ec576d5--4336--523a--896e--5358117b2269-osd--block--8ec576d5--4336--523a--896e--5358117b2269'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-FUT1Bq-riIG-e3wV-m2Zc-DHH8-HB53-ximoP3', 'scsi-0QEMU_QEMU_HARDDISK_fd5367dc-993e-4d7d-b2a6-757e2a17e9b7', 'scsi-SQEMU_QEMU_HARDDISK_fd5367dc-993e-4d7d-b2a6-757e2a17e9b7'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-25 02:54:49.653236 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-25 02:54:49.653244 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-25 02:54:49.653261 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_82545a3e-e213-461e-98f1-90cf18f03519', 'scsi-SQEMU_QEMU_HARDDISK_82545a3e-e213-461e-98f1-90cf18f03519'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-25 02:54:49.837137 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-25 02:54:49.837217 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel 
Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-25-01-43-03-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-25 02:54:49.837244 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_225bc811-b117-4ab1-9890-e393d3b780be', 'scsi-SQEMU_QEMU_HARDDISK_225bc811-b117-4ab1-9890-e393d3b780be'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_225bc811-b117-4ab1-9890-e393d3b780be-part1', 'scsi-SQEMU_QEMU_HARDDISK_225bc811-b117-4ab1-9890-e393d3b780be-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_225bc811-b117-4ab1-9890-e393d3b780be-part14', 'scsi-SQEMU_QEMU_HARDDISK_225bc811-b117-4ab1-9890-e393d3b780be-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_225bc811-b117-4ab1-9890-e393d3b780be-part15', 'scsi-SQEMU_QEMU_HARDDISK_225bc811-b117-4ab1-9890-e393d3b780be-part15'], 'labels': 
['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_225bc811-b117-4ab1-9890-e393d3b780be-part16', 'scsi-SQEMU_QEMU_HARDDISK_225bc811-b117-4ab1-9890-e393d3b780be-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-25 02:54:49.837288 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-25-01-43-00-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-25 02:54:49.837315 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:54:49.837322 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': 
{'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-25 02:54:49.837329 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-25 02:54:49.837335 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-25 02:54:49.837340 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 
'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-25 02:54:49.837346 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-25 02:54:49.837356 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-25 02:54:49.837371 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 
'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-25 02:54:50.083307 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-25 02:54:50.083443 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2a85f599-c628-4cff-bf05-087f83983aef', 'scsi-SQEMU_QEMU_HARDDISK_2a85f599-c628-4cff-bf05-087f83983aef'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2a85f599-c628-4cff-bf05-087f83983aef-part1', 'scsi-SQEMU_QEMU_HARDDISK_2a85f599-c628-4cff-bf05-087f83983aef-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2a85f599-c628-4cff-bf05-087f83983aef-part14', 'scsi-SQEMU_QEMU_HARDDISK_2a85f599-c628-4cff-bf05-087f83983aef-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2a85f599-c628-4cff-bf05-087f83983aef-part15', 'scsi-SQEMU_QEMU_HARDDISK_2a85f599-c628-4cff-bf05-087f83983aef-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2a85f599-c628-4cff-bf05-087f83983aef-part16', 'scsi-SQEMU_QEMU_HARDDISK_2a85f599-c628-4cff-bf05-087f83983aef-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-03-25 02:54:50.083563 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-25-01-43-05-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-25 02:54:50.083617 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:54:50.083631 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:54:50.083641 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:54:50.083673 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-25 02:54:50.083685 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-25 02:54:50.083696 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-25 02:54:50.083706 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-25 02:54:50.083716 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 
'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-25 02:54:50.083738 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-25 02:54:50.083748 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-25 02:54:50.083772 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 
'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-25 02:54:58.497797 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_46c5fc1c-53d3-41e9-9a97-2ae0d3d9eeb2', 'scsi-SQEMU_QEMU_HARDDISK_46c5fc1c-53d3-41e9-9a97-2ae0d3d9eeb2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_46c5fc1c-53d3-41e9-9a97-2ae0d3d9eeb2-part1', 'scsi-SQEMU_QEMU_HARDDISK_46c5fc1c-53d3-41e9-9a97-2ae0d3d9eeb2-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_46c5fc1c-53d3-41e9-9a97-2ae0d3d9eeb2-part14', 'scsi-SQEMU_QEMU_HARDDISK_46c5fc1c-53d3-41e9-9a97-2ae0d3d9eeb2-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_46c5fc1c-53d3-41e9-9a97-2ae0d3d9eeb2-part15', 'scsi-SQEMU_QEMU_HARDDISK_46c5fc1c-53d3-41e9-9a97-2ae0d3d9eeb2-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_46c5fc1c-53d3-41e9-9a97-2ae0d3d9eeb2-part16', 'scsi-SQEMU_QEMU_HARDDISK_46c5fc1c-53d3-41e9-9a97-2ae0d3d9eeb2-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-25 02:54:58.497968 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-25-01-43-02-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-25 02:54:58.497992 | orchestrator | skipping: [testbed-node-2]
2026-03-25 02:54:58.498004 | orchestrator |
2026-03-25 02:54:58.498090 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-03-25 02:54:58.498107 | orchestrator | Wednesday 25 March 2026 02:54:50 +0000 (0:00:01.397) 0:00:29.529 *******
2026-03-25 02:54:58.498118 | orchestrator | ok: [testbed-node-3]
2026-03-25 02:54:58.498130 | orchestrator | ok: [testbed-node-4]
2026-03-25 02:54:58.498141 | orchestrator | ok: [testbed-node-5]
2026-03-25 02:54:58.498153 | orchestrator | ok: [testbed-node-0]
2026-03-25 02:54:58.498164 | orchestrator | ok: [testbed-node-1]
2026-03-25 02:54:58.498175 | orchestrator | ok: [testbed-node-2]
2026-03-25 02:54:58.498185 | orchestrator |
2026-03-25 02:54:58.498197 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-03-25 02:54:58.498208 | orchestrator | Wednesday 25 March 2026 02:54:51 +0000 (0:00:01.053) 0:00:30.582 *******
2026-03-25 02:54:58.498220 | orchestrator | ok: [testbed-node-3]
2026-03-25 02:54:58.498230 | orchestrator | ok: [testbed-node-4]
2026-03-25 02:54:58.498241 | orchestrator | ok: [testbed-node-5]
2026-03-25 02:54:58.498252 | orchestrator | ok: [testbed-node-0]
2026-03-25 02:54:58.498263 | orchestrator | ok: [testbed-node-1]
2026-03-25 02:54:58.498274 | orchestrator | ok: [testbed-node-2]
2026-03-25 02:54:58.498284 | orchestrator |
2026-03-25 02:54:58.498296 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-03-25 02:54:58.498308 | orchestrator | Wednesday 25 March 2026 02:54:52 +0000 (0:00:00.695) 0:00:31.489 *******
2026-03-25 02:54:58.498320 | orchestrator | skipping: [testbed-node-3]
2026-03-25 02:54:58.498332 | orchestrator | skipping: [testbed-node-4]
2026-03-25 02:54:58.498343 | orchestrator | skipping: [testbed-node-5]
2026-03-25 02:54:58.498375 | orchestrator | skipping: [testbed-node-0]
2026-03-25 02:54:58.498386 | orchestrator | skipping: [testbed-node-1]
2026-03-25 02:54:58.498395 | orchestrator | skipping: [testbed-node-2]
2026-03-25 02:54:58.498406 | orchestrator |
2026-03-25 02:54:58.498417 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-03-25 02:54:58.498429 | orchestrator | Wednesday 25 March 2026 02:54:52 +0000 (0:00:00.907) 0:00:32.185 *******
2026-03-25 02:54:58.498440 | orchestrator | skipping: [testbed-node-3]
2026-03-25 02:54:58.498449 | orchestrator | skipping: [testbed-node-4]
2026-03-25 02:54:58.498460 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:54:58.498471 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:54:58.498481 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:54:58.498516 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:54:58.498528 | orchestrator | 2026-03-25 02:54:58.498539 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-25 02:54:58.498550 | orchestrator | Wednesday 25 March 2026 02:54:53 +0000 (0:00:00.945) 0:00:33.130 ******* 2026-03-25 02:54:58.498560 | orchestrator | skipping: [testbed-node-3] 2026-03-25 02:54:58.498570 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:54:58.498581 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:54:58.498608 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:54:58.498619 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:54:58.498631 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:54:58.498642 | orchestrator | 2026-03-25 02:54:58.498654 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-25 02:54:58.498663 | orchestrator | Wednesday 25 March 2026 02:54:54 +0000 (0:00:00.761) 0:00:33.892 ******* 2026-03-25 02:54:58.498672 | orchestrator | skipping: [testbed-node-3] 2026-03-25 02:54:58.498685 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:54:58.498695 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:54:58.498708 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:54:58.498717 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:54:58.498727 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:54:58.498752 | orchestrator | 2026-03-25 02:54:58.498772 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-03-25 02:54:58.498784 | orchestrator | Wednesday 25 March 2026 02:54:55 +0000 (0:00:00.946) 0:00:34.838 ******* 
2026-03-25 02:54:58.498794 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2026-03-25 02:54:58.498806 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2026-03-25 02:54:58.498816 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-03-25 02:54:58.498827 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-03-25 02:54:58.498838 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2026-03-25 02:54:58.498849 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2026-03-25 02:54:58.498860 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-03-25 02:54:58.498870 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-25 02:54:58.498881 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0) 2026-03-25 02:54:58.498892 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-03-25 02:54:58.498903 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-03-25 02:54:58.498914 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-03-25 02:54:58.498925 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-03-25 02:54:58.498936 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-03-25 02:54:58.498946 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0) 2026-03-25 02:54:58.498957 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2) 2026-03-25 02:54:58.498968 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1) 2026-03-25 02:54:58.498989 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-03-25 02:54:58.499000 | orchestrator | 2026-03-25 02:54:58.499012 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-03-25 02:54:58.499023 | orchestrator | Wednesday 25 March 2026 02:54:57 +0000 (0:00:02.035) 0:00:36.874 ******* 2026-03-25 02:54:58.499035 | orchestrator | skipping: [testbed-node-3] => 
(item=testbed-node-0)  2026-03-25 02:54:58.499046 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-03-25 02:54:58.499056 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-03-25 02:54:58.499068 | orchestrator | skipping: [testbed-node-3] 2026-03-25 02:54:58.499079 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-03-25 02:54:58.499090 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-03-25 02:54:58.499099 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-03-25 02:54:58.499109 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:54:58.499120 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-03-25 02:54:58.499131 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-03-25 02:54:58.499142 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-03-25 02:54:58.499153 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:54:58.499165 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-25 02:54:58.499175 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-25 02:54:58.499196 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-25 02:54:58.499208 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:54:58.499219 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-03-25 02:54:58.499229 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-03-25 02:54:58.499239 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-03-25 02:54:58.499250 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:54:58.499262 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-03-25 02:54:58.499273 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-03-25 02:54:58.499284 | orchestrator | skipping: 
[testbed-node-2] => (item=testbed-node-2)  2026-03-25 02:54:58.499294 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:54:58.499305 | orchestrator | 2026-03-25 02:54:58.499316 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-03-25 02:54:58.499340 | orchestrator | Wednesday 25 March 2026 02:54:58 +0000 (0:00:01.071) 0:00:37.945 ******* 2026-03-25 02:55:19.650101 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:55:19.650180 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:55:19.650185 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:55:19.650190 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-25 02:55:19.650195 | orchestrator | 2026-03-25 02:55:19.650200 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-25 02:55:19.650206 | orchestrator | Wednesday 25 March 2026 02:54:59 +0000 (0:00:01.210) 0:00:39.155 ******* 2026-03-25 02:55:19.650211 | orchestrator | skipping: [testbed-node-3] 2026-03-25 02:55:19.650215 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:55:19.650219 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:55:19.650223 | orchestrator | 2026-03-25 02:55:19.650227 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-03-25 02:55:19.650231 | orchestrator | Wednesday 25 March 2026 02:55:00 +0000 (0:00:00.399) 0:00:39.555 ******* 2026-03-25 02:55:19.650235 | orchestrator | skipping: [testbed-node-3] 2026-03-25 02:55:19.650239 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:55:19.650243 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:55:19.650262 | orchestrator | 2026-03-25 02:55:19.650267 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 
2026-03-25 02:55:19.650273 | orchestrator | Wednesday 25 March 2026 02:55:00 +0000 (0:00:00.382) 0:00:39.938 ******* 2026-03-25 02:55:19.650279 | orchestrator | skipping: [testbed-node-3] 2026-03-25 02:55:19.650285 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:55:19.650291 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:55:19.650297 | orchestrator | 2026-03-25 02:55:19.650303 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-25 02:55:19.650308 | orchestrator | Wednesday 25 March 2026 02:55:00 +0000 (0:00:00.359) 0:00:40.298 ******* 2026-03-25 02:55:19.650314 | orchestrator | ok: [testbed-node-3] 2026-03-25 02:55:19.650321 | orchestrator | ok: [testbed-node-4] 2026-03-25 02:55:19.650327 | orchestrator | ok: [testbed-node-5] 2026-03-25 02:55:19.650330 | orchestrator | 2026-03-25 02:55:19.650334 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-25 02:55:19.650338 | orchestrator | Wednesday 25 March 2026 02:55:01 +0000 (0:00:00.785) 0:00:41.083 ******* 2026-03-25 02:55:19.650343 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-25 02:55:19.650347 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-25 02:55:19.650351 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-25 02:55:19.650355 | orchestrator | skipping: [testbed-node-3] 2026-03-25 02:55:19.650359 | orchestrator | 2026-03-25 02:55:19.650363 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-25 02:55:19.650386 | orchestrator | Wednesday 25 March 2026 02:55:02 +0000 (0:00:00.425) 0:00:41.509 ******* 2026-03-25 02:55:19.650390 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-25 02:55:19.650394 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-25 02:55:19.650398 | orchestrator | skipping: 
[testbed-node-3] => (item=testbed-node-5)  2026-03-25 02:55:19.650401 | orchestrator | skipping: [testbed-node-3] 2026-03-25 02:55:19.650405 | orchestrator | 2026-03-25 02:55:19.650409 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-25 02:55:19.650413 | orchestrator | Wednesday 25 March 2026 02:55:02 +0000 (0:00:00.443) 0:00:41.952 ******* 2026-03-25 02:55:19.650426 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-25 02:55:19.650432 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-25 02:55:19.650439 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-25 02:55:19.650444 | orchestrator | skipping: [testbed-node-3] 2026-03-25 02:55:19.650450 | orchestrator | 2026-03-25 02:55:19.650456 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-25 02:55:19.650461 | orchestrator | Wednesday 25 March 2026 02:55:02 +0000 (0:00:00.453) 0:00:42.405 ******* 2026-03-25 02:55:19.650467 | orchestrator | ok: [testbed-node-3] 2026-03-25 02:55:19.650473 | orchestrator | ok: [testbed-node-4] 2026-03-25 02:55:19.650478 | orchestrator | ok: [testbed-node-5] 2026-03-25 02:55:19.650484 | orchestrator | 2026-03-25 02:55:19.650489 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-25 02:55:19.650495 | orchestrator | Wednesday 25 March 2026 02:55:03 +0000 (0:00:00.397) 0:00:42.802 ******* 2026-03-25 02:55:19.650501 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-03-25 02:55:19.650554 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-03-25 02:55:19.650562 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-03-25 02:55:19.650568 | orchestrator | 2026-03-25 02:55:19.650574 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-03-25 02:55:19.650581 | orchestrator | Wednesday 25 March 2026 
02:55:04 +0000 (0:00:01.174) 0:00:43.977 ******* 2026-03-25 02:55:19.650587 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-25 02:55:19.650595 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-25 02:55:19.650601 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-25 02:55:19.650608 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-03-25 02:55:19.650615 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-25 02:55:19.650621 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-25 02:55:19.650628 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-25 02:55:19.650634 | orchestrator | 2026-03-25 02:55:19.650641 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-03-25 02:55:19.650647 | orchestrator | Wednesday 25 March 2026 02:55:05 +0000 (0:00:00.992) 0:00:44.970 ******* 2026-03-25 02:55:19.650670 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-25 02:55:19.650676 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-25 02:55:19.650683 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-25 02:55:19.650689 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-03-25 02:55:19.650696 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-25 02:55:19.650703 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-25 02:55:19.650719 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 
2026-03-25 02:55:19.650725 | orchestrator | 2026-03-25 02:55:19.650732 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-25 02:55:19.650746 | orchestrator | Wednesday 25 March 2026 02:55:07 +0000 (0:00:02.360) 0:00:47.330 ******* 2026-03-25 02:55:19.650755 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-25 02:55:19.650764 | orchestrator | 2026-03-25 02:55:19.650771 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-25 02:55:19.650778 | orchestrator | Wednesday 25 March 2026 02:55:09 +0000 (0:00:01.504) 0:00:48.835 ******* 2026-03-25 02:55:19.650785 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-25 02:55:19.650791 | orchestrator | 2026-03-25 02:55:19.650798 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-25 02:55:19.650805 | orchestrator | Wednesday 25 March 2026 02:55:10 +0000 (0:00:01.500) 0:00:50.335 ******* 2026-03-25 02:55:19.650812 | orchestrator | skipping: [testbed-node-3] 2026-03-25 02:55:19.650818 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:55:19.650824 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:55:19.650830 | orchestrator | ok: [testbed-node-0] 2026-03-25 02:55:19.650835 | orchestrator | ok: [testbed-node-1] 2026-03-25 02:55:19.650841 | orchestrator | ok: [testbed-node-2] 2026-03-25 02:55:19.650847 | orchestrator | 2026-03-25 02:55:19.650854 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-25 02:55:19.650860 | orchestrator | Wednesday 25 March 2026 02:55:12 +0000 (0:00:01.350) 0:00:51.686 ******* 2026-03-25 
02:55:19.650866 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:55:19.650873 | orchestrator | ok: [testbed-node-3] 2026-03-25 02:55:19.650879 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:55:19.650885 | orchestrator | ok: [testbed-node-4] 2026-03-25 02:55:19.650891 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:55:19.650897 | orchestrator | ok: [testbed-node-5] 2026-03-25 02:55:19.650903 | orchestrator | 2026-03-25 02:55:19.650911 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-25 02:55:19.650915 | orchestrator | Wednesday 25 March 2026 02:55:12 +0000 (0:00:00.748) 0:00:52.434 ******* 2026-03-25 02:55:19.650919 | orchestrator | ok: [testbed-node-3] 2026-03-25 02:55:19.650923 | orchestrator | ok: [testbed-node-4] 2026-03-25 02:55:19.650927 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:55:19.650930 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:55:19.650934 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:55:19.650938 | orchestrator | ok: [testbed-node-5] 2026-03-25 02:55:19.650946 | orchestrator | 2026-03-25 02:55:19.650950 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-25 02:55:19.650954 | orchestrator | Wednesday 25 March 2026 02:55:14 +0000 (0:00:01.752) 0:00:54.187 ******* 2026-03-25 02:55:19.650958 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:55:19.650961 | orchestrator | ok: [testbed-node-3] 2026-03-25 02:55:19.650965 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:55:19.650969 | orchestrator | ok: [testbed-node-4] 2026-03-25 02:55:19.650973 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:55:19.650976 | orchestrator | ok: [testbed-node-5] 2026-03-25 02:55:19.650980 | orchestrator | 2026-03-25 02:55:19.650984 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-25 02:55:19.650987 | orchestrator | 
Wednesday 25 March 2026 02:55:15 +0000 (0:00:00.741) 0:00:54.928 ******* 2026-03-25 02:55:19.650991 | orchestrator | skipping: [testbed-node-3] 2026-03-25 02:55:19.650995 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:55:19.650998 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:55:19.651002 | orchestrator | ok: [testbed-node-0] 2026-03-25 02:55:19.651006 | orchestrator | ok: [testbed-node-1] 2026-03-25 02:55:19.651010 | orchestrator | ok: [testbed-node-2] 2026-03-25 02:55:19.651013 | orchestrator | 2026-03-25 02:55:19.651017 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-25 02:55:19.651025 | orchestrator | Wednesday 25 March 2026 02:55:16 +0000 (0:00:01.361) 0:00:56.290 ******* 2026-03-25 02:55:19.651028 | orchestrator | skipping: [testbed-node-3] 2026-03-25 02:55:19.651032 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:55:19.651036 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:55:19.651039 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:55:19.651043 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:55:19.651047 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:55:19.651051 | orchestrator | 2026-03-25 02:55:19.651054 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-25 02:55:19.651058 | orchestrator | Wednesday 25 March 2026 02:55:17 +0000 (0:00:00.702) 0:00:56.993 ******* 2026-03-25 02:55:19.651062 | orchestrator | skipping: [testbed-node-3] 2026-03-25 02:55:19.651065 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:55:19.651069 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:55:19.651073 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:55:19.651076 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:55:19.651080 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:55:19.651084 | orchestrator | 2026-03-25 02:55:19.651088 | orchestrator | TASK 
[ceph-handler : Check for a ceph-crash container] ************************* 2026-03-25 02:55:19.651091 | orchestrator | Wednesday 25 March 2026 02:55:18 +0000 (0:00:00.952) 0:00:57.946 ******* 2026-03-25 02:55:19.651095 | orchestrator | ok: [testbed-node-3] 2026-03-25 02:55:19.651103 | orchestrator | ok: [testbed-node-4] 2026-03-25 02:55:40.620038 | orchestrator | ok: [testbed-node-5] 2026-03-25 02:55:40.620153 | orchestrator | ok: [testbed-node-0] 2026-03-25 02:55:40.620169 | orchestrator | ok: [testbed-node-1] 2026-03-25 02:55:40.620182 | orchestrator | ok: [testbed-node-2] 2026-03-25 02:55:40.620194 | orchestrator | 2026-03-25 02:55:40.620207 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-25 02:55:40.620219 | orchestrator | Wednesday 25 March 2026 02:55:19 +0000 (0:00:01.149) 0:00:59.095 ******* 2026-03-25 02:55:40.620230 | orchestrator | ok: [testbed-node-3] 2026-03-25 02:55:40.620241 | orchestrator | ok: [testbed-node-4] 2026-03-25 02:55:40.620252 | orchestrator | ok: [testbed-node-5] 2026-03-25 02:55:40.620262 | orchestrator | ok: [testbed-node-0] 2026-03-25 02:55:40.620273 | orchestrator | ok: [testbed-node-1] 2026-03-25 02:55:40.620283 | orchestrator | ok: [testbed-node-2] 2026-03-25 02:55:40.620294 | orchestrator | 2026-03-25 02:55:40.620304 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-25 02:55:40.620315 | orchestrator | Wednesday 25 March 2026 02:55:21 +0000 (0:00:01.499) 0:01:00.594 ******* 2026-03-25 02:55:40.620326 | orchestrator | skipping: [testbed-node-3] 2026-03-25 02:55:40.620338 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:55:40.620348 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:55:40.620360 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:55:40.620371 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:55:40.620381 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:55:40.620392 | 
orchestrator | 2026-03-25 02:55:40.620402 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-25 02:55:40.620413 | orchestrator | Wednesday 25 March 2026 02:55:21 +0000 (0:00:00.693) 0:01:01.288 ******* 2026-03-25 02:55:40.620424 | orchestrator | skipping: [testbed-node-3] 2026-03-25 02:55:40.620434 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:55:40.620445 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:55:40.620455 | orchestrator | ok: [testbed-node-0] 2026-03-25 02:55:40.620466 | orchestrator | ok: [testbed-node-1] 2026-03-25 02:55:40.620477 | orchestrator | ok: [testbed-node-2] 2026-03-25 02:55:40.620487 | orchestrator | 2026-03-25 02:55:40.620498 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-25 02:55:40.620509 | orchestrator | Wednesday 25 March 2026 02:55:22 +0000 (0:00:00.951) 0:01:02.240 ******* 2026-03-25 02:55:40.620520 | orchestrator | ok: [testbed-node-3] 2026-03-25 02:55:40.620630 | orchestrator | ok: [testbed-node-4] 2026-03-25 02:55:40.620671 | orchestrator | ok: [testbed-node-5] 2026-03-25 02:55:40.620685 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:55:40.620714 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:55:40.620726 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:55:40.620738 | orchestrator | 2026-03-25 02:55:40.620750 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-25 02:55:40.620775 | orchestrator | Wednesday 25 March 2026 02:55:23 +0000 (0:00:00.658) 0:01:02.898 ******* 2026-03-25 02:55:40.620799 | orchestrator | ok: [testbed-node-3] 2026-03-25 02:55:40.620811 | orchestrator | ok: [testbed-node-4] 2026-03-25 02:55:40.620823 | orchestrator | ok: [testbed-node-5] 2026-03-25 02:55:40.620835 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:55:40.620847 | orchestrator | skipping: [testbed-node-1] 2026-03-25 
02:55:40.620859 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:55:40.620871 | orchestrator | 2026-03-25 02:55:40.620884 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-25 02:55:40.620896 | orchestrator | Wednesday 25 March 2026 02:55:24 +0000 (0:00:00.987) 0:01:03.885 ******* 2026-03-25 02:55:40.620908 | orchestrator | ok: [testbed-node-3] 2026-03-25 02:55:40.620921 | orchestrator | ok: [testbed-node-4] 2026-03-25 02:55:40.620932 | orchestrator | ok: [testbed-node-5] 2026-03-25 02:55:40.620942 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:55:40.620953 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:55:40.620978 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:55:40.620990 | orchestrator | 2026-03-25 02:55:40.621001 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-25 02:55:40.621012 | orchestrator | Wednesday 25 March 2026 02:55:25 +0000 (0:00:00.663) 0:01:04.549 ******* 2026-03-25 02:55:40.621022 | orchestrator | skipping: [testbed-node-3] 2026-03-25 02:55:40.621033 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:55:40.621044 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:55:40.621054 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:55:40.621065 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:55:40.621076 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:55:40.621086 | orchestrator | 2026-03-25 02:55:40.621097 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-25 02:55:40.621108 | orchestrator | Wednesday 25 March 2026 02:55:26 +0000 (0:00:00.982) 0:01:05.531 ******* 2026-03-25 02:55:40.621119 | orchestrator | skipping: [testbed-node-3] 2026-03-25 02:55:40.621130 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:55:40.621141 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:55:40.621151 | 
orchestrator | skipping: [testbed-node-0] 2026-03-25 02:55:40.621162 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:55:40.621172 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:55:40.621183 | orchestrator | 2026-03-25 02:55:40.621194 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-25 02:55:40.621205 | orchestrator | Wednesday 25 March 2026 02:55:26 +0000 (0:00:00.687) 0:01:06.219 ******* 2026-03-25 02:55:40.621215 | orchestrator | skipping: [testbed-node-3] 2026-03-25 02:55:40.621226 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:55:40.621237 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:55:40.621247 | orchestrator | ok: [testbed-node-0] 2026-03-25 02:55:40.621258 | orchestrator | ok: [testbed-node-1] 2026-03-25 02:55:40.621269 | orchestrator | ok: [testbed-node-2] 2026-03-25 02:55:40.621279 | orchestrator | 2026-03-25 02:55:40.621290 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-25 02:55:40.621301 | orchestrator | Wednesday 25 March 2026 02:55:27 +0000 (0:00:00.934) 0:01:07.154 ******* 2026-03-25 02:55:40.621312 | orchestrator | ok: [testbed-node-3] 2026-03-25 02:55:40.621322 | orchestrator | ok: [testbed-node-4] 2026-03-25 02:55:40.621333 | orchestrator | ok: [testbed-node-5] 2026-03-25 02:55:40.621344 | orchestrator | ok: [testbed-node-0] 2026-03-25 02:55:40.621354 | orchestrator | ok: [testbed-node-1] 2026-03-25 02:55:40.621365 | orchestrator | ok: [testbed-node-2] 2026-03-25 02:55:40.621385 | orchestrator | 2026-03-25 02:55:40.621396 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-25 02:55:40.621407 | orchestrator | Wednesday 25 March 2026 02:55:28 +0000 (0:00:00.708) 0:01:07.862 ******* 2026-03-25 02:55:40.621418 | orchestrator | ok: [testbed-node-3] 2026-03-25 02:55:40.621450 | orchestrator | ok: [testbed-node-4] 2026-03-25 02:55:40.621462 | 
orchestrator | ok: [testbed-node-5] 2026-03-25 02:55:40.621473 | orchestrator | ok: [testbed-node-0] 2026-03-25 02:55:40.621483 | orchestrator | ok: [testbed-node-1] 2026-03-25 02:55:40.621494 | orchestrator | ok: [testbed-node-2] 2026-03-25 02:55:40.621505 | orchestrator | 2026-03-25 02:55:40.621516 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-03-25 02:55:40.621557 | orchestrator | Wednesday 25 March 2026 02:55:29 +0000 (0:00:01.475) 0:01:09.338 ******* 2026-03-25 02:55:40.621569 | orchestrator | changed: [testbed-node-3] 2026-03-25 02:55:40.621580 | orchestrator | changed: [testbed-node-4] 2026-03-25 02:55:40.621591 | orchestrator | changed: [testbed-node-5] 2026-03-25 02:55:40.621601 | orchestrator | changed: [testbed-node-0] 2026-03-25 02:55:40.621612 | orchestrator | changed: [testbed-node-1] 2026-03-25 02:55:40.621623 | orchestrator | changed: [testbed-node-2] 2026-03-25 02:55:40.621633 | orchestrator | 2026-03-25 02:55:40.621644 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-03-25 02:55:40.621655 | orchestrator | Wednesday 25 March 2026 02:55:31 +0000 (0:00:01.828) 0:01:11.167 ******* 2026-03-25 02:55:40.621666 | orchestrator | changed: [testbed-node-5] 2026-03-25 02:55:40.621676 | orchestrator | changed: [testbed-node-3] 2026-03-25 02:55:40.621687 | orchestrator | changed: [testbed-node-0] 2026-03-25 02:55:40.621698 | orchestrator | changed: [testbed-node-1] 2026-03-25 02:55:40.621708 | orchestrator | changed: [testbed-node-4] 2026-03-25 02:55:40.621719 | orchestrator | changed: [testbed-node-2] 2026-03-25 02:55:40.621730 | orchestrator | 2026-03-25 02:55:40.621740 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-03-25 02:55:40.621751 | orchestrator | Wednesday 25 March 2026 02:55:33 +0000 (0:00:02.149) 0:01:13.316 ******* 2026-03-25 02:55:40.621763 | orchestrator | included: 
/ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-25 02:55:40.621776 | orchestrator | 2026-03-25 02:55:40.621787 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-03-25 02:55:40.621798 | orchestrator | Wednesday 25 March 2026 02:55:35 +0000 (0:00:01.699) 0:01:15.016 ******* 2026-03-25 02:55:40.621808 | orchestrator | skipping: [testbed-node-3] 2026-03-25 02:55:40.621819 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:55:40.621830 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:55:40.621840 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:55:40.621851 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:55:40.621862 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:55:40.621872 | orchestrator | 2026-03-25 02:55:40.621883 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-03-25 02:55:40.621894 | orchestrator | Wednesday 25 March 2026 02:55:36 +0000 (0:00:00.694) 0:01:15.711 ******* 2026-03-25 02:55:40.621905 | orchestrator | skipping: [testbed-node-3] 2026-03-25 02:55:40.621916 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:55:40.621926 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:55:40.621937 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:55:40.621947 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:55:40.621958 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:55:40.621969 | orchestrator | 2026-03-25 02:55:40.621980 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-03-25 02:55:40.621991 | orchestrator | Wednesday 25 March 2026 02:55:37 +0000 (0:00:00.930) 0:01:16.641 ******* 2026-03-25 02:55:40.622001 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-03-25 
02:55:40.622128 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-03-25 02:55:40.622160 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-03-25 02:55:40.622171 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-03-25 02:55:40.622182 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-03-25 02:55:40.622193 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-03-25 02:55:40.622205 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-03-25 02:55:40.622215 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-03-25 02:55:40.622226 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-03-25 02:55:40.622237 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-03-25 02:55:40.622248 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-03-25 02:55:40.622259 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-03-25 02:55:40.622270 | orchestrator | 2026-03-25 02:55:40.622280 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-03-25 02:55:40.622291 | orchestrator | Wednesday 25 March 2026 02:55:38 +0000 (0:00:01.411) 0:01:18.053 ******* 2026-03-25 02:55:40.622302 | orchestrator | changed: [testbed-node-3] 2026-03-25 02:55:40.622313 | orchestrator | changed: [testbed-node-4] 2026-03-25 02:55:40.622324 | orchestrator | changed: [testbed-node-5] 2026-03-25 02:55:40.622334 | orchestrator | changed: [testbed-node-0] 2026-03-25 02:55:40.622345 | orchestrator | changed: [testbed-node-1] 2026-03-25 02:55:40.622355 | 
orchestrator | changed: [testbed-node-2] 2026-03-25 02:55:40.622366 | orchestrator | 2026-03-25 02:55:40.622377 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-03-25 02:55:40.622387 | orchestrator | Wednesday 25 March 2026 02:55:39 +0000 (0:00:01.276) 0:01:19.330 ******* 2026-03-25 02:55:40.622398 | orchestrator | skipping: [testbed-node-3] 2026-03-25 02:55:40.622409 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:55:40.622419 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:55:40.622430 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:55:40.622441 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:55:40.622451 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:55:40.622462 | orchestrator | 2026-03-25 02:55:40.622484 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-03-25 02:57:03.571723 | orchestrator | Wednesday 25 March 2026 02:55:40 +0000 (0:00:00.731) 0:01:20.061 ******* 2026-03-25 02:57:03.571863 | orchestrator | skipping: [testbed-node-3] 2026-03-25 02:57:03.571891 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:57:03.571908 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:57:03.571925 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:57:03.571940 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:57:03.571956 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:57:03.571972 | orchestrator | 2026-03-25 02:57:03.571990 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-03-25 02:57:03.572009 | orchestrator | Wednesday 25 March 2026 02:55:41 +0000 (0:00:00.975) 0:01:21.037 ******* 2026-03-25 02:57:03.572024 | orchestrator | skipping: [testbed-node-3] 2026-03-25 02:57:03.572041 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:57:03.572052 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:57:03.572061 | 
orchestrator | skipping: [testbed-node-0] 2026-03-25 02:57:03.572071 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:57:03.572080 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:57:03.572090 | orchestrator | 2026-03-25 02:57:03.572100 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-03-25 02:57:03.572110 | orchestrator | Wednesday 25 March 2026 02:55:42 +0000 (0:00:00.686) 0:01:21.724 ******* 2026-03-25 02:57:03.572149 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-25 02:57:03.572162 | orchestrator | 2026-03-25 02:57:03.572172 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-03-25 02:57:03.572182 | orchestrator | Wednesday 25 March 2026 02:55:43 +0000 (0:00:01.422) 0:01:23.146 ******* 2026-03-25 02:57:03.572191 | orchestrator | ok: [testbed-node-3] 2026-03-25 02:57:03.572222 | orchestrator | ok: [testbed-node-0] 2026-03-25 02:57:03.572244 | orchestrator | ok: [testbed-node-4] 2026-03-25 02:57:03.572255 | orchestrator | ok: [testbed-node-2] 2026-03-25 02:57:03.572265 | orchestrator | ok: [testbed-node-5] 2026-03-25 02:57:03.572275 | orchestrator | ok: [testbed-node-1] 2026-03-25 02:57:03.572284 | orchestrator | 2026-03-25 02:57:03.572294 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-03-25 02:57:03.572305 | orchestrator | Wednesday 25 March 2026 02:56:49 +0000 (0:01:05.391) 0:02:28.538 ******* 2026-03-25 02:57:03.572314 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-25 02:57:03.572324 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-25 02:57:03.572333 | orchestrator | skipping: [testbed-node-3] => 
(item=docker.io/grafana/grafana:6.7.4)  2026-03-25 02:57:03.572343 | orchestrator | skipping: [testbed-node-3] 2026-03-25 02:57:03.572352 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-25 02:57:03.572362 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-25 02:57:03.572371 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2026-03-25 02:57:03.572381 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:57:03.572390 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-25 02:57:03.572399 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-25 02:57:03.572425 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)  2026-03-25 02:57:03.572435 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:57:03.572445 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-25 02:57:03.572454 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-25 02:57:03.572463 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)  2026-03-25 02:57:03.572479 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:57:03.572496 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-25 02:57:03.572514 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-25 02:57:03.572533 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)  2026-03-25 02:57:03.572550 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:57:03.572566 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-25 02:57:03.572578 | orchestrator | skipping: [testbed-node-2] => 
(item=docker.io/prom/prometheus:v2.7.2)  2026-03-25 02:57:03.572594 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)  2026-03-25 02:57:03.572692 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:57:03.572710 | orchestrator | 2026-03-25 02:57:03.572727 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-03-25 02:57:03.572742 | orchestrator | Wednesday 25 March 2026 02:56:49 +0000 (0:00:00.785) 0:02:29.323 ******* 2026-03-25 02:57:03.572758 | orchestrator | skipping: [testbed-node-3] 2026-03-25 02:57:03.572774 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:57:03.572790 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:57:03.572806 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:57:03.572822 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:57:03.572853 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:57:03.572869 | orchestrator | 2026-03-25 02:57:03.572885 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-03-25 02:57:03.572895 | orchestrator | Wednesday 25 March 2026 02:56:50 +0000 (0:00:00.959) 0:02:30.282 ******* 2026-03-25 02:57:03.572905 | orchestrator | skipping: [testbed-node-3] 2026-03-25 02:57:03.572914 | orchestrator | 2026-03-25 02:57:03.572924 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-03-25 02:57:03.572933 | orchestrator | Wednesday 25 March 2026 02:56:51 +0000 (0:00:00.177) 0:02:30.460 ******* 2026-03-25 02:57:03.572943 | orchestrator | skipping: [testbed-node-3] 2026-03-25 02:57:03.572977 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:57:03.572987 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:57:03.572997 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:57:03.573007 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:57:03.573016 | orchestrator | skipping: 
[testbed-node-2] 2026-03-25 02:57:03.573026 | orchestrator | 2026-03-25 02:57:03.573035 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-03-25 02:57:03.573044 | orchestrator | Wednesday 25 March 2026 02:56:51 +0000 (0:00:00.807) 0:02:31.267 ******* 2026-03-25 02:57:03.573054 | orchestrator | skipping: [testbed-node-3] 2026-03-25 02:57:03.573063 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:57:03.573073 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:57:03.573082 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:57:03.573092 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:57:03.573101 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:57:03.573110 | orchestrator | 2026-03-25 02:57:03.573120 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-03-25 02:57:03.573130 | orchestrator | Wednesday 25 March 2026 02:56:52 +0000 (0:00:00.962) 0:02:32.230 ******* 2026-03-25 02:57:03.573139 | orchestrator | skipping: [testbed-node-3] 2026-03-25 02:57:03.573149 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:57:03.573158 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:57:03.573168 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:57:03.573177 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:57:03.573186 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:57:03.573196 | orchestrator | 2026-03-25 02:57:03.573206 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-03-25 02:57:03.573215 | orchestrator | Wednesday 25 March 2026 02:56:53 +0000 (0:00:00.711) 0:02:32.942 ******* 2026-03-25 02:57:03.573224 | orchestrator | ok: [testbed-node-3] 2026-03-25 02:57:03.573234 | orchestrator | ok: [testbed-node-4] 2026-03-25 02:57:03.573244 | orchestrator | ok: [testbed-node-5] 2026-03-25 02:57:03.573253 | orchestrator | ok: [testbed-node-1] 2026-03-25 
02:57:03.573263 | orchestrator | ok: [testbed-node-0] 2026-03-25 02:57:03.573272 | orchestrator | ok: [testbed-node-2] 2026-03-25 02:57:03.573281 | orchestrator | 2026-03-25 02:57:03.573291 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-03-25 02:57:03.573301 | orchestrator | Wednesday 25 March 2026 02:56:56 +0000 (0:00:03.432) 0:02:36.374 ******* 2026-03-25 02:57:03.573310 | orchestrator | ok: [testbed-node-3] 2026-03-25 02:57:03.573320 | orchestrator | ok: [testbed-node-4] 2026-03-25 02:57:03.573329 | orchestrator | ok: [testbed-node-5] 2026-03-25 02:57:03.573339 | orchestrator | ok: [testbed-node-0] 2026-03-25 02:57:03.573348 | orchestrator | ok: [testbed-node-1] 2026-03-25 02:57:03.573358 | orchestrator | ok: [testbed-node-2] 2026-03-25 02:57:03.573367 | orchestrator | 2026-03-25 02:57:03.573388 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-03-25 02:57:03.573398 | orchestrator | Wednesday 25 March 2026 02:56:57 +0000 (0:00:00.695) 0:02:37.070 ******* 2026-03-25 02:57:03.573409 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-25 02:57:03.573421 | orchestrator | 2026-03-25 02:57:03.573430 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-03-25 02:57:03.573447 | orchestrator | Wednesday 25 March 2026 02:56:59 +0000 (0:00:01.548) 0:02:38.619 ******* 2026-03-25 02:57:03.573456 | orchestrator | skipping: [testbed-node-3] 2026-03-25 02:57:03.573466 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:57:03.573475 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:57:03.573485 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:57:03.573503 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:57:03.573512 | orchestrator | skipping: 
[testbed-node-2] 2026-03-25 02:57:03.573521 | orchestrator | 2026-03-25 02:57:03.573531 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-03-25 02:57:03.573541 | orchestrator | Wednesday 25 March 2026 02:57:00 +0000 (0:00:00.939) 0:02:39.559 ******* 2026-03-25 02:57:03.573550 | orchestrator | skipping: [testbed-node-3] 2026-03-25 02:57:03.573559 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:57:03.573568 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:57:03.573577 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:57:03.573590 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:57:03.573630 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:57:03.573648 | orchestrator | 2026-03-25 02:57:03.573665 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-03-25 02:57:03.573681 | orchestrator | Wednesday 25 March 2026 02:57:00 +0000 (0:00:00.725) 0:02:40.285 ******* 2026-03-25 02:57:03.573697 | orchestrator | skipping: [testbed-node-3] 2026-03-25 02:57:03.573707 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:57:03.573717 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:57:03.573726 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:57:03.573735 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:57:03.573744 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:57:03.573754 | orchestrator | 2026-03-25 02:57:03.573763 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-03-25 02:57:03.573773 | orchestrator | Wednesday 25 March 2026 02:57:01 +0000 (0:00:00.963) 0:02:41.248 ******* 2026-03-25 02:57:03.573782 | orchestrator | skipping: [testbed-node-3] 2026-03-25 02:57:03.573792 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:57:03.573801 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:57:03.573810 | orchestrator | skipping: 
[testbed-node-0] 2026-03-25 02:57:03.573820 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:57:03.573829 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:57:03.573838 | orchestrator | 2026-03-25 02:57:03.573848 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-03-25 02:57:03.573857 | orchestrator | Wednesday 25 March 2026 02:57:02 +0000 (0:00:00.726) 0:02:41.975 ******* 2026-03-25 02:57:03.573866 | orchestrator | skipping: [testbed-node-3] 2026-03-25 02:57:03.573876 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:57:03.573885 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:57:03.573894 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:57:03.573904 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:57:03.573913 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:57:03.573922 | orchestrator | 2026-03-25 02:57:03.573932 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-03-25 02:57:03.573949 | orchestrator | Wednesday 25 March 2026 02:57:03 +0000 (0:00:01.039) 0:02:43.015 ******* 2026-03-25 02:57:15.428422 | orchestrator | skipping: [testbed-node-3] 2026-03-25 02:57:15.428523 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:57:15.428535 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:57:15.428542 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:57:15.428548 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:57:15.428555 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:57:15.428562 | orchestrator | 2026-03-25 02:57:15.428569 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-03-25 02:57:15.428577 | orchestrator | Wednesday 25 March 2026 02:57:04 +0000 (0:00:00.721) 0:02:43.736 ******* 2026-03-25 02:57:15.428641 | orchestrator | skipping: [testbed-node-3] 2026-03-25 02:57:15.428647 | orchestrator | skipping: 
[testbed-node-4] 2026-03-25 02:57:15.428652 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:57:15.428656 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:57:15.428660 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:57:15.428663 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:57:15.428667 | orchestrator | 2026-03-25 02:57:15.428671 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-03-25 02:57:15.428675 | orchestrator | Wednesday 25 March 2026 02:57:05 +0000 (0:00:01.030) 0:02:44.767 ******* 2026-03-25 02:57:15.428679 | orchestrator | skipping: [testbed-node-3] 2026-03-25 02:57:15.428683 | orchestrator | skipping: [testbed-node-4] 2026-03-25 02:57:15.428687 | orchestrator | skipping: [testbed-node-5] 2026-03-25 02:57:15.428690 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:57:15.428694 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:57:15.428697 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:57:15.428701 | orchestrator | 2026-03-25 02:57:15.428705 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-03-25 02:57:15.428709 | orchestrator | Wednesday 25 March 2026 02:57:06 +0000 (0:00:00.695) 0:02:45.463 ******* 2026-03-25 02:57:15.428713 | orchestrator | ok: [testbed-node-3] 2026-03-25 02:57:15.428717 | orchestrator | ok: [testbed-node-4] 2026-03-25 02:57:15.428721 | orchestrator | ok: [testbed-node-5] 2026-03-25 02:57:15.428725 | orchestrator | ok: [testbed-node-0] 2026-03-25 02:57:15.428729 | orchestrator | ok: [testbed-node-1] 2026-03-25 02:57:15.428732 | orchestrator | ok: [testbed-node-2] 2026-03-25 02:57:15.428736 | orchestrator | 2026-03-25 02:57:15.428740 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-03-25 02:57:15.428744 | orchestrator | Wednesday 25 March 2026 02:57:07 +0000 (0:00:01.497) 0:02:46.961 ******* 2026-03-25 
02:57:15.428748 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-25 02:57:15.428753 | orchestrator | 2026-03-25 02:57:15.428757 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-03-25 02:57:15.428761 | orchestrator | Wednesday 25 March 2026 02:57:08 +0000 (0:00:01.472) 0:02:48.433 ******* 2026-03-25 02:57:15.428765 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph) 2026-03-25 02:57:15.428769 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/) 2026-03-25 02:57:15.428773 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph) 2026-03-25 02:57:15.428777 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph) 2026-03-25 02:57:15.428781 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon) 2026-03-25 02:57:15.428784 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph) 2026-03-25 02:57:15.428788 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/) 2026-03-25 02:57:15.428802 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph) 2026-03-25 02:57:15.428806 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/) 2026-03-25 02:57:15.428809 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd) 2026-03-25 02:57:15.428813 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph) 2026-03-25 02:57:15.428817 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/) 2026-03-25 02:57:15.428820 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon) 2026-03-25 02:57:15.428824 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon) 2026-03-25 02:57:15.428828 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/) 2026-03-25 02:57:15.428832 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds) 
2026-03-25 02:57:15.428836 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/) 2026-03-25 02:57:15.428839 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon) 2026-03-25 02:57:15.428843 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd) 2026-03-25 02:57:15.428852 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd) 2026-03-25 02:57:15.428856 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon) 2026-03-25 02:57:15.428860 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp) 2026-03-25 02:57:15.428863 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon) 2026-03-25 02:57:15.428867 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd) 2026-03-25 02:57:15.428871 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds) 2026-03-25 02:57:15.428875 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds) 2026-03-25 02:57:15.428879 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash) 2026-03-25 02:57:15.428882 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd) 2026-03-25 02:57:15.428886 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd) 2026-03-25 02:57:15.428890 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2026-03-25 02:57:15.428893 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds) 2026-03-25 02:57:15.428897 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp) 2026-03-25 02:57:15.428901 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw) 2026-03-25 02:57:15.428905 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds) 2026-03-25 02:57:15.428908 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds) 2026-03-25 02:57:15.428924 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp) 2026-03-25 02:57:15.428928 | 
orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash) 2026-03-25 02:57:15.428932 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash) 2026-03-25 02:57:15.428936 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw) 2026-03-25 02:57:15.428939 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp) 2026-03-25 02:57:15.428943 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp) 2026-03-25 02:57:15.428954 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 2026-03-25 02:57:15.428958 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash) 2026-03-25 02:57:15.428962 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw) 2026-03-25 02:57:15.428965 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr) 2026-03-25 02:57:15.428970 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash) 2026-03-25 02:57:15.428974 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash) 2026-03-25 02:57:15.428978 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw) 2026-03-25 02:57:15.428982 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2026-03-25 02:57:15.428987 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw) 2026-03-25 02:57:15.428991 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds) 2026-03-25 02:57:15.428995 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw) 2026-03-25 02:57:15.428999 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw) 2026-03-25 02:57:15.429003 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw) 2026-03-25 02:57:15.429008 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2026-03-25 02:57:15.429012 | orchestrator | changed: 
[testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr) 2026-03-25 02:57:15.429016 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd) 2026-03-25 02:57:15.429020 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw) 2026-03-25 02:57:15.429024 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw) 2026-03-25 02:57:15.429029 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2026-03-25 02:57:15.429033 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr) 2026-03-25 02:57:15.429042 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds) 2026-03-25 02:57:15.429046 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd) 2026-03-25 02:57:15.429050 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr) 2026-03-25 02:57:15.429055 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr) 2026-03-25 02:57:15.429059 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2026-03-25 02:57:15.429063 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds) 2026-03-25 02:57:15.429070 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd) 2026-03-25 02:57:15.429075 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-03-25 02:57:15.429079 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds) 2026-03-25 02:57:15.429083 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds) 2026-03-25 02:57:15.429088 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2026-03-25 02:57:15.429092 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd) 2026-03-25 02:57:15.429096 | orchestrator | changed: [testbed-node-5] => 
(item=/var/lib/ceph/bootstrap-rbd) 2026-03-25 02:57:15.429101 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph) 2026-03-25 02:57:15.429105 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd) 2026-03-25 02:57:15.429109 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-03-25 02:57:15.429114 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd) 2026-03-25 02:57:15.429118 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd) 2026-03-25 02:57:15.429121 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-03-25 02:57:15.429126 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph) 2026-03-25 02:57:15.429132 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd) 2026-03-25 02:57:15.429138 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph) 2026-03-25 02:57:15.429144 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd) 2026-03-25 02:57:15.429154 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-03-25 02:57:15.429160 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph) 2026-03-25 02:57:15.429168 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-03-25 02:57:15.429173 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph) 2026-03-25 02:57:15.429179 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-03-25 02:57:15.429185 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph) 2026-03-25 02:57:15.429191 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph) 2026-03-25 02:57:15.429201 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph) 2026-03-25 02:57:31.685541 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph) 
2026-03-25 02:57:31.685674 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph) 2026-03-25 02:57:31.685683 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph) 2026-03-25 02:57:31.685689 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph) 2026-03-25 02:57:31.685694 | orchestrator | 2026-03-25 02:57:31.685699 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-03-25 02:57:31.685706 | orchestrator | Wednesday 25 March 2026 02:57:15 +0000 (0:00:06.412) 0:02:54.846 ******* 2026-03-25 02:57:31.685711 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:57:31.685716 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:57:31.685721 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:57:31.685727 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-25 02:57:31.685752 | orchestrator | 2026-03-25 02:57:31.685757 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] ***************** 2026-03-25 02:57:31.685762 | orchestrator | Wednesday 25 March 2026 02:57:16 +0000 (0:00:01.268) 0:02:56.114 ******* 2026-03-25 02:57:31.685767 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-03-25 02:57:31.685772 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-03-25 02:57:31.685777 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-03-25 02:57:31.685782 | orchestrator | 2026-03-25 02:57:31.685786 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2026-03-25 02:57:31.685791 | orchestrator | Wednesday 25 March 2026 
02:57:17 +0000 (0:00:00.739) 0:02:56.854 *******
2026-03-25 02:57:31.685795 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-03-25 02:57:31.685800 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-03-25 02:57:31.685805 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-03-25 02:57:31.685809 | orchestrator |
2026-03-25 02:57:31.685814 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-03-25 02:57:31.685818 | orchestrator | Wednesday 25 March 2026 02:57:18 +0000 (0:00:01.230) 0:02:58.084 *******
2026-03-25 02:57:31.685823 | orchestrator | ok: [testbed-node-3]
2026-03-25 02:57:31.685828 | orchestrator | ok: [testbed-node-4]
2026-03-25 02:57:31.685832 | orchestrator | ok: [testbed-node-5]
2026-03-25 02:57:31.685837 | orchestrator | skipping: [testbed-node-0]
2026-03-25 02:57:31.685841 | orchestrator | skipping: [testbed-node-1]
2026-03-25 02:57:31.685846 | orchestrator | skipping: [testbed-node-2]
2026-03-25 02:57:31.685850 | orchestrator |
2026-03-25 02:57:31.685855 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-03-25 02:57:31.685870 | orchestrator | Wednesday 25 March 2026 02:57:19 +0000 (0:00:01.039) 0:02:59.124 *******
2026-03-25 02:57:31.685875 | orchestrator | ok: [testbed-node-3]
2026-03-25 02:57:31.685880 | orchestrator | ok: [testbed-node-4]
2026-03-25 02:57:31.685884 | orchestrator | ok: [testbed-node-5]
2026-03-25 02:57:31.685889 | orchestrator | skipping: [testbed-node-0]
2026-03-25 02:57:31.685893 | orchestrator | skipping: [testbed-node-1]
2026-03-25 02:57:31.685898 | orchestrator | skipping: [testbed-node-2]
2026-03-25 02:57:31.685903 | orchestrator |
2026-03-25 02:57:31.685907 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-03-25 02:57:31.685912 | orchestrator | Wednesday 25 March 2026 02:57:20 +0000 (0:00:00.764) 0:02:59.888 *******
2026-03-25 02:57:31.685916 | orchestrator | skipping: [testbed-node-3]
2026-03-25 02:57:31.685921 | orchestrator | skipping: [testbed-node-4]
2026-03-25 02:57:31.685925 | orchestrator | skipping: [testbed-node-5]
2026-03-25 02:57:31.685930 | orchestrator | skipping: [testbed-node-0]
2026-03-25 02:57:31.685935 | orchestrator | skipping: [testbed-node-1]
2026-03-25 02:57:31.685939 | orchestrator | skipping: [testbed-node-2]
2026-03-25 02:57:31.685944 | orchestrator |
2026-03-25 02:57:31.685949 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-03-25 02:57:31.685953 | orchestrator | Wednesday 25 March 2026 02:57:21 +0000 (0:00:00.977) 0:03:00.865 *******
2026-03-25 02:57:31.685958 | orchestrator | skipping: [testbed-node-3]
2026-03-25 02:57:31.685962 | orchestrator | skipping: [testbed-node-4]
2026-03-25 02:57:31.685967 | orchestrator | skipping: [testbed-node-5]
2026-03-25 02:57:31.685971 | orchestrator | skipping: [testbed-node-0]
2026-03-25 02:57:31.685976 | orchestrator | skipping: [testbed-node-1]
2026-03-25 02:57:31.685980 | orchestrator | skipping: [testbed-node-2]
2026-03-25 02:57:31.685990 | orchestrator |
2026-03-25 02:57:31.685995 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-03-25 02:57:31.685999 | orchestrator | Wednesday 25 March 2026 02:57:22 +0000 (0:00:00.735) 0:03:01.600 *******
2026-03-25 02:57:31.686004 | orchestrator | skipping: [testbed-node-3]
2026-03-25 02:57:31.686009 | orchestrator | skipping: [testbed-node-4]
2026-03-25 02:57:31.686052 | orchestrator | skipping: [testbed-node-5]
2026-03-25 02:57:31.686057 | orchestrator | skipping: [testbed-node-0]
2026-03-25 02:57:31.686062 | orchestrator | skipping: [testbed-node-1]
2026-03-25 02:57:31.686066 | orchestrator | skipping: [testbed-node-2]
2026-03-25 02:57:31.686071 | orchestrator |
2026-03-25 02:57:31.686075 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-03-25 02:57:31.686080 | orchestrator | Wednesday 25 March 2026 02:57:23 +0000 (0:00:00.932) 0:03:02.533 *******
2026-03-25 02:57:31.686085 | orchestrator | skipping: [testbed-node-3]
2026-03-25 02:57:31.686090 | orchestrator | skipping: [testbed-node-4]
2026-03-25 02:57:31.686094 | orchestrator | skipping: [testbed-node-5]
2026-03-25 02:57:31.686099 | orchestrator | skipping: [testbed-node-0]
2026-03-25 02:57:31.686115 | orchestrator | skipping: [testbed-node-1]
2026-03-25 02:57:31.686121 | orchestrator | skipping: [testbed-node-2]
2026-03-25 02:57:31.686126 | orchestrator |
2026-03-25 02:57:31.686131 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-03-25 02:57:31.686136 | orchestrator | Wednesday 25 March 2026 02:57:23 +0000 (0:00:00.685) 0:03:03.219 *******
2026-03-25 02:57:31.686142 | orchestrator | skipping: [testbed-node-3]
2026-03-25 02:57:31.686150 | orchestrator | skipping: [testbed-node-4]
2026-03-25 02:57:31.686157 | orchestrator | skipping: [testbed-node-5]
2026-03-25 02:57:31.686164 | orchestrator | skipping: [testbed-node-0]
2026-03-25 02:57:31.686173 | orchestrator | skipping: [testbed-node-1]
2026-03-25 02:57:31.686180 | orchestrator | skipping: [testbed-node-2]
2026-03-25 02:57:31.686188 | orchestrator |
2026-03-25 02:57:31.686195 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-03-25 02:57:31.686281 | orchestrator | Wednesday 25 March 2026 02:57:24 +0000 (0:00:00.919) 0:03:04.138 *******
2026-03-25 02:57:31.686289 | orchestrator | skipping: [testbed-node-3]
2026-03-25 02:57:31.686297 | orchestrator | skipping: [testbed-node-4]
2026-03-25 02:57:31.686304 | orchestrator | skipping: [testbed-node-5]
2026-03-25 02:57:31.686312 | orchestrator | skipping: [testbed-node-0]
2026-03-25 02:57:31.686320 | orchestrator | skipping: [testbed-node-1]
2026-03-25 02:57:31.686327 | orchestrator | skipping: [testbed-node-2]
2026-03-25 02:57:31.686335 | orchestrator |
2026-03-25 02:57:31.686342 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-03-25 02:57:31.686350 | orchestrator | Wednesday 25 March 2026 02:57:25 +0000 (0:00:00.579) 0:03:04.718 *******
2026-03-25 02:57:31.686358 | orchestrator | skipping: [testbed-node-0]
2026-03-25 02:57:31.686366 | orchestrator | skipping: [testbed-node-1]
2026-03-25 02:57:31.686372 | orchestrator | skipping: [testbed-node-2]
2026-03-25 02:57:31.686376 | orchestrator | ok: [testbed-node-3]
2026-03-25 02:57:31.686382 | orchestrator | ok: [testbed-node-4]
2026-03-25 02:57:31.686387 | orchestrator | ok: [testbed-node-5]
2026-03-25 02:57:31.686392 | orchestrator |
2026-03-25 02:57:31.686397 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-03-25 02:57:31.686402 | orchestrator | Wednesday 25 March 2026 02:57:28 +0000 (0:00:02.751) 0:03:07.470 *******
2026-03-25 02:57:31.686407 | orchestrator | ok: [testbed-node-3]
2026-03-25 02:57:31.686412 | orchestrator | ok: [testbed-node-4]
2026-03-25 02:57:31.686417 | orchestrator | ok: [testbed-node-5]
2026-03-25 02:57:31.686423 | orchestrator | skipping: [testbed-node-0]
2026-03-25 02:57:31.686428 | orchestrator | skipping: [testbed-node-1]
2026-03-25 02:57:31.686434 | orchestrator | skipping: [testbed-node-2]
2026-03-25 02:57:31.686439 | orchestrator |
2026-03-25 02:57:31.686444 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-03-25 02:57:31.686457 | orchestrator | Wednesday 25 March 2026 02:57:28 +0000 (0:00:00.703) 0:03:08.174 *******
2026-03-25 02:57:31.686462 | orchestrator | ok: [testbed-node-3]
2026-03-25 02:57:31.686467 | orchestrator | ok: [testbed-node-4]
2026-03-25 02:57:31.686473 | orchestrator | ok: [testbed-node-5]
2026-03-25 02:57:31.686478 | orchestrator | skipping: [testbed-node-0]
2026-03-25 02:57:31.686483 | orchestrator | skipping: [testbed-node-1]
2026-03-25 02:57:31.686488 | orchestrator | skipping: [testbed-node-2]
2026-03-25 02:57:31.686493 | orchestrator |
2026-03-25 02:57:31.686498 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-03-25 02:57:31.686502 | orchestrator | Wednesday 25 March 2026 02:57:29 +0000 (0:00:01.020) 0:03:09.195 *******
2026-03-25 02:57:31.686507 | orchestrator | skipping: [testbed-node-3]
2026-03-25 02:57:31.686511 | orchestrator | skipping: [testbed-node-4]
2026-03-25 02:57:31.686521 | orchestrator | skipping: [testbed-node-5]
2026-03-25 02:57:31.686526 | orchestrator | skipping: [testbed-node-0]
2026-03-25 02:57:31.686530 | orchestrator | skipping: [testbed-node-1]
2026-03-25 02:57:31.686535 | orchestrator | skipping: [testbed-node-2]
2026-03-25 02:57:31.686539 | orchestrator |
2026-03-25 02:57:31.686544 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-03-25 02:57:31.686549 | orchestrator | Wednesday 25 March 2026 02:57:30 +0000 (0:00:00.699) 0:03:09.894 *******
2026-03-25 02:57:31.686553 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-03-25 02:57:31.686558 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-03-25 02:57:31.686563 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-03-25 02:57:31.686567 | orchestrator | skipping: [testbed-node-0]
2026-03-25 02:57:31.686572 | orchestrator | skipping: [testbed-node-1]
2026-03-25 02:57:31.686576 | orchestrator | skipping: [testbed-node-2]
2026-03-25 02:57:31.686581 | orchestrator |
2026-03-25 02:57:31.686585 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-03-25 02:57:31.686590 | orchestrator | Wednesday 25 March 2026 02:57:31 +0000 (0:00:00.997) 0:03:10.892 *******
2026-03-25 02:57:31.686596 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])
2026-03-25 02:57:31.686604 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])
2026-03-25 02:57:31.686610 | orchestrator | skipping: [testbed-node-3]
2026-03-25 02:57:31.686635 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])
2026-03-25 02:57:51.923628 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])
2026-03-25 02:57:51.923757 | orchestrator | skipping: [testbed-node-4]
2026-03-25 02:57:51.923771 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])
2026-03-25 02:57:51.923803 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])
2026-03-25 02:57:51.923812 | orchestrator | skipping: [testbed-node-5]
2026-03-25 02:57:51.923819 | orchestrator | skipping: [testbed-node-0]
2026-03-25 02:57:51.923824 | orchestrator | skipping: [testbed-node-1]
2026-03-25 02:57:51.923828 | orchestrator | skipping: [testbed-node-2]
2026-03-25 02:57:51.923831 | orchestrator |
2026-03-25 02:57:51.923836 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-03-25 02:57:51.923842 | orchestrator | Wednesday 25 March 2026 02:57:32 +0000 (0:00:00.769) 0:03:11.661 *******
2026-03-25 02:57:51.923846 | orchestrator | skipping: [testbed-node-3]
2026-03-25 02:57:51.923850 | orchestrator | skipping: [testbed-node-4]
2026-03-25 02:57:51.923853 | orchestrator | skipping: [testbed-node-5]
2026-03-25 02:57:51.923857 | orchestrator | skipping: [testbed-node-0]
2026-03-25 02:57:51.923861 | orchestrator | skipping: [testbed-node-1]
2026-03-25 02:57:51.923865 | orchestrator | skipping: [testbed-node-2]
2026-03-25 02:57:51.923868 | orchestrator |
2026-03-25 02:57:51.923873 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-03-25 02:57:51.923876 | orchestrator | Wednesday 25 March 2026 02:57:33 +0000 (0:00:00.989) 0:03:12.650 *******
2026-03-25 02:57:51.923880 | orchestrator | skipping: [testbed-node-3]
2026-03-25 02:57:51.923884 | orchestrator | skipping: [testbed-node-4]
2026-03-25 02:57:51.923887 | orchestrator | skipping: [testbed-node-5]
2026-03-25 02:57:51.923891 | orchestrator | skipping: [testbed-node-0]
2026-03-25 02:57:51.923895 | orchestrator | skipping: [testbed-node-1]
2026-03-25 02:57:51.923898 | orchestrator | skipping: [testbed-node-2]
2026-03-25 02:57:51.923902 | orchestrator |
2026-03-25 02:57:51.923906 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-03-25 02:57:51.923923 | orchestrator | Wednesday 25 March 2026 02:57:34 +0000 (0:00:00.851) 0:03:13.502 *******
2026-03-25 02:57:51.923927 | orchestrator | skipping: [testbed-node-3]
2026-03-25 02:57:51.923931 | orchestrator | skipping: [testbed-node-4]
2026-03-25 02:57:51.923934 | orchestrator | skipping: [testbed-node-5]
2026-03-25 02:57:51.923938 | orchestrator | skipping: [testbed-node-0]
2026-03-25 02:57:51.923941 | orchestrator | skipping: [testbed-node-1]
2026-03-25 02:57:51.923945 | orchestrator | skipping: [testbed-node-2]
2026-03-25 02:57:51.923949 | orchestrator |
2026-03-25 02:57:51.923953 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-03-25 02:57:51.923957 | orchestrator | Wednesday 25 March 2026 02:57:35 +0000 (0:00:01.102) 0:03:14.604 *******
2026-03-25 02:57:51.923961 | orchestrator | skipping: [testbed-node-3]
2026-03-25 02:57:51.923965 | orchestrator | skipping: [testbed-node-4]
2026-03-25 02:57:51.923968 | orchestrator | skipping: [testbed-node-5]
2026-03-25 02:57:51.923972 | orchestrator | skipping: [testbed-node-0]
2026-03-25 02:57:51.923976 | orchestrator | skipping: [testbed-node-1]
2026-03-25 02:57:51.923979 | orchestrator | skipping: [testbed-node-2]
2026-03-25 02:57:51.923983 | orchestrator |
2026-03-25 02:57:51.923987 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-03-25 02:57:51.923990 | orchestrator | Wednesday 25 March 2026 02:57:36 +0000 (0:00:01.007) 0:03:15.612 *******
2026-03-25 02:57:51.923994 | orchestrator | skipping: [testbed-node-3]
2026-03-25 02:57:51.923998 | orchestrator | skipping: [testbed-node-4]
2026-03-25 02:57:51.924002 | orchestrator | skipping: [testbed-node-5]
2026-03-25 02:57:51.924005 | orchestrator | skipping: [testbed-node-0]
2026-03-25 02:57:51.924009 | orchestrator | skipping: [testbed-node-1]
2026-03-25 02:57:51.924012 | orchestrator | skipping: [testbed-node-2]
2026-03-25 02:57:51.924074 | orchestrator |
2026-03-25 02:57:51.924078 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-03-25 02:57:51.924082 | orchestrator | Wednesday 25 March 2026 02:57:36 +0000 (0:00:00.744) 0:03:16.356 *******
2026-03-25 02:57:51.924086 | orchestrator | ok: [testbed-node-3]
2026-03-25 02:57:51.924091 | orchestrator | ok: [testbed-node-4]
2026-03-25 02:57:51.924094 | orchestrator | ok: [testbed-node-5]
2026-03-25 02:57:51.924098 | orchestrator | skipping: [testbed-node-0]
2026-03-25 02:57:51.924102 | orchestrator | skipping: [testbed-node-1]
2026-03-25 02:57:51.924105 | orchestrator | skipping: [testbed-node-2]
2026-03-25 02:57:51.924109 | orchestrator |
2026-03-25 02:57:51.924113 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-03-25 02:57:51.924117 | orchestrator | Wednesday 25 March 2026 02:57:37 +0000 (0:00:01.048) 0:03:17.405 *******
2026-03-25 02:57:51.924121 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-25 02:57:51.924125 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-25 02:57:51.924129 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-25 02:57:51.924133 | orchestrator | skipping: [testbed-node-3]
2026-03-25 02:57:51.924137 | orchestrator |
2026-03-25 02:57:51.924141 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-03-25 02:57:51.924144 | orchestrator | Wednesday 25 March 2026 02:57:38 +0000 (0:00:00.563) 0:03:17.969 *******
2026-03-25 02:57:51.924162 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-25 02:57:51.924166 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-25 02:57:51.924170 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-25 02:57:51.924175 | orchestrator | skipping: [testbed-node-3]
2026-03-25 02:57:51.924179 | orchestrator |
2026-03-25 02:57:51.924183 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-03-25 02:57:51.924189 | orchestrator | Wednesday 25 March 2026 02:57:39 +0000 (0:00:00.494) 0:03:18.463 *******
2026-03-25 02:57:51.924195 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-25 02:57:51.924203 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-25 02:57:51.924211 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-25 02:57:51.924217 | orchestrator | skipping: [testbed-node-3]
2026-03-25 02:57:51.924224 | orchestrator |
2026-03-25 02:57:51.924230 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-03-25 02:57:51.924236 | orchestrator | Wednesday 25 March 2026 02:57:39 +0000 (0:00:00.491) 0:03:18.954 *******
2026-03-25 02:57:51.924242 | orchestrator | ok: [testbed-node-3]
2026-03-25 02:57:51.924248 | orchestrator | ok: [testbed-node-4]
2026-03-25 02:57:51.924253 | orchestrator | ok: [testbed-node-5]
2026-03-25 02:57:51.924259 | orchestrator | skipping: [testbed-node-0]
2026-03-25 02:57:51.924265 | orchestrator | skipping: [testbed-node-1]
2026-03-25 02:57:51.924271 | orchestrator | skipping: [testbed-node-2]
2026-03-25 02:57:51.924277 | orchestrator |
2026-03-25 02:57:51.924283 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-03-25 02:57:51.924289 | orchestrator | Wednesday 25 March 2026 02:57:40 +0000 (0:00:00.682) 0:03:19.637 *******
2026-03-25 02:57:51.924296 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-03-25 02:57:51.924303 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-03-25 02:57:51.924309 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-03-25 02:57:51.924316 | orchestrator | skipping: [testbed-node-0] => (item=0)
2026-03-25 02:57:51.924322 | orchestrator | skipping: [testbed-node-0]
2026-03-25 02:57:51.924326 | orchestrator | skipping: [testbed-node-1] => (item=0)
2026-03-25 02:57:51.924331 | orchestrator | skipping: [testbed-node-1]
2026-03-25 02:57:51.924335 | orchestrator | skipping: [testbed-node-2] => (item=0)
2026-03-25 02:57:51.924339 | orchestrator | skipping: [testbed-node-2]
2026-03-25 02:57:51.924344 | orchestrator |
2026-03-25 02:57:51.924348 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-03-25 02:57:51.924357 | orchestrator | Wednesday 25 March 2026 02:57:42 +0000 (0:00:02.078) 0:03:21.716 *******
2026-03-25 02:57:51.924362 | orchestrator | changed: [testbed-node-3]
2026-03-25 02:57:51.924366 | orchestrator | changed: [testbed-node-4]
2026-03-25 02:57:51.924370 | orchestrator | changed: [testbed-node-5]
2026-03-25 02:57:51.924374 | orchestrator | changed: [testbed-node-0]
2026-03-25 02:57:51.924377 | orchestrator | changed: [testbed-node-1]
2026-03-25 02:57:51.924381 | orchestrator | changed: [testbed-node-2]
2026-03-25 02:57:51.924385 | orchestrator |
2026-03-25 02:57:51.924388 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-03-25 02:57:51.924392 | orchestrator | Wednesday 25 March 2026 02:57:45 +0000 (0:00:02.872) 0:03:24.589 *******
2026-03-25 02:57:51.924396 | orchestrator | changed: [testbed-node-3]
2026-03-25 02:57:51.924404 | orchestrator | changed: [testbed-node-4]
2026-03-25 02:57:51.924407 | orchestrator | changed: [testbed-node-5]
2026-03-25 02:57:51.924411 | orchestrator | changed: [testbed-node-0]
2026-03-25 02:57:51.924415 | orchestrator | changed: [testbed-node-1]
2026-03-25 02:57:51.924418 | orchestrator | changed: [testbed-node-2]
2026-03-25 02:57:51.924422 | orchestrator |
2026-03-25 02:57:51.924426 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2026-03-25 02:57:51.924430 | orchestrator | Wednesday 25 March 2026 02:57:46 +0000 (0:00:01.072) 0:03:25.661 *******
2026-03-25 02:57:51.924433 | orchestrator | skipping: [testbed-node-3]
2026-03-25 02:57:51.924437 | orchestrator | skipping: [testbed-node-4]
2026-03-25 02:57:51.924441 | orchestrator | skipping: [testbed-node-5]
2026-03-25 02:57:51.924445 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-25 02:57:51.924449 | orchestrator |
2026-03-25 02:57:51.924453 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
2026-03-25 02:57:51.924456 | orchestrator | Wednesday 25 March 2026 02:57:47 +0000 (0:00:01.264) 0:03:26.926 *******
2026-03-25 02:57:51.924460 | orchestrator | ok: [testbed-node-0]
2026-03-25 02:57:51.924464 | orchestrator | ok: [testbed-node-1]
2026-03-25 02:57:51.924467 | orchestrator | ok: [testbed-node-2]
2026-03-25 02:57:51.924471 | orchestrator |
2026-03-25 02:57:51.924475 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
2026-03-25 02:57:51.924479 | orchestrator | Wednesday 25 March 2026 02:57:47 +0000 (0:00:00.377) 0:03:27.303 *******
2026-03-25 02:57:51.924482 | orchestrator | changed: [testbed-node-0]
2026-03-25 02:57:51.924486 | orchestrator | changed: [testbed-node-1]
2026-03-25 02:57:51.924490 | orchestrator | changed: [testbed-node-2]
2026-03-25 02:57:51.924493 | orchestrator |
2026-03-25 02:57:51.924497 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
2026-03-25 02:57:51.924501 | orchestrator | Wednesday 25 March 2026 02:57:49 +0000 (0:00:01.569) 0:03:28.872 *******
2026-03-25 02:57:51.924504 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-03-25 02:57:51.924508 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-03-25 02:57:51.924512 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-03-25 02:57:51.924515 | orchestrator | skipping: [testbed-node-0]
2026-03-25 02:57:51.924519 | orchestrator |
2026-03-25 02:57:51.924523 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
2026-03-25 02:57:51.924526 | orchestrator | Wednesday 25 March 2026 02:57:50 +0000 (0:00:00.848) 0:03:29.721 *******
2026-03-25 02:57:51.924530 | orchestrator | ok: [testbed-node-0]
2026-03-25 02:57:51.924534 | orchestrator | ok: [testbed-node-1]
2026-03-25 02:57:51.924538 | orchestrator | ok: [testbed-node-2]
2026-03-25 02:57:51.924541 | orchestrator |
2026-03-25 02:57:51.924545 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] **********************************
2026-03-25 02:57:51.924549 | orchestrator | Wednesday 25 March 2026 02:57:50 +0000 (0:00:00.376) 0:03:30.098 *******
2026-03-25 02:57:51.924553 | orchestrator | skipping: [testbed-node-0]
2026-03-25 02:57:51.924560 | orchestrator | skipping: [testbed-node-1]
2026-03-25 02:58:10.299092 | orchestrator | skipping: [testbed-node-2]
2026-03-25 02:58:10.299217 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-25 02:58:10.299231 | orchestrator |
2026-03-25 02:58:10.299265 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
2026-03-25 02:58:10.299275 | orchestrator | Wednesday 25 March 2026 02:57:51 +0000 (0:00:01.268) 0:03:31.367 *******
2026-03-25 02:58:10.299283 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-25 02:58:10.299291 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-25 02:58:10.299300 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-25 02:58:10.299306 | orchestrator | skipping: [testbed-node-3]
2026-03-25 02:58:10.299310 | orchestrator |
2026-03-25 02:58:10.299314 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
2026-03-25 02:58:10.299318 | orchestrator | Wednesday 25 March 2026 02:57:52 +0000 (0:00:00.367) 0:03:31.831 *******
2026-03-25 02:58:10.299322 | orchestrator | skipping: [testbed-node-3]
2026-03-25 02:58:10.299326 | orchestrator | skipping: [testbed-node-4]
2026-03-25 02:58:10.299329 | orchestrator | skipping: [testbed-node-5]
2026-03-25 02:58:10.299333 | orchestrator |
2026-03-25 02:58:10.299337 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
2026-03-25 02:58:10.299341 | orchestrator | Wednesday 25 March 2026 02:57:52 +0000 (0:00:00.274) 0:03:32.198 *******
2026-03-25 02:58:10.299345 | orchestrator | skipping: [testbed-node-3]
2026-03-25 02:58:10.299348 | orchestrator |
2026-03-25 02:58:10.299352 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
2026-03-25 02:58:10.299356 | orchestrator | Wednesday 25 March 2026 02:57:53 +0000 (0:00:00.346) 0:03:32.472 *******
2026-03-25 02:58:10.299360 | orchestrator | skipping: [testbed-node-3]
2026-03-25 02:58:10.299363 | orchestrator | skipping: [testbed-node-4]
2026-03-25 02:58:10.299368 | orchestrator | skipping: [testbed-node-5]
2026-03-25 02:58:10.299372 | orchestrator |
2026-03-25 02:58:10.299376 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] *********************************
2026-03-25 02:58:10.299380 | orchestrator | Wednesday 25 March 2026 02:57:53 +0000 (0:00:00.346) 0:03:32.819 *******
2026-03-25 02:58:10.299384 | orchestrator | skipping: [testbed-node-3]
2026-03-25 02:58:10.299388 | orchestrator |
2026-03-25 02:58:10.299392 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
2026-03-25 02:58:10.299396 | orchestrator | Wednesday 25 March 2026 02:57:54 +0000 (0:00:00.824) 0:03:33.643 *******
2026-03-25 02:58:10.299399 | orchestrator | skipping: [testbed-node-3]
2026-03-25 02:58:10.299404 | orchestrator |
2026-03-25 02:58:10.299408 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
2026-03-25 02:58:10.299412 | orchestrator | Wednesday 25 March 2026 02:57:54 +0000 (0:00:00.262) 0:03:33.906 *******
2026-03-25 02:58:10.299416 | orchestrator | skipping: [testbed-node-3]
2026-03-25 02:58:10.299421 | orchestrator |
2026-03-25 02:58:10.299425 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
2026-03-25 02:58:10.299429 | orchestrator | Wednesday 25 March 2026 02:57:54 +0000 (0:00:00.163) 0:03:34.069 *******
2026-03-25 02:58:10.299459 | orchestrator | skipping: [testbed-node-3]
2026-03-25 02:58:10.299463 | orchestrator |
2026-03-25 02:58:10.299467 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
2026-03-25 02:58:10.299470 | orchestrator | Wednesday 25 March 2026 02:57:54 +0000 (0:00:00.286) 0:03:34.356 *******
2026-03-25 02:58:10.299474 | orchestrator | skipping: [testbed-node-3]
2026-03-25 02:58:10.299478 | orchestrator |
2026-03-25 02:58:10.299482 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
2026-03-25 02:58:10.299486 | orchestrator | Wednesday 25 March 2026 02:57:55 +0000 (0:00:00.255) 0:03:34.611 *******
2026-03-25 02:58:10.299490 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-25 02:58:10.299494 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-25 02:58:10.299497 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-25 02:58:10.299507 | orchestrator | skipping: [testbed-node-3]
2026-03-25 02:58:10.299511 | orchestrator |
2026-03-25 02:58:10.299515 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
2026-03-25 02:58:10.299519 | orchestrator | Wednesday 25 March 2026 02:57:55 +0000 (0:00:00.484) 0:03:35.095 *******
2026-03-25 02:58:10.299522 | orchestrator | skipping: [testbed-node-3]
2026-03-25 02:58:10.299526 | orchestrator | skipping: [testbed-node-4]
2026-03-25 02:58:10.299530 | orchestrator | skipping: [testbed-node-5]
2026-03-25 02:58:10.299533 | orchestrator |
2026-03-25 02:58:10.299537 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
2026-03-25 02:58:10.299541 | orchestrator | Wednesday 25 March 2026 02:57:55 +0000 (0:00:00.331) 0:03:35.426 *******
2026-03-25 02:58:10.299545 | orchestrator | skipping: [testbed-node-3]
2026-03-25 02:58:10.299549 | orchestrator |
2026-03-25 02:58:10.299552 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
2026-03-25 02:58:10.299556 | orchestrator | Wednesday 25 March 2026 02:57:56 +0000 (0:00:00.251) 0:03:35.678 *******
2026-03-25 02:58:10.299560 | orchestrator | skipping: [testbed-node-3]
2026-03-25 02:58:10.299563 | orchestrator |
2026-03-25 02:58:10.299567 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
2026-03-25 02:58:10.299571 | orchestrator | Wednesday 25 March 2026 02:57:56 +0000 (0:00:00.241) 0:03:35.920 *******
2026-03-25 02:58:10.299575 | orchestrator | skipping: [testbed-node-0]
2026-03-25 02:58:10.299579 | orchestrator | skipping: [testbed-node-1]
2026-03-25 02:58:10.299582 | orchestrator | skipping: [testbed-node-2]
2026-03-25 02:58:10.299586 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-25 02:58:10.299590 | orchestrator |
2026-03-25 02:58:10.299594 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ********
2026-03-25 02:58:10.299597 | orchestrator | Wednesday 25 March 2026 02:57:57 +0000 (0:00:01.195) 0:03:37.115 *******
2026-03-25 02:58:10.299601 | orchestrator | ok: [testbed-node-3]
2026-03-25 02:58:10.299606 | orchestrator | ok: [testbed-node-4]
2026-03-25 02:58:10.299610 | orchestrator | ok: [testbed-node-5]
2026-03-25 02:58:10.299614 | orchestrator |
2026-03-25 02:58:10.299630 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] ***********************
2026-03-25 02:58:10.299634 | orchestrator | Wednesday 25 March 2026 02:57:58 +0000 (0:00:00.360) 0:03:37.475 *******
2026-03-25 02:58:10.299639 | orchestrator | changed: [testbed-node-3]
2026-03-25 02:58:10.299643 | orchestrator | changed: [testbed-node-4]
2026-03-25 02:58:10.299666 | orchestrator | changed: [testbed-node-5]
2026-03-25 02:58:10.299673 | orchestrator |
2026-03-25 02:58:10.299680 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ********************
2026-03-25 02:58:10.299684 | orchestrator | Wednesday 25 March 2026 02:57:59 +0000 (0:00:01.536) 0:03:39.012 *******
2026-03-25 02:58:10.299688 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-25 02:58:10.299693 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-25 02:58:10.299697 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-25 02:58:10.299702 | orchestrator | skipping: [testbed-node-3]
2026-03-25 02:58:10.299706 | orchestrator |
2026-03-25 02:58:10.299710 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] *********
2026-03-25 02:58:10.299714 | orchestrator | Wednesday 25 March 2026 02:58:00 +0000 (0:00:00.707) 0:03:39.719 *******
2026-03-25 02:58:10.299719 | orchestrator | ok: [testbed-node-3]
2026-03-25 02:58:10.299734 | orchestrator | ok: [testbed-node-4]
2026-03-25 02:58:10.299750 | orchestrator | ok: [testbed-node-5]
2026-03-25 02:58:10.299756 | orchestrator |
2026-03-25 02:58:10.299762 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] **********************************
2026-03-25 02:58:10.299768 | orchestrator | Wednesday 25 March 2026 02:58:00 +0000 (0:00:00.380) 0:03:40.099 *******
2026-03-25 02:58:10.299775 | orchestrator | skipping: [testbed-node-0]
2026-03-25 02:58:10.299781 | orchestrator | skipping: [testbed-node-1]
2026-03-25 02:58:10.299788 | orchestrator | skipping: [testbed-node-2]
2026-03-25 02:58:10.299802 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-25 02:58:10.299809 | orchestrator |
2026-03-25 02:58:10.299816 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ********
2026-03-25 02:58:10.299820 | orchestrator | Wednesday 25 March 2026 02:58:01 +0000 (0:00:01.187) 0:03:41.287 *******
2026-03-25 02:58:10.299824 | orchestrator | ok: [testbed-node-3]
2026-03-25 02:58:10.299829 | orchestrator | ok: [testbed-node-4]
2026-03-25 02:58:10.299834 | orchestrator | ok: [testbed-node-5]
2026-03-25 02:58:10.299840 | orchestrator |
2026-03-25 02:58:10.299846 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] ***********************
2026-03-25 02:58:10.299853 | orchestrator | Wednesday 25 March 2026 02:58:02 +0000 (0:00:00.401) 0:03:41.689 *******
2026-03-25 02:58:10.299860 | orchestrator | changed: [testbed-node-3]
2026-03-25 02:58:10.299869 | orchestrator | changed: [testbed-node-4]
2026-03-25 02:58:10.299875 | orchestrator | changed: [testbed-node-5]
2026-03-25 02:58:10.299881 | orchestrator |
2026-03-25 02:58:10.299887 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ********************
2026-03-25 02:58:10.299894 | orchestrator | Wednesday 25 March 2026 02:58:03 +0000 (0:00:01.210) 0:03:42.899 *******
2026-03-25 02:58:10.299900 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-25 02:58:10.299906 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-25 02:58:10.299918 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-25 02:58:10.299924 | orchestrator | skipping: [testbed-node-3]
2026-03-25 02:58:10.299930 | orchestrator |
2026-03-25 02:58:10.299937 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] *********
2026-03-25 02:58:10.299943 | orchestrator | Wednesday 25 March 2026 02:58:04 +0000 (0:00:01.099) 0:03:43.999 *******
2026-03-25 02:58:10.299949 | orchestrator | ok: [testbed-node-3]
2026-03-25 02:58:10.299956 | orchestrator | ok: [testbed-node-4]
2026-03-25 02:58:10.299963 | orchestrator | ok: [testbed-node-5]
2026-03-25 02:58:10.299968 | orchestrator |
2026-03-25 02:58:10.299975 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] ****************************
2026-03-25 02:58:10.299981 | orchestrator | Wednesday 25 March 2026 02:58:05 +0000 (0:00:00.666) 0:03:44.666 *******
2026-03-25 02:58:10.299988 | orchestrator | skipping: [testbed-node-3]
2026-03-25 02:58:10.299995 | orchestrator | skipping: [testbed-node-4]
2026-03-25 02:58:10.300001 | orchestrator | skipping: [testbed-node-5]
2026-03-25 02:58:10.300007 | orchestrator | skipping: [testbed-node-0]
2026-03-25 02:58:10.300013 | orchestrator | skipping: [testbed-node-1]
2026-03-25 02:58:10.300017 | orchestrator | skipping: [testbed-node-2]
2026-03-25 02:58:10.300021 | orchestrator |
2026-03-25 02:58:10.300025 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
2026-03-25 02:58:10.300029 | orchestrator | Wednesday 25 March 2026 02:58:05 +0000 (0:00:00.789) 0:03:45.455 *******
2026-03-25 02:58:10.300035 | orchestrator | skipping: [testbed-node-3]
2026-03-25 02:58:10.300041 | orchestrator | skipping: [testbed-node-4]
2026-03-25 02:58:10.300047 | orchestrator | skipping: [testbed-node-5]
2026-03-25 02:58:10.300053 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-25 02:58:10.300059 | orchestrator |
2026-03-25 02:58:10.300066 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
2026-03-25 02:58:10.300072 | orchestrator | Wednesday 25 March 2026 02:58:07 +0000 (0:00:01.326) 0:03:46.782 *******
2026-03-25 02:58:10.300078 | orchestrator | ok: [testbed-node-0]
2026-03-25 02:58:10.300085 | orchestrator | ok: [testbed-node-1]
2026-03-25 02:58:10.300091 | orchestrator | ok: [testbed-node-2]
2026-03-25 02:58:10.300098 | orchestrator |
2026-03-25 02:58:10.300104 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
2026-03-25 02:58:10.300110 | orchestrator | Wednesday 25 March 2026 02:58:07 +0000 (0:00:00.457) 0:03:47.240 *******
2026-03-25 02:58:10.300117 | orchestrator | changed: [testbed-node-0]
2026-03-25 02:58:10.300126 | orchestrator | changed: [testbed-node-1]
2026-03-25 02:58:10.300130 | orchestrator | changed: [testbed-node-2]
2026-03-25 02:58:10.300134 | orchestrator |
2026-03-25 02:58:10.300137 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
2026-03-25 02:58:10.300141 | orchestrator | Wednesday 25 March 2026 02:58:09 +0000 (0:00:01.238) 0:03:48.479 *******
2026-03-25 02:58:10.300145 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-03-25 02:58:10.300149 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-03-25 02:58:10.300159 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-03-25 02:58:28.755408 | orchestrator | skipping: [testbed-node-0]
2026-03-25
02:58:28.755507 | orchestrator | 2026-03-25 02:58:28.755523 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2026-03-25 02:58:28.755535 | orchestrator | Wednesday 25 March 2026 02:58:10 +0000 (0:00:01.261) 0:03:49.740 ******* 2026-03-25 02:58:28.755543 | orchestrator | ok: [testbed-node-0] 2026-03-25 02:58:28.755552 | orchestrator | ok: [testbed-node-1] 2026-03-25 02:58:28.755561 | orchestrator | ok: [testbed-node-2] 2026-03-25 02:58:28.755569 | orchestrator | 2026-03-25 02:58:28.755578 | orchestrator | PLAY [Apply role ceph-mon] ***************************************************** 2026-03-25 02:58:28.755587 | orchestrator | 2026-03-25 02:58:28.755596 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-25 02:58:28.755605 | orchestrator | Wednesday 25 March 2026 02:58:10 +0000 (0:00:00.695) 0:03:50.435 ******* 2026-03-25 02:58:28.755615 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-25 02:58:28.755625 | orchestrator | 2026-03-25 02:58:28.755634 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-25 02:58:28.755644 | orchestrator | Wednesday 25 March 2026 02:58:11 +0000 (0:00:00.871) 0:03:51.307 ******* 2026-03-25 02:58:28.755654 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-25 02:58:28.755725 | orchestrator | 2026-03-25 02:58:28.755736 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-25 02:58:28.755746 | orchestrator | Wednesday 25 March 2026 02:58:12 +0000 (0:00:00.704) 0:03:52.011 ******* 2026-03-25 02:58:28.755756 | orchestrator | ok: [testbed-node-0] 2026-03-25 02:58:28.755765 | orchestrator | ok: [testbed-node-1] 2026-03-25 02:58:28.755774 | 
orchestrator | ok: [testbed-node-2] 2026-03-25 02:58:28.755784 | orchestrator | 2026-03-25 02:58:28.755794 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-25 02:58:28.755804 | orchestrator | Wednesday 25 March 2026 02:58:13 +0000 (0:00:00.760) 0:03:52.771 ******* 2026-03-25 02:58:28.755814 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:58:28.755821 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:58:28.755827 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:58:28.755833 | orchestrator | 2026-03-25 02:58:28.755838 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-25 02:58:28.755844 | orchestrator | Wednesday 25 March 2026 02:58:14 +0000 (0:00:00.699) 0:03:53.471 ******* 2026-03-25 02:58:28.755850 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:58:28.755856 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:58:28.755862 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:58:28.755867 | orchestrator | 2026-03-25 02:58:28.755873 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-25 02:58:28.755879 | orchestrator | Wednesday 25 March 2026 02:58:14 +0000 (0:00:00.388) 0:03:53.860 ******* 2026-03-25 02:58:28.755885 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:58:28.755890 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:58:28.755911 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:58:28.755917 | orchestrator | 2026-03-25 02:58:28.755922 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-25 02:58:28.755928 | orchestrator | Wednesday 25 March 2026 02:58:14 +0000 (0:00:00.401) 0:03:54.261 ******* 2026-03-25 02:58:28.755955 | orchestrator | ok: [testbed-node-0] 2026-03-25 02:58:28.755962 | orchestrator | ok: [testbed-node-1] 2026-03-25 02:58:28.755968 | orchestrator | ok: 
[testbed-node-2] 2026-03-25 02:58:28.755974 | orchestrator | 2026-03-25 02:58:28.755981 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-25 02:58:28.755988 | orchestrator | Wednesday 25 March 2026 02:58:15 +0000 (0:00:00.748) 0:03:55.010 ******* 2026-03-25 02:58:28.755994 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:58:28.756001 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:58:28.756007 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:58:28.756014 | orchestrator | 2026-03-25 02:58:28.756020 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-25 02:58:28.756027 | orchestrator | Wednesday 25 March 2026 02:58:16 +0000 (0:00:00.636) 0:03:55.646 ******* 2026-03-25 02:58:28.756034 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:58:28.756040 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:58:28.756046 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:58:28.756053 | orchestrator | 2026-03-25 02:58:28.756059 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-25 02:58:28.756065 | orchestrator | Wednesday 25 March 2026 02:58:16 +0000 (0:00:00.384) 0:03:56.031 ******* 2026-03-25 02:58:28.756072 | orchestrator | ok: [testbed-node-0] 2026-03-25 02:58:28.756078 | orchestrator | ok: [testbed-node-1] 2026-03-25 02:58:28.756085 | orchestrator | ok: [testbed-node-2] 2026-03-25 02:58:28.756091 | orchestrator | 2026-03-25 02:58:28.756097 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-25 02:58:28.756104 | orchestrator | Wednesday 25 March 2026 02:58:17 +0000 (0:00:00.753) 0:03:56.784 ******* 2026-03-25 02:58:28.756111 | orchestrator | ok: [testbed-node-0] 2026-03-25 02:58:28.756117 | orchestrator | ok: [testbed-node-1] 2026-03-25 02:58:28.756124 | orchestrator | ok: [testbed-node-2] 2026-03-25 
02:58:28.756130 | orchestrator | 2026-03-25 02:58:28.756136 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-25 02:58:28.756142 | orchestrator | Wednesday 25 March 2026 02:58:18 +0000 (0:00:00.763) 0:03:57.547 ******* 2026-03-25 02:58:28.756149 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:58:28.756156 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:58:28.756162 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:58:28.756169 | orchestrator | 2026-03-25 02:58:28.756175 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-25 02:58:28.756181 | orchestrator | Wednesday 25 March 2026 02:58:18 +0000 (0:00:00.643) 0:03:58.190 ******* 2026-03-25 02:58:28.756188 | orchestrator | ok: [testbed-node-0] 2026-03-25 02:58:28.756195 | orchestrator | ok: [testbed-node-1] 2026-03-25 02:58:28.756202 | orchestrator | ok: [testbed-node-2] 2026-03-25 02:58:28.756208 | orchestrator | 2026-03-25 02:58:28.756215 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-25 02:58:28.756221 | orchestrator | Wednesday 25 March 2026 02:58:19 +0000 (0:00:00.393) 0:03:58.584 ******* 2026-03-25 02:58:28.756243 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:58:28.756250 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:58:28.756257 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:58:28.756263 | orchestrator | 2026-03-25 02:58:28.756270 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-25 02:58:28.756276 | orchestrator | Wednesday 25 March 2026 02:58:19 +0000 (0:00:00.352) 0:03:58.936 ******* 2026-03-25 02:58:28.756283 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:58:28.756289 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:58:28.756296 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:58:28.756303 | 
orchestrator | 2026-03-25 02:58:28.756312 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-25 02:58:28.756322 | orchestrator | Wednesday 25 March 2026 02:58:19 +0000 (0:00:00.359) 0:03:59.295 ******* 2026-03-25 02:58:28.756331 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:58:28.756348 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:58:28.756359 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:58:28.756369 | orchestrator | 2026-03-25 02:58:28.756379 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-25 02:58:28.756388 | orchestrator | Wednesday 25 March 2026 02:58:20 +0000 (0:00:00.657) 0:03:59.953 ******* 2026-03-25 02:58:28.756398 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:58:28.756406 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:58:28.756412 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:58:28.756418 | orchestrator | 2026-03-25 02:58:28.756423 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-25 02:58:28.756429 | orchestrator | Wednesday 25 March 2026 02:58:20 +0000 (0:00:00.353) 0:04:00.306 ******* 2026-03-25 02:58:28.756434 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:58:28.756440 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:58:28.756446 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:58:28.756453 | orchestrator | 2026-03-25 02:58:28.756461 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-25 02:58:28.756473 | orchestrator | Wednesday 25 March 2026 02:58:21 +0000 (0:00:00.349) 0:04:00.655 ******* 2026-03-25 02:58:28.756497 | orchestrator | ok: [testbed-node-0] 2026-03-25 02:58:28.756506 | orchestrator | ok: [testbed-node-1] 2026-03-25 02:58:28.756514 | orchestrator | ok: [testbed-node-2] 2026-03-25 02:58:28.756524 | orchestrator | 
2026-03-25 02:58:28.756533 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-25 02:58:28.756542 | orchestrator | Wednesday 25 March 2026 02:58:21 +0000 (0:00:00.380) 0:04:01.036 ******* 2026-03-25 02:58:28.756551 | orchestrator | ok: [testbed-node-0] 2026-03-25 02:58:28.756560 | orchestrator | ok: [testbed-node-1] 2026-03-25 02:58:28.756568 | orchestrator | ok: [testbed-node-2] 2026-03-25 02:58:28.756577 | orchestrator | 2026-03-25 02:58:28.756585 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-25 02:58:28.756594 | orchestrator | Wednesday 25 March 2026 02:58:22 +0000 (0:00:00.690) 0:04:01.726 ******* 2026-03-25 02:58:28.756603 | orchestrator | ok: [testbed-node-0] 2026-03-25 02:58:28.756611 | orchestrator | ok: [testbed-node-1] 2026-03-25 02:58:28.756620 | orchestrator | ok: [testbed-node-2] 2026-03-25 02:58:28.756628 | orchestrator | 2026-03-25 02:58:28.756644 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] ********************************** 2026-03-25 02:58:28.756654 | orchestrator | Wednesday 25 March 2026 02:58:22 +0000 (0:00:00.660) 0:04:02.386 ******* 2026-03-25 02:58:28.756694 | orchestrator | ok: [testbed-node-0] 2026-03-25 02:58:28.756705 | orchestrator | ok: [testbed-node-1] 2026-03-25 02:58:28.756715 | orchestrator | ok: [testbed-node-2] 2026-03-25 02:58:28.756724 | orchestrator | 2026-03-25 02:58:28.756733 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] ********************************** 2026-03-25 02:58:28.756743 | orchestrator | Wednesday 25 March 2026 02:58:23 +0000 (0:00:00.374) 0:04:02.761 ******* 2026-03-25 02:58:28.756751 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-25 02:58:28.756757 | orchestrator | 2026-03-25 02:58:28.756763 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] 
************** 2026-03-25 02:58:28.756769 | orchestrator | Wednesday 25 March 2026 02:58:24 +0000 (0:00:01.010) 0:04:03.771 ******* 2026-03-25 02:58:28.756774 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:58:28.756780 | orchestrator | 2026-03-25 02:58:28.756785 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] ***************************** 2026-03-25 02:58:28.756791 | orchestrator | Wednesday 25 March 2026 02:58:24 +0000 (0:00:00.175) 0:04:03.946 ******* 2026-03-25 02:58:28.756797 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-03-25 02:58:28.756803 | orchestrator | 2026-03-25 02:58:28.756808 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] **************************** 2026-03-25 02:58:28.756814 | orchestrator | Wednesday 25 March 2026 02:58:25 +0000 (0:00:01.156) 0:04:05.102 ******* 2026-03-25 02:58:28.756826 | orchestrator | ok: [testbed-node-0] 2026-03-25 02:58:28.756832 | orchestrator | ok: [testbed-node-1] 2026-03-25 02:58:28.756837 | orchestrator | ok: [testbed-node-2] 2026-03-25 02:58:28.756843 | orchestrator | 2026-03-25 02:58:28.756849 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] ******************* 2026-03-25 02:58:28.756854 | orchestrator | Wednesday 25 March 2026 02:58:26 +0000 (0:00:00.399) 0:04:05.502 ******* 2026-03-25 02:58:28.756860 | orchestrator | ok: [testbed-node-0] 2026-03-25 02:58:28.756865 | orchestrator | ok: [testbed-node-1] 2026-03-25 02:58:28.756874 | orchestrator | ok: [testbed-node-2] 2026-03-25 02:58:28.756886 | orchestrator | 2026-03-25 02:58:28.756900 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] ******************************* 2026-03-25 02:58:28.756909 | orchestrator | Wednesday 25 March 2026 02:58:26 +0000 (0:00:00.684) 0:04:06.187 ******* 2026-03-25 02:58:28.756919 | orchestrator | changed: [testbed-node-0] 2026-03-25 02:58:28.756928 | orchestrator | changed: [testbed-node-1] 2026-03-25 02:58:28.756937 | 
orchestrator | changed: [testbed-node-2] 2026-03-25 02:58:28.756947 | orchestrator | 2026-03-25 02:58:28.756955 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] *********** 2026-03-25 02:58:28.756964 | orchestrator | Wednesday 25 March 2026 02:58:27 +0000 (0:00:01.217) 0:04:07.404 ******* 2026-03-25 02:58:28.756972 | orchestrator | changed: [testbed-node-0] 2026-03-25 02:58:28.756981 | orchestrator | changed: [testbed-node-1] 2026-03-25 02:58:28.756989 | orchestrator | changed: [testbed-node-2] 2026-03-25 02:58:28.756997 | orchestrator | 2026-03-25 02:58:28.757016 | orchestrator | TASK [ceph-mon : Create monitor directory] ************************************* 2026-03-25 02:59:36.702466 | orchestrator | Wednesday 25 March 2026 02:58:28 +0000 (0:00:00.788) 0:04:08.192 ******* 2026-03-25 02:59:36.702571 | orchestrator | changed: [testbed-node-0] 2026-03-25 02:59:36.702586 | orchestrator | changed: [testbed-node-1] 2026-03-25 02:59:36.702595 | orchestrator | changed: [testbed-node-2] 2026-03-25 02:59:36.702605 | orchestrator | 2026-03-25 02:59:36.702615 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] *************** 2026-03-25 02:59:36.702624 | orchestrator | Wednesday 25 March 2026 02:58:29 +0000 (0:00:00.724) 0:04:08.917 ******* 2026-03-25 02:59:36.702633 | orchestrator | ok: [testbed-node-0] 2026-03-25 02:59:36.702643 | orchestrator | ok: [testbed-node-1] 2026-03-25 02:59:36.702651 | orchestrator | ok: [testbed-node-2] 2026-03-25 02:59:36.702660 | orchestrator | 2026-03-25 02:59:36.702669 | orchestrator | TASK [ceph-mon : Create admin keyring] ***************************************** 2026-03-25 02:59:36.702677 | orchestrator | Wednesday 25 March 2026 02:58:30 +0000 (0:00:01.031) 0:04:09.948 ******* 2026-03-25 02:59:36.702737 | orchestrator | changed: [testbed-node-0] 2026-03-25 02:59:36.702747 | orchestrator | 2026-03-25 02:59:36.702756 | orchestrator | TASK [ceph-mon : Slurp admin keyring] 
****************************************** 2026-03-25 02:59:36.702766 | orchestrator | Wednesday 25 March 2026 02:58:31 +0000 (0:00:01.301) 0:04:11.250 ******* 2026-03-25 02:59:36.702775 | orchestrator | ok: [testbed-node-0] 2026-03-25 02:59:36.702784 | orchestrator | 2026-03-25 02:59:36.702793 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ****************************** 2026-03-25 02:59:36.702802 | orchestrator | Wednesday 25 March 2026 02:58:32 +0000 (0:00:00.728) 0:04:11.978 ******* 2026-03-25 02:59:36.702811 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-03-25 02:59:36.702821 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-25 02:59:36.702830 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-25 02:59:36.702840 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-25 02:59:36.702849 | orchestrator | ok: [testbed-node-1] => (item=None) 2026-03-25 02:59:36.702859 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-25 02:59:36.702867 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-25 02:59:36.702877 | orchestrator | changed: [testbed-node-0 -> {{ item }}] 2026-03-25 02:59:36.702886 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-25 02:59:36.702918 | orchestrator | ok: [testbed-node-1 -> {{ item }}] 2026-03-25 02:59:36.702928 | orchestrator | ok: [testbed-node-2] => (item=None) 2026-03-25 02:59:36.702937 | orchestrator | ok: [testbed-node-2 -> {{ item }}] 2026-03-25 02:59:36.702946 | orchestrator | 2026-03-25 02:59:36.702955 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************ 2026-03-25 02:59:36.702964 | orchestrator | Wednesday 25 March 2026 02:58:35 +0000 (0:00:02.975) 0:04:14.954 ******* 2026-03-25 02:59:36.702973 
| orchestrator | changed: [testbed-node-0] 2026-03-25 02:59:36.702982 | orchestrator | changed: [testbed-node-1] 2026-03-25 02:59:36.703007 | orchestrator | changed: [testbed-node-2] 2026-03-25 02:59:36.703019 | orchestrator | 2026-03-25 02:59:36.703029 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] ************************** 2026-03-25 02:59:36.703038 | orchestrator | Wednesday 25 March 2026 02:58:36 +0000 (0:00:01.197) 0:04:16.151 ******* 2026-03-25 02:59:36.703049 | orchestrator | ok: [testbed-node-0] 2026-03-25 02:59:36.703059 | orchestrator | ok: [testbed-node-1] 2026-03-25 02:59:36.703069 | orchestrator | ok: [testbed-node-2] 2026-03-25 02:59:36.703079 | orchestrator | 2026-03-25 02:59:36.703088 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2026-03-25 02:59:36.703098 | orchestrator | Wednesday 25 March 2026 02:58:37 +0000 (0:00:00.650) 0:04:16.801 ******* 2026-03-25 02:59:36.703109 | orchestrator | ok: [testbed-node-0] 2026-03-25 02:59:36.703119 | orchestrator | ok: [testbed-node-1] 2026-03-25 02:59:36.703128 | orchestrator | ok: [testbed-node-2] 2026-03-25 02:59:36.703138 | orchestrator | 2026-03-25 02:59:36.703147 | orchestrator | TASK [ceph-mon : Generate initial monmap] ************************************** 2026-03-25 02:59:36.703157 | orchestrator | Wednesday 25 March 2026 02:58:37 +0000 (0:00:00.382) 0:04:17.184 ******* 2026-03-25 02:59:36.703167 | orchestrator | changed: [testbed-node-0] 2026-03-25 02:59:36.703177 | orchestrator | changed: [testbed-node-1] 2026-03-25 02:59:36.703189 | orchestrator | changed: [testbed-node-2] 2026-03-25 02:59:36.703204 | orchestrator | 2026-03-25 02:59:36.703226 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2026-03-25 02:59:36.703241 | orchestrator | Wednesday 25 March 2026 02:58:39 +0000 (0:00:01.515) 0:04:18.699 ******* 2026-03-25 02:59:36.703255 | orchestrator | changed: [testbed-node-0] 
2026-03-25 02:59:36.703269 | orchestrator | changed: [testbed-node-1] 2026-03-25 02:59:36.703283 | orchestrator | changed: [testbed-node-2] 2026-03-25 02:59:36.703297 | orchestrator | 2026-03-25 02:59:36.703310 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2026-03-25 02:59:36.703323 | orchestrator | Wednesday 25 March 2026 02:58:40 +0000 (0:00:01.336) 0:04:20.035 ******* 2026-03-25 02:59:36.703335 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:59:36.703350 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:59:36.703365 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:59:36.703418 | orchestrator | 2026-03-25 02:59:36.703459 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************ 2026-03-25 02:59:36.703470 | orchestrator | Wednesday 25 March 2026 02:58:41 +0000 (0:00:00.640) 0:04:20.676 ******* 2026-03-25 02:59:36.703478 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-25 02:59:36.703488 | orchestrator | 2026-03-25 02:59:36.703497 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2026-03-25 02:59:36.703506 | orchestrator | Wednesday 25 March 2026 02:58:41 +0000 (0:00:00.668) 0:04:21.344 ******* 2026-03-25 02:59:36.703514 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:59:36.703523 | orchestrator | skipping: [testbed-node-1] 2026-03-25 02:59:36.703532 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:59:36.703540 | orchestrator | 2026-03-25 02:59:36.703549 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2026-03-25 02:59:36.703577 | orchestrator | Wednesday 25 March 2026 02:58:42 +0000 (0:00:00.366) 0:04:21.711 ******* 2026-03-25 02:59:36.703586 | orchestrator | skipping: [testbed-node-0] 2026-03-25 02:59:36.703609 | orchestrator | skipping: 
[testbed-node-1] 2026-03-25 02:59:36.703618 | orchestrator | skipping: [testbed-node-2] 2026-03-25 02:59:36.703627 | orchestrator | 2026-03-25 02:59:36.703635 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2026-03-25 02:59:36.703644 | orchestrator | Wednesday 25 March 2026 02:58:42 +0000 (0:00:00.624) 0:04:22.335 ******* 2026-03-25 02:59:36.703653 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-25 02:59:36.703662 | orchestrator | 2026-03-25 02:59:36.703671 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] ***************** 2026-03-25 02:59:36.703680 | orchestrator | Wednesday 25 March 2026 02:58:43 +0000 (0:00:00.592) 0:04:22.928 ******* 2026-03-25 02:59:36.703792 | orchestrator | changed: [testbed-node-0] 2026-03-25 02:59:36.703801 | orchestrator | changed: [testbed-node-1] 2026-03-25 02:59:36.703809 | orchestrator | changed: [testbed-node-2] 2026-03-25 02:59:36.703818 | orchestrator | 2026-03-25 02:59:36.703826 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 2026-03-25 02:59:36.703835 | orchestrator | Wednesday 25 March 2026 02:58:45 +0000 (0:00:02.028) 0:04:24.957 ******* 2026-03-25 02:59:36.703844 | orchestrator | changed: [testbed-node-0] 2026-03-25 02:59:36.703853 | orchestrator | changed: [testbed-node-1] 2026-03-25 02:59:36.703861 | orchestrator | changed: [testbed-node-2] 2026-03-25 02:59:36.703870 | orchestrator | 2026-03-25 02:59:36.703878 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] *************************************** 2026-03-25 02:59:36.703887 | orchestrator | Wednesday 25 March 2026 02:58:46 +0000 (0:00:01.457) 0:04:26.414 ******* 2026-03-25 02:59:36.703895 | orchestrator | changed: [testbed-node-0] 2026-03-25 02:59:36.703904 | orchestrator | changed: [testbed-node-1] 2026-03-25 02:59:36.703912 | orchestrator | changed: 
[testbed-node-2] 2026-03-25 02:59:36.703921 | orchestrator | 2026-03-25 02:59:36.703929 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2026-03-25 02:59:36.703938 | orchestrator | Wednesday 25 March 2026 02:58:48 +0000 (0:00:01.743) 0:04:28.158 ******* 2026-03-25 02:59:36.703946 | orchestrator | changed: [testbed-node-0] 2026-03-25 02:59:36.703955 | orchestrator | changed: [testbed-node-1] 2026-03-25 02:59:36.703963 | orchestrator | changed: [testbed-node-2] 2026-03-25 02:59:36.703971 | orchestrator | 2026-03-25 02:59:36.703980 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] ********************************** 2026-03-25 02:59:36.703988 | orchestrator | Wednesday 25 March 2026 02:58:50 +0000 (0:00:01.867) 0:04:30.026 ******* 2026-03-25 02:59:36.703997 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-25 02:59:36.704006 | orchestrator | 2026-03-25 02:59:36.704014 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] ************* 2026-03-25 02:59:36.704023 | orchestrator | Wednesday 25 March 2026 02:58:51 +0000 (0:00:00.943) 0:04:30.969 ******* 2026-03-25 02:59:36.704039 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for the monitor(s) to form the quorum... (10 retries left). 
2026-03-25 02:59:36.704048 | orchestrator | ok: [testbed-node-0]
2026-03-25 02:59:36.704056 | orchestrator |
2026-03-25 02:59:36.704065 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] **************************************
2026-03-25 02:59:36.704073 | orchestrator | Wednesday 25 March 2026 02:59:13 +0000 (0:00:21.839) 0:04:52.809 *******
2026-03-25 02:59:36.704082 | orchestrator | ok: [testbed-node-0]
2026-03-25 02:59:36.704091 | orchestrator | ok: [testbed-node-1]
2026-03-25 02:59:36.704100 | orchestrator | ok: [testbed-node-2]
2026-03-25 02:59:36.704108 | orchestrator |
2026-03-25 02:59:36.704117 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] ***********************************
2026-03-25 02:59:36.704125 | orchestrator | Wednesday 25 March 2026 02:59:22 +0000 (0:00:08.932) 0:05:01.742 *******
2026-03-25 02:59:36.704134 | orchestrator | skipping: [testbed-node-0]
2026-03-25 02:59:36.704142 | orchestrator | skipping: [testbed-node-1]
2026-03-25 02:59:36.704151 | orchestrator | skipping: [testbed-node-2]
2026-03-25 02:59:36.704167 | orchestrator |
2026-03-25 02:59:36.704176 | orchestrator | TASK [ceph-mon : Set cluster configs] ******************************************
2026-03-25 02:59:36.704184 | orchestrator | Wednesday 25 March 2026 02:59:22 +0000 (0:00:00.378) 0:05:02.120 *******
2026-03-25 02:59:36.704195 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__af44fbbfe420c581c919be962b6edb1861836b12'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}])
2026-03-25 02:59:36.704207 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__af44fbbfe420c581c919be962b6edb1861836b12'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}])
2026-03-25 02:59:36.704217 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__af44fbbfe420c581c919be962b6edb1861836b12'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}])
2026-03-25 02:59:36.704236 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__af44fbbfe420c581c919be962b6edb1861836b12'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}])
2026-03-25 02:59:51.761852 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__af44fbbfe420c581c919be962b6edb1861836b12'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}])
2026-03-25 02:59:51.761969 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__af44fbbfe420c581c919be962b6edb1861836b12'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__af44fbbfe420c581c919be962b6edb1861836b12'}])
2026-03-25 02:59:51.761988 | orchestrator |
2026-03-25 02:59:51.762001 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-03-25 02:59:51.762013 | orchestrator | Wednesday 25 March 2026 02:59:36 +0000 (0:00:14.018) 0:05:16.139 *******
2026-03-25 02:59:51.762121 | orchestrator | skipping: [testbed-node-0]
2026-03-25 02:59:51.762134 | orchestrator | skipping: [testbed-node-1]
2026-03-25 02:59:51.762145 | orchestrator | skipping: [testbed-node-2]
2026-03-25 02:59:51.762156 | orchestrator |
2026-03-25 02:59:51.762167 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2026-03-25 02:59:51.762178 | orchestrator | Wednesday 25 March 2026 02:59:37 +0000 (0:00:00.418) 0:05:16.558 *******
2026-03-25 02:59:51.762190 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-25 02:59:51.762201 | orchestrator |
2026-03-25 02:59:51.762212 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
2026-03-25 02:59:51.762223 | orchestrator | Wednesday 25 March 2026 02:59:37 +0000 (0:00:00.868) 0:05:17.426 *******
2026-03-25 02:59:51.762246 | orchestrator | ok: [testbed-node-0]
2026-03-25 02:59:51.762262 | orchestrator | ok: [testbed-node-1]
2026-03-25 02:59:51.762282 | orchestrator | ok: [testbed-node-2]
2026-03-25 02:59:51.762306 | orchestrator |
2026-03-25 02:59:51.762330 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
2026-03-25 02:59:51.762384 | orchestrator | Wednesday 25 March 2026 02:59:38 +0000 (0:00:00.378) 0:05:17.804 *******
2026-03-25 02:59:51.762424 | orchestrator | skipping: [testbed-node-0]
2026-03-25 02:59:51.762444 | orchestrator | skipping: [testbed-node-1]
2026-03-25 02:59:51.762461 | orchestrator | skipping: [testbed-node-2]
2026-03-25 02:59:51.762478 | orchestrator |
2026-03-25 02:59:51.762495 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
2026-03-25 02:59:51.762547 | orchestrator | Wednesday 25 March 2026 02:59:38 +0000 (0:00:00.419) 0:05:18.223 *******
2026-03-25 02:59:51.762565 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-03-25 02:59:51.762584 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-03-25 02:59:51.762601 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-03-25 02:59:51.762620 | orchestrator | skipping: [testbed-node-0]
2026-03-25 02:59:51.762634 | orchestrator |
2026-03-25 02:59:51.762645 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
2026-03-25 02:59:51.762656 | orchestrator | Wednesday 25 March 2026 02:59:39 +0000 (0:00:01.065) 0:05:19.289 *******
2026-03-25 02:59:51.762667 | orchestrator | ok: [testbed-node-0]
2026-03-25 02:59:51.762678 | orchestrator | ok: [testbed-node-1]
2026-03-25 02:59:51.762738 | orchestrator | ok: [testbed-node-2]
2026-03-25 02:59:51.762759 | orchestrator |
2026-03-25 02:59:51.762786 | orchestrator | PLAY [Apply role ceph-mgr] *****************************************************
2026-03-25 02:59:51.762807 | orchestrator |
2026-03-25 02:59:51.762825 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-03-25 02:59:51.762844 | orchestrator | Wednesday 25 March 2026 02:59:40 +0000 (0:00:00.988) 0:05:20.278 *******
2026-03-25 02:59:51.762863 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-25 02:59:51.762881 | orchestrator |
2026-03-25 02:59:51.762899 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-03-25 02:59:51.762915 | orchestrator | Wednesday 25 March 2026 02:59:41 +0000 (0:00:00.603) 0:05:20.881 *******
2026-03-25 02:59:51.762932 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-25 02:59:51.762951 | orchestrator |
2026-03-25 02:59:51.762968 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-03-25 02:59:51.762986 | orchestrator | Wednesday 25 March 2026 02:59:42 +0000 (0:00:00.874) 0:05:21.756 *******
2026-03-25 02:59:51.763006 | orchestrator | ok: [testbed-node-0]
2026-03-25 02:59:51.763024 | orchestrator | ok: [testbed-node-1]
2026-03-25 02:59:51.763042 | orchestrator | ok: [testbed-node-2]
2026-03-25 02:59:51.763059 | orchestrator |
2026-03-25 02:59:51.763078 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-03-25 02:59:51.763095 | orchestrator | Wednesday 25 March 2026 02:59:43 +0000 (0:00:00.738) 0:05:22.495 *******
2026-03-25 02:59:51.763112 | orchestrator | skipping: [testbed-node-0]
2026-03-25 02:59:51.763131 | orchestrator | skipping: [testbed-node-1]
2026-03-25 02:59:51.763149 | orchestrator | skipping: [testbed-node-2]
2026-03-25 02:59:51.763166 | orchestrator |
2026-03-25 02:59:51.763185 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-03-25 02:59:51.763203 | orchestrator | Wednesday 25 March 2026 02:59:43 +0000 (0:00:00.362) 0:05:22.857 *******
2026-03-25 02:59:51.763222 | orchestrator | skipping: [testbed-node-0]
2026-03-25 02:59:51.763242 | orchestrator | skipping: [testbed-node-1]
2026-03-25 02:59:51.763262 | orchestrator | skipping: [testbed-node-2]
2026-03-25 02:59:51.763282 | orchestrator |
2026-03-25 02:59:51.763333 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-03-25 02:59:51.763355 | orchestrator | Wednesday 25 March 2026 02:59:44 +0000 (0:00:00.664) 0:05:23.522 *******
2026-03-25 02:59:51.763376 | orchestrator | skipping: [testbed-node-0]
2026-03-25 02:59:51.763390 | orchestrator | skipping: [testbed-node-1]
2026-03-25 02:59:51.763420 | orchestrator | skipping: [testbed-node-2]
2026-03-25 02:59:51.763439 | orchestrator |
2026-03-25 02:59:51.763457 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-03-25 02:59:51.763474 | orchestrator | Wednesday 25 March 2026 02:59:44 +0000 (0:00:00.392) 0:05:23.914 *******
2026-03-25 02:59:51.763485 | orchestrator | ok: [testbed-node-0]
2026-03-25 02:59:51.763496 | orchestrator | ok: [testbed-node-1]
2026-03-25 02:59:51.763507 | orchestrator | ok: [testbed-node-2]
2026-03-25 02:59:51.763517 | orchestrator |
2026-03-25 02:59:51.763528 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-03-25 02:59:51.763539 | orchestrator | Wednesday 25 March 2026 02:59:45 +0000 (0:00:00.842) 0:05:24.757 *******
2026-03-25 02:59:51.763549 | orchestrator | skipping: [testbed-node-0]
2026-03-25 02:59:51.763560 | orchestrator | skipping: [testbed-node-1]
2026-03-25 02:59:51.763570 | orchestrator | skipping: [testbed-node-2]
2026-03-25 02:59:51.763580 | orchestrator |
2026-03-25 02:59:51.763591 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-03-25 02:59:51.763602 | orchestrator | Wednesday 25 March 2026 02:59:45 +0000 (0:00:00.341) 0:05:25.099 *******
2026-03-25 02:59:51.763612 | orchestrator | skipping: [testbed-node-0]
2026-03-25 02:59:51.763623 | orchestrator | skipping: [testbed-node-1]
2026-03-25 02:59:51.763633 | orchestrator | skipping: [testbed-node-2]
2026-03-25 02:59:51.763644 | orchestrator |
2026-03-25 02:59:51.763654 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-03-25 02:59:51.763665 | orchestrator | Wednesday 25 March 2026 02:59:46 +0000 (0:00:00.669) 0:05:25.768 *******
2026-03-25 02:59:51.763675 | orchestrator | ok: [testbed-node-0]
2026-03-25 02:59:51.763715 | orchestrator | ok: [testbed-node-1]
2026-03-25 02:59:51.763732 | orchestrator | ok: [testbed-node-2]
2026-03-25 02:59:51.763745 | orchestrator |
2026-03-25 02:59:51.763757 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-03-25 02:59:51.763770 | orchestrator | Wednesday 25 March 2026 02:59:47 +0000 (0:00:00.781) 0:05:26.550 *******
2026-03-25 02:59:51.763782 | orchestrator | ok: [testbed-node-0]
2026-03-25 02:59:51.763794 | orchestrator | ok: [testbed-node-1]
2026-03-25 02:59:51.763805 | orchestrator | ok: [testbed-node-2]
2026-03-25 02:59:51.763818 | orchestrator |
2026-03-25 02:59:51.763829 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-03-25 02:59:51.763841 | orchestrator | Wednesday 25 March 2026 02:59:47 +0000 (0:00:00.756) 0:05:27.306 *******
2026-03-25 02:59:51.763853 | orchestrator | skipping: [testbed-node-0]
2026-03-25 02:59:51.763871 | orchestrator | skipping: [testbed-node-1]
2026-03-25 02:59:51.763899 | orchestrator | skipping: [testbed-node-2]
2026-03-25 02:59:51.763927 | orchestrator |
2026-03-25 02:59:51.763947 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-03-25 02:59:51.763965 | orchestrator | Wednesday 25 March 2026 02:59:48 +0000 (0:00:00.355) 0:05:27.662 *******
2026-03-25 02:59:51.763984 | orchestrator | ok: [testbed-node-0]
2026-03-25 02:59:51.764004 | orchestrator | ok: [testbed-node-1]
2026-03-25 02:59:51.764024 | orchestrator | ok: [testbed-node-2]
2026-03-25 02:59:51.764043 | orchestrator |
2026-03-25 02:59:51.764061 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-03-25 02:59:51.764072 | orchestrator | Wednesday 25 March 2026 02:59:48 +0000 (0:00:00.704) 0:05:28.366 *******
2026-03-25 02:59:51.764083 | orchestrator | skipping: [testbed-node-0]
2026-03-25 02:59:51.764093 | orchestrator | skipping: [testbed-node-1]
2026-03-25 02:59:51.764104 | orchestrator | skipping: [testbed-node-2]
2026-03-25 02:59:51.764114 | orchestrator |
2026-03-25 02:59:51.764125 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-03-25 02:59:51.764135 | orchestrator | Wednesday 25 March 2026 02:59:49 +0000 (0:00:00.351) 0:05:28.717 *******
2026-03-25 02:59:51.764152 | orchestrator | skipping: [testbed-node-0]
2026-03-25 02:59:51.764180 | orchestrator | skipping: [testbed-node-1]
2026-03-25 02:59:51.764233 | orchestrator | skipping: [testbed-node-2]
2026-03-25 02:59:51.764250 | orchestrator |
2026-03-25 02:59:51.764280 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-03-25 02:59:51.764297 | orchestrator | Wednesday 25 March 2026 02:59:49 +0000 (0:00:00.334) 0:05:29.052 *******
2026-03-25 02:59:51.764314 | orchestrator | skipping: [testbed-node-0]
2026-03-25 02:59:51.764332 | orchestrator | skipping: [testbed-node-1]
2026-03-25 02:59:51.764351 | orchestrator | skipping: [testbed-node-2]
2026-03-25 02:59:51.764370 | orchestrator |
2026-03-25 02:59:51.764388 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-03-25 02:59:51.764406 | orchestrator | Wednesday 25 March 2026 02:59:49 +0000 (0:00:00.365) 0:05:29.417 *******
2026-03-25 02:59:51.764422 | orchestrator | skipping: [testbed-node-0]
2026-03-25 02:59:51.764432 | orchestrator | skipping: [testbed-node-1]
2026-03-25 02:59:51.764443 | orchestrator | skipping: [testbed-node-2]
2026-03-25 02:59:51.764453 | orchestrator |
2026-03-25 02:59:51.764464 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-03-25 02:59:51.764475 | orchestrator | Wednesday 25 March 2026 02:59:50 +0000 (0:00:00.672) 0:05:30.089 *******
2026-03-25 02:59:51.764485 | orchestrator | skipping: [testbed-node-0]
2026-03-25 02:59:51.764496 | orchestrator | skipping: [testbed-node-1]
2026-03-25 02:59:51.764506 | orchestrator | skipping: [testbed-node-2]
2026-03-25 02:59:51.764517 | orchestrator |
2026-03-25 02:59:51.764527 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-03-25 02:59:51.764538 | orchestrator | Wednesday 25 March 2026 02:59:51 +0000 (0:00:00.372) 0:05:30.462 *******
2026-03-25 02:59:51.764548 | orchestrator | ok: [testbed-node-0]
2026-03-25 02:59:51.764559 | orchestrator | ok: [testbed-node-1]
2026-03-25 02:59:51.764570 | orchestrator | ok: [testbed-node-2]
2026-03-25 02:59:51.764580 | orchestrator |
2026-03-25 02:59:51.764591 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-03-25 02:59:51.764602 | orchestrator | Wednesday 25 March 2026 02:59:51 +0000 (0:00:00.378) 0:05:30.840 *******
2026-03-25 02:59:51.764612 | orchestrator | ok: [testbed-node-0]
2026-03-25 02:59:51.764623 | orchestrator | ok: [testbed-node-1]
2026-03-25 02:59:51.764634 | orchestrator | ok: [testbed-node-2]
2026-03-25 02:59:51.764644 | orchestrator |
2026-03-25 02:59:51.764655 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-03-25 02:59:51.764681 | orchestrator | Wednesday 25 March 2026 02:59:51 +0000 (0:00:00.362) 0:05:31.202 *******
2026-03-25 03:00:53.640185 | orchestrator | ok: [testbed-node-0]
2026-03-25 03:00:53.640309 | orchestrator | ok: [testbed-node-1]
2026-03-25 03:00:53.640325 | orchestrator | ok: [testbed-node-2]
2026-03-25 03:00:53.640337 | orchestrator |
2026-03-25 03:00:53.640350 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] **********************************
2026-03-25 03:00:53.640362 | orchestrator | Wednesday 25 March 2026 02:59:52 +0000 (0:00:00.994) 0:05:32.197 *******
2026-03-25 03:00:53.640374 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-03-25 03:00:53.640385 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-25 03:00:53.640397 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-25 03:00:53.640408 | orchestrator |
2026-03-25 03:00:53.640419 | orchestrator | TASK [ceph-mgr : Include common.yml] *******************************************
2026-03-25 03:00:53.640430 | orchestrator | Wednesday 25 March 2026 02:59:53 +0000 (0:00:00.759) 0:05:32.957 *******
2026-03-25 03:00:53.640441 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-25 03:00:53.640453 | orchestrator |
2026-03-25 03:00:53.640464 | orchestrator | TASK [ceph-mgr : Create mgr directory] *****************************************
2026-03-25 03:00:53.640475 | orchestrator | Wednesday 25 March 2026 02:59:54 +0000 (0:00:00.876) 0:05:33.834 *******
2026-03-25 03:00:53.640486 | orchestrator | changed: [testbed-node-0]
2026-03-25 03:00:53.640497 | orchestrator | changed: [testbed-node-1]
2026-03-25 03:00:53.640508 | orchestrator | changed: [testbed-node-2]
2026-03-25 03:00:53.640519 | orchestrator |
2026-03-25 03:00:53.640530 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] ***************************************
2026-03-25 03:00:53.640568 | orchestrator | Wednesday 25 March 2026 02:59:55 +0000 (0:00:00.766) 0:05:34.600 *******
2026-03-25 03:00:53.640580 | orchestrator | skipping: [testbed-node-0]
2026-03-25 03:00:53.640591 | orchestrator | skipping: [testbed-node-1]
2026-03-25 03:00:53.640602 | orchestrator | skipping: [testbed-node-2]
2026-03-25 03:00:53.640613 | orchestrator |
2026-03-25 03:00:53.640623 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] *********************
2026-03-25 03:00:53.640635 | orchestrator | Wednesday 25 March 2026 02:59:55 +0000 (0:00:00.385) 0:05:34.986 *******
2026-03-25 03:00:53.640646 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-03-25 03:00:53.640658 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-03-25 03:00:53.640669 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-03-25 03:00:53.640680 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}]
2026-03-25 03:00:53.640691 | orchestrator |
2026-03-25 03:00:53.640752 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] *******************************************
2026-03-25 03:00:53.640776 | orchestrator | Wednesday 25 March 2026 03:00:05 +0000 (0:00:10.107) 0:05:45.094 *******
2026-03-25 03:00:53.640797 | orchestrator | ok: [testbed-node-0]
2026-03-25 03:00:53.640816 | orchestrator | ok: [testbed-node-1]
2026-03-25 03:00:53.640834 | orchestrator | ok: [testbed-node-2]
2026-03-25 03:00:53.640854 | orchestrator |
2026-03-25 03:00:53.640873 | orchestrator | TASK [ceph-mgr : Get keys from monitors] ***************************************
2026-03-25 03:00:53.640894 | orchestrator | Wednesday 25 March 2026 03:00:06 +0000 (0:00:00.418) 0:05:45.512 *******
2026-03-25 03:00:53.640916 | orchestrator | skipping: [testbed-node-0] => (item=None)
2026-03-25 03:00:53.640938 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-03-25 03:00:53.640958 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-03-25 03:00:53.640974 | orchestrator | ok: [testbed-node-0] => (item=None)
2026-03-25 03:00:53.640987 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-25 03:00:53.641021 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-25 03:00:53.641034 | orchestrator |
2026-03-25 03:00:53.641046 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] ***********************************
2026-03-25 03:00:53.641058 | orchestrator | Wednesday 25 March 2026 03:00:08 +0000 (0:00:02.513) 0:05:48.026 *******
2026-03-25 03:00:53.641070 | orchestrator | skipping: [testbed-node-0] => (item=None)
2026-03-25 03:00:53.641084 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-03-25 03:00:53.641094 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-03-25 03:00:53.641105 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-03-25 03:00:53.641116 | orchestrator | changed: [testbed-node-1] => (item=None)
2026-03-25 03:00:53.641126 | orchestrator | changed: [testbed-node-2] => (item=None)
2026-03-25 03:00:53.641137 | orchestrator |
2026-03-25 03:00:53.641148 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] **************************************
2026-03-25 03:00:53.641159 | orchestrator | Wednesday 25 March 2026 03:00:09 +0000 (0:00:01.247) 0:05:49.274 *******
2026-03-25 03:00:53.641170 | orchestrator | ok: [testbed-node-0]
2026-03-25 03:00:53.641181 | orchestrator | ok: [testbed-node-1]
2026-03-25 03:00:53.641192 | orchestrator | ok: [testbed-node-2]
2026-03-25 03:00:53.641203 | orchestrator |
2026-03-25 03:00:53.641214 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] *****************
2026-03-25 03:00:53.641224 | orchestrator | Wednesday 25 March 2026 03:00:10 +0000 (0:00:00.701) 0:05:49.975 *******
2026-03-25 03:00:53.641235 | orchestrator | skipping: [testbed-node-0]
2026-03-25 03:00:53.641245 | orchestrator | skipping: [testbed-node-1]
2026-03-25 03:00:53.641256 | orchestrator | skipping: [testbed-node-2]
2026-03-25 03:00:53.641267 | orchestrator |
2026-03-25 03:00:53.641278 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************
2026-03-25 03:00:53.641288 | orchestrator | Wednesday 25 March 2026 03:00:10 +0000 (0:00:00.344) 0:05:50.320 *******
2026-03-25 03:00:53.641320 | orchestrator | skipping: [testbed-node-0]
2026-03-25 03:00:53.641348 | orchestrator | skipping: [testbed-node-1]
2026-03-25 03:00:53.641371 | orchestrator | skipping: [testbed-node-2]
2026-03-25 03:00:53.641389 | orchestrator |
2026-03-25 03:00:53.641408 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] ****************************************
2026-03-25 03:00:53.641427 | orchestrator | Wednesday 25 March 2026 03:00:11 +0000 (0:00:00.633) 0:05:50.953 *******
2026-03-25 03:00:53.641444 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-25 03:00:53.641462 | orchestrator |
2026-03-25 03:00:53.641508 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] *************
2026-03-25 03:00:53.641527 | orchestrator | Wednesday 25 March 2026 03:00:12 +0000 (0:00:00.669) 0:05:51.623 *******
2026-03-25 03:00:53.641544 | orchestrator | skipping: [testbed-node-0]
2026-03-25 03:00:53.641561 | orchestrator | skipping: [testbed-node-1]
2026-03-25 03:00:53.641581 | orchestrator | skipping: [testbed-node-2]
2026-03-25 03:00:53.641601 | orchestrator |
2026-03-25 03:00:53.641621 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] ***********************
2026-03-25 03:00:53.641640 | orchestrator | Wednesday 25 March 2026 03:00:12 +0000 (0:00:00.380) 0:05:52.004 *******
2026-03-25 03:00:53.641660 | orchestrator | skipping: [testbed-node-0]
2026-03-25 03:00:53.641680 | orchestrator | skipping: [testbed-node-1]
2026-03-25 03:00:53.641701 | orchestrator | skipping: [testbed-node-2]
2026-03-25 03:00:53.641748 | orchestrator |
2026-03-25 03:00:53.641769 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************
2026-03-25 03:00:53.641788 | orchestrator | Wednesday 25 March 2026 03:00:13 +0000 (0:00:00.670) 0:05:52.675 *******
2026-03-25 03:00:53.641806 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-25 03:00:53.641825 | orchestrator |
2026-03-25 03:00:53.641839 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] ***********************************
2026-03-25 03:00:53.641850 | orchestrator | Wednesday 25 March 2026 03:00:13 +0000 (0:00:00.693) 0:05:53.369 *******
2026-03-25 03:00:53.641860 | orchestrator | changed: [testbed-node-0]
2026-03-25 03:00:53.641871 | orchestrator | changed: [testbed-node-1]
2026-03-25 03:00:53.641881 | orchestrator | changed: [testbed-node-2]
2026-03-25 03:00:53.641892 | orchestrator |
2026-03-25 03:00:53.641903 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************
2026-03-25 03:00:53.641919 | orchestrator | Wednesday 25 March 2026 03:00:15 +0000 (0:00:01.233) 0:05:54.602 *******
2026-03-25 03:00:53.641944 | orchestrator | changed: [testbed-node-0]
2026-03-25 03:00:53.641969 | orchestrator | changed: [testbed-node-1]
2026-03-25 03:00:53.641986 | orchestrator | changed: [testbed-node-2]
2026-03-25 03:00:53.642003 | orchestrator |
2026-03-25 03:00:53.642228 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] ***************************************
2026-03-25 03:00:53.642256 | orchestrator | Wednesday 25 March 2026 03:00:16 +0000 (0:00:01.508) 0:05:56.110 *******
2026-03-25 03:00:53.642268 | orchestrator | changed: [testbed-node-0]
2026-03-25 03:00:53.642279 | orchestrator | changed: [testbed-node-1]
2026-03-25 03:00:53.642290 | orchestrator | changed: [testbed-node-2]
2026-03-25 03:00:53.642301 | orchestrator |
2026-03-25 03:00:53.642312 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ********************************************
2026-03-25 03:00:53.642336 | orchestrator | Wednesday 25 March 2026 03:00:18 +0000 (0:00:01.827) 0:05:57.938 *******
2026-03-25 03:00:53.642347 | orchestrator | changed: [testbed-node-0]
2026-03-25 03:00:53.642358 | orchestrator | changed: [testbed-node-1]
2026-03-25 03:00:53.642369 | orchestrator | changed: [testbed-node-2]
2026-03-25 03:00:53.642379 | orchestrator |
2026-03-25 03:00:53.642390 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] **************************************
2026-03-25 03:00:53.642401 | orchestrator | Wednesday 25 March 2026 03:00:20 +0000 (0:00:02.018) 0:05:59.957 *******
2026-03-25 03:00:53.642411 | orchestrator | skipping: [testbed-node-0]
2026-03-25 03:00:53.642422 | orchestrator | skipping: [testbed-node-1]
2026-03-25 03:00:53.642433 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2
2026-03-25 03:00:53.642456 | orchestrator |
2026-03-25 03:00:53.642467 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************
2026-03-25 03:00:53.642477 | orchestrator | Wednesday 25 March 2026 03:00:21 +0000 (0:00:00.763) 0:06:00.720 *******
2026-03-25 03:00:53.642488 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left).
2026-03-25 03:00:53.642500 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left).
2026-03-25 03:00:53.642510 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left).
2026-03-25 03:00:53.642521 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left).
2026-03-25 03:00:53.642532 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-03-25 03:00:53.642542 | orchestrator |
2026-03-25 03:00:53.642553 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] ****************************
2026-03-25 03:00:53.642564 | orchestrator | Wednesday 25 March 2026 03:00:45 +0000 (0:00:24.202) 0:06:24.923 *******
2026-03-25 03:00:53.642574 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-03-25 03:00:53.642585 | orchestrator |
2026-03-25 03:00:53.642596 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] ***
2026-03-25 03:00:53.642606 | orchestrator | Wednesday 25 March 2026 03:00:46 +0000 (0:00:01.204) 0:06:26.127 *******
2026-03-25 03:00:53.642617 | orchestrator | ok: [testbed-node-2]
2026-03-25 03:00:53.642628 | orchestrator |
2026-03-25 03:00:53.642639 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] **************************
2026-03-25 03:00:53.642650 | orchestrator | Wednesday 25 March 2026 03:00:47 +0000 (0:00:00.400) 0:06:26.528 *******
2026-03-25 03:00:53.642660 | orchestrator | ok: [testbed-node-2]
2026-03-25 03:00:53.642671 | orchestrator |
2026-03-25 03:00:53.642682 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] *****************************
2026-03-25 03:00:53.642692 | orchestrator | Wednesday 25 March 2026 03:00:47 +0000 (0:00:00.171) 0:06:26.700 *******
2026-03-25 03:00:53.642703 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat)
2026-03-25 03:00:53.642800 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs)
2026-03-25 03:00:53.642812 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful)
2026-03-25 03:00:53.642823 | orchestrator |
2026-03-25 03:00:53.642834 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] **************************************
2026-03-25 03:00:53.642862 | orchestrator | Wednesday 25 March 2026 03:00:53 +0000 (0:00:06.378) 0:06:33.079 *******
2026-03-25 03:01:16.303220 | orchestrator | skipping: [testbed-node-2] => (item=balancer)
2026-03-25 03:01:16.303351 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard)
2026-03-25 03:01:16.303365 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus)
2026-03-25 03:01:16.303372 | orchestrator | skipping: [testbed-node-2] => (item=status)
2026-03-25 03:01:16.303379 | orchestrator |
2026-03-25 03:01:16.303387 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-03-25 03:01:16.303394 | orchestrator | Wednesday 25 March 2026 03:00:58 +0000 (0:00:05.099) 0:06:38.178 *******
2026-03-25 03:01:16.303400 | orchestrator | changed: [testbed-node-0]
2026-03-25 03:01:16.303407 | orchestrator | changed: [testbed-node-1]
2026-03-25 03:01:16.303413 | orchestrator | changed: [testbed-node-2]
2026-03-25 03:01:16.303420 | orchestrator |
2026-03-25 03:01:16.303426 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
2026-03-25 03:01:16.303433 | orchestrator | Wednesday 25 March 2026 03:00:59 +0000 (0:00:00.746) 0:06:38.924 *******
2026-03-25 03:01:16.303439 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-25 03:01:16.303445 | orchestrator |
2026-03-25 03:01:16.303477 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
2026-03-25 03:01:16.303483 | orchestrator | Wednesday 25 March 2026 03:01:00 +0000 (0:00:00.609) 0:06:39.533 *******
2026-03-25 03:01:16.303489 | orchestrator | ok: [testbed-node-0]
2026-03-25 03:01:16.303496 | orchestrator | ok: [testbed-node-1]
2026-03-25 03:01:16.303502 | orchestrator | ok: [testbed-node-2]
2026-03-25 03:01:16.303508 | orchestrator |
2026-03-25 03:01:16.303514 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
2026-03-25 03:01:16.303520 | orchestrator | Wednesday 25 March 2026 03:01:00 +0000 (0:00:00.687) 0:06:40.221 *******
2026-03-25 03:01:16.303526 | orchestrator | changed: [testbed-node-0]
2026-03-25 03:01:16.303532 | orchestrator | changed: [testbed-node-1]
2026-03-25 03:01:16.303538 | orchestrator | changed: [testbed-node-2]
2026-03-25 03:01:16.303544 | orchestrator |
2026-03-25 03:01:16.303550 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
2026-03-25 03:01:16.303556 | orchestrator | Wednesday 25 March 2026 03:01:02 +0000 (0:00:01.259) 0:06:41.480 *******
2026-03-25 03:01:16.303562 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-03-25 03:01:16.303569 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-03-25 03:01:16.303587 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-03-25 03:01:16.303593 | orchestrator | skipping: [testbed-node-0]
2026-03-25 03:01:16.303599 | orchestrator |
2026-03-25 03:01:16.303605 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
2026-03-25 03:01:16.303611 | orchestrator | Wednesday 25 March 2026 03:01:02 +0000 (0:00:00.713) 0:06:42.194 *******
2026-03-25 03:01:16.303617 | orchestrator | ok: [testbed-node-0]
2026-03-25 03:01:16.303623 | orchestrator | ok: [testbed-node-1]
2026-03-25 03:01:16.303629 | orchestrator | ok: [testbed-node-2]
2026-03-25 03:01:16.303636 | orchestrator |
2026-03-25 03:01:16.303641 | orchestrator | PLAY [Apply role ceph-osd] *****************************************************
2026-03-25 03:01:16.303648 | orchestrator |
2026-03-25 03:01:16.303654 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-03-25 03:01:16.303661 | orchestrator | Wednesday 25 March 2026 03:01:03 +0000 (0:00:00.604) 0:06:42.798 *******
2026-03-25 03:01:16.303672 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-25 03:01:16.303684 | orchestrator |
2026-03-25 03:01:16.303695 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-03-25 03:01:16.303706 | orchestrator | Wednesday 25 March 2026 03:01:04 +0000 (0:00:00.989) 0:06:43.788 *******
2026-03-25 03:01:16.303716 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-25 03:01:16.303783 | orchestrator |
2026-03-25 03:01:16.303794 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-03-25 03:01:16.303802 | orchestrator | Wednesday 25 March 2026 03:01:05 +0000 (0:00:00.859) 0:06:44.647 *******
2026-03-25 03:01:16.303809 | orchestrator | skipping: [testbed-node-3]
2026-03-25 03:01:16.303816 | orchestrator | skipping: [testbed-node-4]
2026-03-25 03:01:16.303823 | orchestrator | skipping: [testbed-node-5]
2026-03-25 03:01:16.303830 | orchestrator |
2026-03-25 03:01:16.303837 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-03-25 03:01:16.303844 | orchestrator | Wednesday 25 March 2026 03:01:05 +0000 (0:00:00.424) 0:06:45.071 *******
2026-03-25 03:01:16.303851 | orchestrator | ok: [testbed-node-3]
2026-03-25 03:01:16.303858 | orchestrator | ok: [testbed-node-4]
2026-03-25 03:01:16.303865 | orchestrator | ok: [testbed-node-5]
2026-03-25 03:01:16.303871 | orchestrator |
2026-03-25 03:01:16.303879 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-03-25 03:01:16.303886 | orchestrator | Wednesday 25 March 2026 03:01:06 +0000 (0:00:00.724) 0:06:45.796 *******
2026-03-25 03:01:16.303893 | orchestrator | ok: [testbed-node-3]
2026-03-25 03:01:16.303900 | orchestrator | ok: [testbed-node-4]
2026-03-25 03:01:16.303913 | orchestrator | ok: [testbed-node-5]
2026-03-25 03:01:16.303921 | orchestrator |
2026-03-25 03:01:16.303928 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-03-25 03:01:16.303935 | orchestrator | Wednesday 25 March 2026 03:01:07 +0000 (0:00:00.747) 0:06:46.543 *******
2026-03-25 03:01:16.303942 | orchestrator | ok: [testbed-node-3]
2026-03-25 03:01:16.303949 | orchestrator | ok: [testbed-node-4]
2026-03-25 03:01:16.303955 | orchestrator | ok: [testbed-node-5]
2026-03-25 03:01:16.303962 | orchestrator |
2026-03-25 03:01:16.303970 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-03-25 03:01:16.303977 | orchestrator | Wednesday 25 March 2026 03:01:08 +0000 (0:00:01.029) 0:06:47.573 *******
2026-03-25 03:01:16.303984 | orchestrator | skipping: [testbed-node-3]
2026-03-25 03:01:16.303991 | orchestrator | skipping: [testbed-node-4]
2026-03-25 03:01:16.303999 | orchestrator | skipping: [testbed-node-5]
2026-03-25 03:01:16.304005 | orchestrator |
2026-03-25 03:01:16.304025 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-03-25 03:01:16.304032 | orchestrator | Wednesday 25 March 2026 03:01:08 +0000 (0:00:00.387) 0:06:47.960 *******
2026-03-25 03:01:16.304038 | orchestrator | skipping: [testbed-node-3]
2026-03-25 03:01:16.304044 | orchestrator | skipping: [testbed-node-4]
2026-03-25 03:01:16.304050 | orchestrator | skipping: [testbed-node-5]
2026-03-25 03:01:16.304056 | orchestrator |
2026-03-25 03:01:16.304062 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-03-25 03:01:16.304068 | orchestrator | Wednesday 25 March 2026 03:01:08 +0000 (0:00:00.359) 0:06:48.320 *******
2026-03-25 03:01:16.304074 | orchestrator | skipping: [testbed-node-3]
2026-03-25 03:01:16.304080 | orchestrator | skipping: [testbed-node-4]
2026-03-25 03:01:16.304086 | orchestrator | skipping: [testbed-node-5]
2026-03-25 03:01:16.304092 | orchestrator |
2026-03-25 03:01:16.304100 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-03-25 03:01:16.304110 | orchestrator | Wednesday 25 March 2026 03:01:09 +0000 (0:00:00.375) 0:06:48.696 *******
2026-03-25 03:01:16.304121 | orchestrator | ok: [testbed-node-3]
2026-03-25 03:01:16.304132 | orchestrator | ok: [testbed-node-4]
2026-03-25 03:01:16.304142 | orchestrator | ok: [testbed-node-5]
2026-03-25 03:01:16.304152 | orchestrator |
2026-03-25 03:01:16.304163 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-03-25 03:01:16.304171 | orchestrator | Wednesday 25 March 2026 03:01:10 +0000 (0:00:01.025) 0:06:49.722 *******
2026-03-25 03:01:16.304177 | orchestrator | ok: [testbed-node-3]
2026-03-25
03:01:16.304183 | orchestrator | ok: [testbed-node-4] 2026-03-25 03:01:16.304189 | orchestrator | ok: [testbed-node-5] 2026-03-25 03:01:16.304195 | orchestrator | 2026-03-25 03:01:16.304202 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-25 03:01:16.304208 | orchestrator | Wednesday 25 March 2026 03:01:10 +0000 (0:00:00.727) 0:06:50.450 ******* 2026-03-25 03:01:16.304214 | orchestrator | skipping: [testbed-node-3] 2026-03-25 03:01:16.304220 | orchestrator | skipping: [testbed-node-4] 2026-03-25 03:01:16.304226 | orchestrator | skipping: [testbed-node-5] 2026-03-25 03:01:16.304232 | orchestrator | 2026-03-25 03:01:16.304238 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-25 03:01:16.304244 | orchestrator | Wednesday 25 March 2026 03:01:11 +0000 (0:00:00.370) 0:06:50.820 ******* 2026-03-25 03:01:16.304250 | orchestrator | skipping: [testbed-node-3] 2026-03-25 03:01:16.304256 | orchestrator | skipping: [testbed-node-4] 2026-03-25 03:01:16.304262 | orchestrator | skipping: [testbed-node-5] 2026-03-25 03:01:16.304268 | orchestrator | 2026-03-25 03:01:16.304274 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-25 03:01:16.304284 | orchestrator | Wednesday 25 March 2026 03:01:11 +0000 (0:00:00.366) 0:06:51.187 ******* 2026-03-25 03:01:16.304290 | orchestrator | ok: [testbed-node-3] 2026-03-25 03:01:16.304296 | orchestrator | ok: [testbed-node-4] 2026-03-25 03:01:16.304302 | orchestrator | ok: [testbed-node-5] 2026-03-25 03:01:16.304308 | orchestrator | 2026-03-25 03:01:16.304320 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-25 03:01:16.304326 | orchestrator | Wednesday 25 March 2026 03:01:12 +0000 (0:00:00.693) 0:06:51.880 ******* 2026-03-25 03:01:16.304332 | orchestrator | ok: [testbed-node-3] 2026-03-25 03:01:16.304338 | orchestrator | ok: 
[testbed-node-4] 2026-03-25 03:01:16.304344 | orchestrator | ok: [testbed-node-5] 2026-03-25 03:01:16.304350 | orchestrator | 2026-03-25 03:01:16.304356 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-25 03:01:16.304363 | orchestrator | Wednesday 25 March 2026 03:01:12 +0000 (0:00:00.390) 0:06:52.271 ******* 2026-03-25 03:01:16.304369 | orchestrator | ok: [testbed-node-3] 2026-03-25 03:01:16.304375 | orchestrator | ok: [testbed-node-4] 2026-03-25 03:01:16.304381 | orchestrator | ok: [testbed-node-5] 2026-03-25 03:01:16.304387 | orchestrator | 2026-03-25 03:01:16.304393 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-25 03:01:16.304399 | orchestrator | Wednesday 25 March 2026 03:01:13 +0000 (0:00:00.383) 0:06:52.655 ******* 2026-03-25 03:01:16.304405 | orchestrator | skipping: [testbed-node-3] 2026-03-25 03:01:16.304411 | orchestrator | skipping: [testbed-node-4] 2026-03-25 03:01:16.304417 | orchestrator | skipping: [testbed-node-5] 2026-03-25 03:01:16.304423 | orchestrator | 2026-03-25 03:01:16.304429 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-25 03:01:16.304435 | orchestrator | Wednesday 25 March 2026 03:01:13 +0000 (0:00:00.331) 0:06:52.987 ******* 2026-03-25 03:01:16.304441 | orchestrator | skipping: [testbed-node-3] 2026-03-25 03:01:16.304447 | orchestrator | skipping: [testbed-node-4] 2026-03-25 03:01:16.304453 | orchestrator | skipping: [testbed-node-5] 2026-03-25 03:01:16.304459 | orchestrator | 2026-03-25 03:01:16.304465 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-25 03:01:16.304472 | orchestrator | Wednesday 25 March 2026 03:01:14 +0000 (0:00:00.699) 0:06:53.686 ******* 2026-03-25 03:01:16.304478 | orchestrator | skipping: [testbed-node-3] 2026-03-25 03:01:16.304484 | orchestrator | skipping: [testbed-node-4] 2026-03-25 
03:01:16.304490 | orchestrator | skipping: [testbed-node-5] 2026-03-25 03:01:16.304496 | orchestrator | 2026-03-25 03:01:16.304502 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-25 03:01:16.304508 | orchestrator | Wednesday 25 March 2026 03:01:14 +0000 (0:00:00.369) 0:06:54.056 ******* 2026-03-25 03:01:16.304514 | orchestrator | ok: [testbed-node-3] 2026-03-25 03:01:16.304520 | orchestrator | ok: [testbed-node-4] 2026-03-25 03:01:16.304526 | orchestrator | ok: [testbed-node-5] 2026-03-25 03:01:16.304532 | orchestrator | 2026-03-25 03:01:16.304538 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-25 03:01:16.304544 | orchestrator | Wednesday 25 March 2026 03:01:14 +0000 (0:00:00.362) 0:06:54.418 ******* 2026-03-25 03:01:16.304550 | orchestrator | ok: [testbed-node-3] 2026-03-25 03:01:16.304556 | orchestrator | ok: [testbed-node-4] 2026-03-25 03:01:16.304562 | orchestrator | ok: [testbed-node-5] 2026-03-25 03:01:16.304568 | orchestrator | 2026-03-25 03:01:16.304574 | orchestrator | TASK [ceph-osd : Set_fact add_osd] ********************************************* 2026-03-25 03:01:16.304580 | orchestrator | Wednesday 25 March 2026 03:01:15 +0000 (0:00:00.943) 0:06:55.362 ******* 2026-03-25 03:01:16.304586 | orchestrator | ok: [testbed-node-3] 2026-03-25 03:01:16.304592 | orchestrator | ok: [testbed-node-4] 2026-03-25 03:01:16.304598 | orchestrator | ok: [testbed-node-5] 2026-03-25 03:01:16.304604 | orchestrator | 2026-03-25 03:01:16.304611 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] ********************************** 2026-03-25 03:01:16.304621 | orchestrator | Wednesday 25 March 2026 03:01:16 +0000 (0:00:00.386) 0:06:55.749 ******* 2026-03-25 03:02:19.814004 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-25 03:02:19.814195 | orchestrator | ok: [testbed-node-3 -> 
testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-25 03:02:19.814207 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-25 03:02:19.814242 | orchestrator | 2026-03-25 03:02:19.814253 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ****************************** 2026-03-25 03:02:19.814263 | orchestrator | Wednesday 25 March 2026 03:01:17 +0000 (0:00:00.769) 0:06:56.519 ******* 2026-03-25 03:02:19.814273 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-25 03:02:19.814282 | orchestrator | 2026-03-25 03:02:19.814291 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] ********************************** 2026-03-25 03:02:19.814300 | orchestrator | Wednesday 25 March 2026 03:01:17 +0000 (0:00:00.628) 0:06:57.147 ******* 2026-03-25 03:02:19.814309 | orchestrator | skipping: [testbed-node-3] 2026-03-25 03:02:19.814319 | orchestrator | skipping: [testbed-node-4] 2026-03-25 03:02:19.814328 | orchestrator | skipping: [testbed-node-5] 2026-03-25 03:02:19.814336 | orchestrator | 2026-03-25 03:02:19.814345 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] ********************************* 2026-03-25 03:02:19.814354 | orchestrator | Wednesday 25 March 2026 03:01:18 +0000 (0:00:00.700) 0:06:57.848 ******* 2026-03-25 03:02:19.814363 | orchestrator | skipping: [testbed-node-3] 2026-03-25 03:02:19.814371 | orchestrator | skipping: [testbed-node-4] 2026-03-25 03:02:19.814380 | orchestrator | skipping: [testbed-node-5] 2026-03-25 03:02:19.814388 | orchestrator | 2026-03-25 03:02:19.814397 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] ******************************* 2026-03-25 03:02:19.814406 | orchestrator | Wednesday 25 March 2026 03:01:18 +0000 (0:00:00.395) 0:06:58.244 ******* 2026-03-25 03:02:19.814415 | orchestrator | ok: [testbed-node-3] 2026-03-25 03:02:19.814424 | 
orchestrator | ok: [testbed-node-4] 2026-03-25 03:02:19.814433 | orchestrator | ok: [testbed-node-5] 2026-03-25 03:02:19.814442 | orchestrator | 2026-03-25 03:02:19.814451 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] ********************************** 2026-03-25 03:02:19.814459 | orchestrator | Wednesday 25 March 2026 03:01:19 +0000 (0:00:00.678) 0:06:58.923 ******* 2026-03-25 03:02:19.814468 | orchestrator | ok: [testbed-node-3] 2026-03-25 03:02:19.814477 | orchestrator | ok: [testbed-node-4] 2026-03-25 03:02:19.814485 | orchestrator | ok: [testbed-node-5] 2026-03-25 03:02:19.814494 | orchestrator | 2026-03-25 03:02:19.814516 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ******************************** 2026-03-25 03:02:19.814525 | orchestrator | Wednesday 25 March 2026 03:01:20 +0000 (0:00:00.684) 0:06:59.607 ******* 2026-03-25 03:02:19.814534 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-03-25 03:02:19.814544 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-03-25 03:02:19.814553 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-03-25 03:02:19.814561 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-03-25 03:02:19.814571 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-03-25 03:02:19.814580 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-03-25 03:02:19.814588 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-03-25 03:02:19.814597 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-03-25 03:02:19.814606 | orchestrator | changed: [testbed-node-3] => (item={'name': 
'vm.swappiness', 'value': 10}) 2026-03-25 03:02:19.814614 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-03-25 03:02:19.814623 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-03-25 03:02:19.814631 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-03-25 03:02:19.814640 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-03-25 03:02:19.814649 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-03-25 03:02:19.814664 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-03-25 03:02:19.814673 | orchestrator | 2026-03-25 03:02:19.814682 | orchestrator | TASK [ceph-osd : Install dependencies] ***************************************** 2026-03-25 03:02:19.814690 | orchestrator | Wednesday 25 March 2026 03:01:24 +0000 (0:00:04.045) 0:07:03.653 ******* 2026-03-25 03:02:19.814699 | orchestrator | skipping: [testbed-node-3] 2026-03-25 03:02:19.814708 | orchestrator | skipping: [testbed-node-4] 2026-03-25 03:02:19.814716 | orchestrator | skipping: [testbed-node-5] 2026-03-25 03:02:19.814725 | orchestrator | 2026-03-25 03:02:19.814734 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] ************************************* 2026-03-25 03:02:19.814770 | orchestrator | Wednesday 25 March 2026 03:01:24 +0000 (0:00:00.326) 0:07:03.980 ******* 2026-03-25 03:02:19.814786 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-25 03:02:19.814802 | orchestrator | 2026-03-25 03:02:19.814813 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] ********************* 2026-03-25 03:02:19.814822 | orchestrator | Wednesday 25 March 2026 03:01:25 +0000 (0:00:00.862) 0:07:04.843 
******* 2026-03-25 03:02:19.814831 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/) 2026-03-25 03:02:19.814840 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/) 2026-03-25 03:02:19.814865 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/) 2026-03-25 03:02:19.814875 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/) 2026-03-25 03:02:19.814884 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/) 2026-03-25 03:02:19.814892 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/) 2026-03-25 03:02:19.814901 | orchestrator | 2026-03-25 03:02:19.814910 | orchestrator | TASK [ceph-osd : Get keys from monitors] *************************************** 2026-03-25 03:02:19.814919 | orchestrator | Wednesday 25 March 2026 03:01:26 +0000 (0:00:01.046) 0:07:05.889 ******* 2026-03-25 03:02:19.814927 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-25 03:02:19.814936 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-03-25 03:02:19.814945 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-25 03:02:19.814953 | orchestrator | 2026-03-25 03:02:19.814962 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] *********************************** 2026-03-25 03:02:19.814970 | orchestrator | Wednesday 25 March 2026 03:01:28 +0000 (0:00:01.982) 0:07:07.871 ******* 2026-03-25 03:02:19.814979 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-03-25 03:02:19.814988 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-03-25 03:02:19.814996 | orchestrator | changed: [testbed-node-3] 2026-03-25 03:02:19.815005 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-03-25 03:02:19.815014 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-03-25 03:02:19.815023 | orchestrator | changed: [testbed-node-4] 2026-03-25 03:02:19.815031 | 
orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-25 03:02:19.815040 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-03-25 03:02:19.815048 | orchestrator | changed: [testbed-node-5] 2026-03-25 03:02:19.815057 | orchestrator | 2026-03-25 03:02:19.815066 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************ 2026-03-25 03:02:19.815074 | orchestrator | Wednesday 25 March 2026 03:01:29 +0000 (0:00:01.192) 0:07:09.064 ******* 2026-03-25 03:02:19.815083 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-25 03:02:19.815092 | orchestrator | 2026-03-25 03:02:19.815101 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ****************************** 2026-03-25 03:02:19.815109 | orchestrator | Wednesday 25 March 2026 03:01:31 +0000 (0:00:02.023) 0:07:11.087 ******* 2026-03-25 03:02:19.815124 | orchestrator | included: /ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-25 03:02:19.815133 | orchestrator | 2026-03-25 03:02:19.815148 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] ******************************* 2026-03-25 03:02:19.815157 | orchestrator | Wednesday 25 March 2026 03:01:32 +0000 (0:00:00.941) 0:07:12.028 ******* 2026-03-25 03:02:19.815167 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-f303e98e-56ea-50bc-9e1c-3ccda4672060', 'data_vg': 'ceph-f303e98e-56ea-50bc-9e1c-3ccda4672060'}) 2026-03-25 03:02:19.815177 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-82366886-ea97-5dba-b5cd-187414e0593f', 'data_vg': 'ceph-82366886-ea97-5dba-b5cd-187414e0593f'}) 2026-03-25 03:02:19.815186 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-a7f517e2-016b-5c10-ac21-20c48339115f', 'data_vg': 'ceph-a7f517e2-016b-5c10-ac21-20c48339115f'}) 2026-03-25 03:02:19.815195 | orchestrator | changed: [testbed-node-5] => (item={'data': 
'osd-block-8ec576d5-4336-523a-896e-5358117b2269', 'data_vg': 'ceph-8ec576d5-4336-523a-896e-5358117b2269'}) 2026-03-25 03:02:19.815204 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-fa1f2bca-96f4-5f59-9dac-c3efdd146138', 'data_vg': 'ceph-fa1f2bca-96f4-5f59-9dac-c3efdd146138'}) 2026-03-25 03:02:19.815212 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-2eb637af-fcba-56ed-b416-856a8f376a6e', 'data_vg': 'ceph-2eb637af-fcba-56ed-b416-856a8f376a6e'}) 2026-03-25 03:02:19.815221 | orchestrator | 2026-03-25 03:02:19.815230 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2026-03-25 03:02:19.815239 | orchestrator | Wednesday 25 March 2026 03:02:14 +0000 (0:00:41.676) 0:07:53.705 ******* 2026-03-25 03:02:19.815247 | orchestrator | skipping: [testbed-node-3] 2026-03-25 03:02:19.815256 | orchestrator | skipping: [testbed-node-4] 2026-03-25 03:02:19.815264 | orchestrator | skipping: [testbed-node-5] 2026-03-25 03:02:19.815273 | orchestrator | 2026-03-25 03:02:19.815284 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2026-03-25 03:02:19.815298 | orchestrator | Wednesday 25 March 2026 03:02:14 +0000 (0:00:00.355) 0:07:54.060 ******* 2026-03-25 03:02:19.815312 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-25 03:02:19.815326 | orchestrator | 2026-03-25 03:02:19.815345 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2026-03-25 03:02:19.815365 | orchestrator | Wednesday 25 March 2026 03:02:15 +0000 (0:00:00.983) 0:07:55.044 ******* 2026-03-25 03:02:19.815379 | orchestrator | ok: [testbed-node-3] 2026-03-25 03:02:19.815392 | orchestrator | ok: [testbed-node-4] 2026-03-25 03:02:19.815405 | orchestrator | ok: [testbed-node-5] 2026-03-25 03:02:19.815418 | orchestrator | 2026-03-25 03:02:19.815432 | orchestrator 
| TASK [ceph-osd : Collect osd ids] ********************************************** 2026-03-25 03:02:19.815445 | orchestrator | Wednesday 25 March 2026 03:02:16 +0000 (0:00:00.713) 0:07:55.758 ******* 2026-03-25 03:02:19.815457 | orchestrator | ok: [testbed-node-3] 2026-03-25 03:02:19.815470 | orchestrator | ok: [testbed-node-4] 2026-03-25 03:02:19.815484 | orchestrator | ok: [testbed-node-5] 2026-03-25 03:02:19.815499 | orchestrator | 2026-03-25 03:02:19.815512 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************ 2026-03-25 03:02:19.815527 | orchestrator | Wednesday 25 March 2026 03:02:18 +0000 (0:00:02.592) 0:07:58.351 ******* 2026-03-25 03:02:19.815550 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-25 03:02:58.627391 | orchestrator | 2026-03-25 03:02:58.627520 | orchestrator | TASK [ceph-osd : Generate systemd unit file] *********************************** 2026-03-25 03:02:58.627540 | orchestrator | Wednesday 25 March 2026 03:02:19 +0000 (0:00:00.906) 0:07:59.257 ******* 2026-03-25 03:02:58.627555 | orchestrator | changed: [testbed-node-3] 2026-03-25 03:02:58.627569 | orchestrator | changed: [testbed-node-4] 2026-03-25 03:02:58.627581 | orchestrator | changed: [testbed-node-5] 2026-03-25 03:02:58.627594 | orchestrator | 2026-03-25 03:02:58.627607 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2026-03-25 03:02:58.627619 | orchestrator | Wednesday 25 March 2026 03:02:21 +0000 (0:00:01.252) 0:08:00.510 ******* 2026-03-25 03:02:58.627664 | orchestrator | changed: [testbed-node-3] 2026-03-25 03:02:58.627677 | orchestrator | changed: [testbed-node-4] 2026-03-25 03:02:58.627689 | orchestrator | changed: [testbed-node-5] 2026-03-25 03:02:58.627702 | orchestrator | 2026-03-25 03:02:58.627714 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] *************************************** 2026-03-25 
03:02:58.627726 | orchestrator | Wednesday 25 March 2026 03:02:22 +0000 (0:00:01.159) 0:08:01.670 ******* 2026-03-25 03:02:58.627754 | orchestrator | changed: [testbed-node-3] 2026-03-25 03:02:58.627794 | orchestrator | changed: [testbed-node-4] 2026-03-25 03:02:58.627807 | orchestrator | changed: [testbed-node-5] 2026-03-25 03:02:58.627820 | orchestrator | 2026-03-25 03:02:58.627832 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2026-03-25 03:02:58.627845 | orchestrator | Wednesday 25 March 2026 03:02:24 +0000 (0:00:02.028) 0:08:03.698 ******* 2026-03-25 03:02:58.627857 | orchestrator | skipping: [testbed-node-3] 2026-03-25 03:02:58.627867 | orchestrator | skipping: [testbed-node-4] 2026-03-25 03:02:58.627880 | orchestrator | skipping: [testbed-node-5] 2026-03-25 03:02:58.627891 | orchestrator | 2026-03-25 03:02:58.627905 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] *********************** 2026-03-25 03:02:58.627918 | orchestrator | Wednesday 25 March 2026 03:02:24 +0000 (0:00:00.385) 0:08:04.084 ******* 2026-03-25 03:02:58.627931 | orchestrator | skipping: [testbed-node-3] 2026-03-25 03:02:58.627944 | orchestrator | skipping: [testbed-node-4] 2026-03-25 03:02:58.627957 | orchestrator | skipping: [testbed-node-5] 2026-03-25 03:02:58.627969 | orchestrator | 2026-03-25 03:02:58.627982 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2026-03-25 03:02:58.627996 | orchestrator | Wednesday 25 March 2026 03:02:25 +0000 (0:00:00.404) 0:08:04.488 ******* 2026-03-25 03:02:58.628011 | orchestrator | ok: [testbed-node-3] => (item=5) 2026-03-25 03:02:58.628041 | orchestrator | ok: [testbed-node-4] => (item=4) 2026-03-25 03:02:58.628055 | orchestrator | ok: [testbed-node-5] => (item=3) 2026-03-25 03:02:58.628068 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-03-25 03:02:58.628081 | orchestrator | ok: [testbed-node-4] => (item=1) 2026-03-25 03:02:58.628095 | 
orchestrator | ok: [testbed-node-5] => (item=2) 2026-03-25 03:02:58.628108 | orchestrator | 2026-03-25 03:02:58.628121 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2026-03-25 03:02:58.628134 | orchestrator | Wednesday 25 March 2026 03:02:26 +0000 (0:00:01.036) 0:08:05.524 ******* 2026-03-25 03:02:58.628149 | orchestrator | changed: [testbed-node-3] => (item=5) 2026-03-25 03:02:58.628163 | orchestrator | changed: [testbed-node-4] => (item=4) 2026-03-25 03:02:58.628176 | orchestrator | changed: [testbed-node-5] => (item=3) 2026-03-25 03:02:58.628189 | orchestrator | changed: [testbed-node-3] => (item=0) 2026-03-25 03:02:58.628201 | orchestrator | changed: [testbed-node-4] => (item=1) 2026-03-25 03:02:58.628213 | orchestrator | changed: [testbed-node-5] => (item=2) 2026-03-25 03:02:58.628224 | orchestrator | 2026-03-25 03:02:58.628236 | orchestrator | TASK [ceph-osd : Systemd start osd] ******************************************** 2026-03-25 03:02:58.628247 | orchestrator | Wednesday 25 March 2026 03:02:28 +0000 (0:00:02.536) 0:08:08.060 ******* 2026-03-25 03:02:58.628260 | orchestrator | changed: [testbed-node-3] => (item=5) 2026-03-25 03:02:58.628271 | orchestrator | changed: [testbed-node-5] => (item=3) 2026-03-25 03:02:58.628284 | orchestrator | changed: [testbed-node-4] => (item=4) 2026-03-25 03:02:58.628297 | orchestrator | changed: [testbed-node-3] => (item=0) 2026-03-25 03:02:58.628309 | orchestrator | changed: [testbed-node-5] => (item=2) 2026-03-25 03:02:58.628321 | orchestrator | changed: [testbed-node-4] => (item=1) 2026-03-25 03:02:58.628333 | orchestrator | 2026-03-25 03:02:58.628344 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2026-03-25 03:02:58.628356 | orchestrator | Wednesday 25 March 2026 03:02:32 +0000 (0:00:03.443) 0:08:11.504 ******* 2026-03-25 03:02:58.628368 | orchestrator | skipping: [testbed-node-3] 2026-03-25 03:02:58.628379 | 
orchestrator | skipping: [testbed-node-4] 2026-03-25 03:02:58.628405 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-03-25 03:02:58.628418 | orchestrator | 2026-03-25 03:02:58.628429 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************ 2026-03-25 03:02:58.628442 | orchestrator | Wednesday 25 March 2026 03:02:35 +0000 (0:00:03.006) 0:08:14.511 ******* 2026-03-25 03:02:58.628454 | orchestrator | skipping: [testbed-node-3] 2026-03-25 03:02:58.628466 | orchestrator | skipping: [testbed-node-4] 2026-03-25 03:02:58.628477 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left). 2026-03-25 03:02:58.628491 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-03-25 03:02:58.628503 | orchestrator | 2026-03-25 03:02:58.628515 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2026-03-25 03:02:58.628528 | orchestrator | Wednesday 25 March 2026 03:02:47 +0000 (0:00:12.410) 0:08:26.921 ******* 2026-03-25 03:02:58.628539 | orchestrator | skipping: [testbed-node-3] 2026-03-25 03:02:58.628551 | orchestrator | skipping: [testbed-node-4] 2026-03-25 03:02:58.628563 | orchestrator | skipping: [testbed-node-5] 2026-03-25 03:02:58.628575 | orchestrator | 2026-03-25 03:02:58.628587 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-03-25 03:02:58.628599 | orchestrator | Wednesday 25 March 2026 03:02:48 +0000 (0:00:01.319) 0:08:28.241 ******* 2026-03-25 03:02:58.628611 | orchestrator | skipping: [testbed-node-3] 2026-03-25 03:02:58.628623 | orchestrator | skipping: [testbed-node-4] 2026-03-25 03:02:58.628635 | orchestrator | skipping: [testbed-node-5] 2026-03-25 03:02:58.628647 | orchestrator | 2026-03-25 03:02:58.628659 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2026-03-25 
03:02:58.628694 | orchestrator | Wednesday 25 March 2026 03:02:49 +0000 (0:00:00.374) 0:08:28.615 ******* 2026-03-25 03:02:58.628707 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-25 03:02:58.628719 | orchestrator | 2026-03-25 03:02:58.628732 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2026-03-25 03:02:58.628743 | orchestrator | Wednesday 25 March 2026 03:02:50 +0000 (0:00:00.970) 0:08:29.585 ******* 2026-03-25 03:02:58.628755 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-25 03:02:58.628841 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-25 03:02:58.628854 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-25 03:02:58.628866 | orchestrator | skipping: [testbed-node-3] 2026-03-25 03:02:58.628878 | orchestrator | 2026-03-25 03:02:58.628892 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2026-03-25 03:02:58.628903 | orchestrator | Wednesday 25 March 2026 03:02:50 +0000 (0:00:00.492) 0:08:30.078 ******* 2026-03-25 03:02:58.628916 | orchestrator | skipping: [testbed-node-3] 2026-03-25 03:02:58.628928 | orchestrator | skipping: [testbed-node-4] 2026-03-25 03:02:58.628939 | orchestrator | skipping: [testbed-node-5] 2026-03-25 03:02:58.628951 | orchestrator | 2026-03-25 03:02:58.628962 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2026-03-25 03:02:58.628974 | orchestrator | Wednesday 25 March 2026 03:02:51 +0000 (0:00:00.393) 0:08:30.472 ******* 2026-03-25 03:02:58.628986 | orchestrator | skipping: [testbed-node-3] 2026-03-25 03:02:58.628997 | orchestrator | 2026-03-25 03:02:58.629009 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2026-03-25 03:02:58.629021 | orchestrator | Wednesday 25 March 
2026 03:02:51 +0000 (0:00:00.258) 0:08:30.731 *******
2026-03-25 03:02:58.629033 | orchestrator | skipping: [testbed-node-3]
2026-03-25 03:02:58.629044 | orchestrator | skipping: [testbed-node-4]
2026-03-25 03:02:58.629056 | orchestrator | skipping: [testbed-node-5]
2026-03-25 03:02:58.629069 | orchestrator |
2026-03-25 03:02:58.629081 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] *********************************
2026-03-25 03:02:58.629093 | orchestrator | Wednesday 25 March 2026 03:02:51 +0000 (0:00:00.640) 0:08:31.371 *******
2026-03-25 03:02:58.629116 | orchestrator | skipping: [testbed-node-3]
2026-03-25 03:02:58.629127 | orchestrator |
2026-03-25 03:02:58.629148 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
2026-03-25 03:02:58.629159 | orchestrator | Wednesday 25 March 2026 03:02:52 +0000 (0:00:00.253) 0:08:31.625 *******
2026-03-25 03:02:58.629171 | orchestrator | skipping: [testbed-node-3]
2026-03-25 03:02:58.629183 | orchestrator |
2026-03-25 03:02:58.629194 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
2026-03-25 03:02:58.629206 | orchestrator | Wednesday 25 March 2026 03:02:52 +0000 (0:00:00.250) 0:08:31.875 *******
2026-03-25 03:02:58.629218 | orchestrator | skipping: [testbed-node-3]
2026-03-25 03:02:58.629229 | orchestrator |
2026-03-25 03:02:58.629241 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
2026-03-25 03:02:58.629252 | orchestrator | Wednesday 25 March 2026 03:02:52 +0000 (0:00:00.132) 0:08:32.007 *******
2026-03-25 03:02:58.629265 | orchestrator | skipping: [testbed-node-3]
2026-03-25 03:02:58.629277 | orchestrator |
2026-03-25 03:02:58.629288 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
2026-03-25 03:02:58.629300 | orchestrator | Wednesday 25 March 2026 03:02:52 +0000 (0:00:00.265) 0:08:32.273 *******
2026-03-25 03:02:58.629312 | orchestrator | skipping: [testbed-node-3]
2026-03-25 03:02:58.629325 | orchestrator |
2026-03-25 03:02:58.629337 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
2026-03-25 03:02:58.629350 | orchestrator | Wednesday 25 March 2026 03:02:53 +0000 (0:00:00.475) 0:08:32.536 *******
2026-03-25 03:02:58.629361 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-25 03:02:58.629373 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-25 03:02:58.629385 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-25 03:02:58.629397 | orchestrator | skipping: [testbed-node-3]
2026-03-25 03:02:58.629409 | orchestrator |
2026-03-25 03:02:58.629421 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
2026-03-25 03:02:58.629432 | orchestrator | Wednesday 25 March 2026 03:02:53 +0000 (0:00:00.404) 0:08:33.012 *******
2026-03-25 03:02:58.629445 | orchestrator | skipping: [testbed-node-3]
2026-03-25 03:02:58.629456 | orchestrator | skipping: [testbed-node-4]
2026-03-25 03:02:58.629467 | orchestrator | skipping: [testbed-node-5]
2026-03-25 03:02:58.629479 | orchestrator |
2026-03-25 03:02:58.629490 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
2026-03-25 03:02:58.629502 | orchestrator | Wednesday 25 March 2026 03:02:53 +0000 (0:00:00.263) 0:08:33.416 *******
2026-03-25 03:02:58.629513 | orchestrator | skipping: [testbed-node-3]
2026-03-25 03:02:58.629525 | orchestrator |
2026-03-25 03:02:58.629537 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
2026-03-25 03:02:58.629549 | orchestrator | Wednesday 25 March 2026 03:02:54 +0000 (0:00:00.263) 0:08:33.680 *******
2026-03-25 03:02:58.629561 | orchestrator | skipping: [testbed-node-3]
2026-03-25 03:02:58.629574 | orchestrator |
2026-03-25 03:02:58.629587 | orchestrator | PLAY [Apply role ceph-crash] ***************************************************
2026-03-25 03:02:58.629599 | orchestrator |
2026-03-25 03:02:58.629612 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-03-25 03:02:58.629624 | orchestrator | Wednesday 25 March 2026 03:02:55 +0000 (0:00:01.452) 0:08:35.132 *******
2026-03-25 03:02:58.629637 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-25 03:02:58.629651 | orchestrator |
2026-03-25 03:02:58.629664 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-03-25 03:02:58.629676 | orchestrator | Wednesday 25 March 2026 03:02:57 +0000 (0:00:01.468) 0:08:36.601 *******
2026-03-25 03:02:58.629702 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-25 03:03:27.094310 | orchestrator |
2026-03-25 03:03:27.094397 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-03-25 03:03:27.094410 | orchestrator | Wednesday 25 March 2026 03:02:58 +0000 (0:00:01.467) 0:08:38.068 *******
2026-03-25 03:03:27.094416 | orchestrator | skipping: [testbed-node-3]
2026-03-25 03:03:27.094424 | orchestrator | skipping: [testbed-node-4]
2026-03-25 03:03:27.094430 | orchestrator | skipping: [testbed-node-5]
2026-03-25 03:03:27.094436 | orchestrator | ok: [testbed-node-0]
2026-03-25 03:03:27.094443 | orchestrator | ok: [testbed-node-1]
2026-03-25 03:03:27.094450 | orchestrator | ok: [testbed-node-2]
2026-03-25 03:03:27.094456 | orchestrator |
2026-03-25 03:03:27.094463 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-03-25 03:03:27.094469 | orchestrator | Wednesday 25 March 2026 03:03:00 +0000 (0:00:01.506) 0:08:39.575 *******
2026-03-25 03:03:27.094476 | orchestrator | ok: [testbed-node-3]
2026-03-25 03:03:27.094481 | orchestrator | skipping: [testbed-node-0]
2026-03-25 03:03:27.094487 | orchestrator | ok: [testbed-node-4]
2026-03-25 03:03:27.094493 | orchestrator | skipping: [testbed-node-1]
2026-03-25 03:03:27.094499 | orchestrator | skipping: [testbed-node-2]
2026-03-25 03:03:27.094505 | orchestrator | ok: [testbed-node-5]
2026-03-25 03:03:27.094511 | orchestrator |
2026-03-25 03:03:27.094517 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-03-25 03:03:27.094524 | orchestrator | Wednesday 25 March 2026 03:03:00 +0000 (0:00:00.774) 0:08:40.349 *******
2026-03-25 03:03:27.094531 | orchestrator | skipping: [testbed-node-0]
2026-03-25 03:03:27.094536 | orchestrator | ok: [testbed-node-3]
2026-03-25 03:03:27.094542 | orchestrator | skipping: [testbed-node-1]
2026-03-25 03:03:27.094548 | orchestrator | ok: [testbed-node-4]
2026-03-25 03:03:27.094554 | orchestrator | ok: [testbed-node-5]
2026-03-25 03:03:27.094560 | orchestrator | skipping: [testbed-node-2]
2026-03-25 03:03:27.094567 | orchestrator |
2026-03-25 03:03:27.094573 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-03-25 03:03:27.094580 | orchestrator | Wednesday 25 March 2026 03:03:01 +0000 (0:00:01.030) 0:08:41.379 *******
2026-03-25 03:03:27.094586 | orchestrator | ok: [testbed-node-3]
2026-03-25 03:03:27.094593 | orchestrator | skipping: [testbed-node-0]
2026-03-25 03:03:27.094599 | orchestrator | ok: [testbed-node-4]
2026-03-25 03:03:27.094605 | orchestrator | skipping: [testbed-node-1]
2026-03-25 03:03:27.094611 | orchestrator | ok: [testbed-node-5]
2026-03-25 03:03:27.094616 | orchestrator | skipping: [testbed-node-2]
2026-03-25 03:03:27.094623 | orchestrator |
2026-03-25 03:03:27.094645 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-03-25 03:03:27.094652 | orchestrator | Wednesday 25 March 2026 03:03:02 +0000 (0:00:00.790) 0:08:42.170 *******
2026-03-25 03:03:27.094659 | orchestrator | skipping: [testbed-node-3]
2026-03-25 03:03:27.094665 | orchestrator | skipping: [testbed-node-4]
2026-03-25 03:03:27.094672 | orchestrator | skipping: [testbed-node-5]
2026-03-25 03:03:27.094678 | orchestrator | ok: [testbed-node-0]
2026-03-25 03:03:27.094684 | orchestrator | ok: [testbed-node-1]
2026-03-25 03:03:27.094690 | orchestrator | ok: [testbed-node-2]
2026-03-25 03:03:27.094696 | orchestrator |
2026-03-25 03:03:27.094700 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-03-25 03:03:27.094704 | orchestrator | Wednesday 25 March 2026 03:03:04 +0000 (0:00:01.382) 0:08:43.553 *******
2026-03-25 03:03:27.094708 | orchestrator | skipping: [testbed-node-3]
2026-03-25 03:03:27.094732 | orchestrator | skipping: [testbed-node-4]
2026-03-25 03:03:27.094735 | orchestrator | skipping: [testbed-node-5]
2026-03-25 03:03:27.094739 | orchestrator | skipping: [testbed-node-0]
2026-03-25 03:03:27.094743 | orchestrator | skipping: [testbed-node-1]
2026-03-25 03:03:27.094748 | orchestrator | skipping: [testbed-node-2]
2026-03-25 03:03:27.094753 | orchestrator |
2026-03-25 03:03:27.094760 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-03-25 03:03:27.094787 | orchestrator | Wednesday 25 March 2026 03:03:04 +0000 (0:00:00.747) 0:08:44.301 *******
2026-03-25 03:03:27.094795 | orchestrator | skipping: [testbed-node-3]
2026-03-25 03:03:27.094825 | orchestrator | skipping: [testbed-node-4]
2026-03-25 03:03:27.094831 | orchestrator | skipping: [testbed-node-5]
2026-03-25 03:03:27.094837 | orchestrator | skipping: [testbed-node-0]
2026-03-25 03:03:27.094842 | orchestrator | skipping: [testbed-node-1]
2026-03-25 03:03:27.094848 | orchestrator | skipping: [testbed-node-2]
2026-03-25 03:03:27.094853 | orchestrator |
2026-03-25 03:03:27.094859 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-03-25 03:03:27.094865 | orchestrator | Wednesday 25 March 2026 03:03:05 +0000 (0:00:00.982) 0:08:45.283 *******
2026-03-25 03:03:27.094872 | orchestrator | ok: [testbed-node-3]
2026-03-25 03:03:27.094878 | orchestrator | ok: [testbed-node-4]
2026-03-25 03:03:27.094893 | orchestrator | ok: [testbed-node-5]
2026-03-25 03:03:27.094899 | orchestrator | ok: [testbed-node-0]
2026-03-25 03:03:27.094905 | orchestrator | ok: [testbed-node-1]
2026-03-25 03:03:27.094912 | orchestrator | ok: [testbed-node-2]
2026-03-25 03:03:27.094918 | orchestrator |
2026-03-25 03:03:27.094925 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-03-25 03:03:27.094939 | orchestrator | Wednesday 25 March 2026 03:03:06 +0000 (0:00:01.169) 0:08:46.453 *******
2026-03-25 03:03:27.094945 | orchestrator | ok: [testbed-node-3]
2026-03-25 03:03:27.094950 | orchestrator | ok: [testbed-node-4]
2026-03-25 03:03:27.094954 | orchestrator | ok: [testbed-node-5]
2026-03-25 03:03:27.094959 | orchestrator | ok: [testbed-node-0]
2026-03-25 03:03:27.094963 | orchestrator | ok: [testbed-node-1]
2026-03-25 03:03:27.094968 | orchestrator | ok: [testbed-node-2]
2026-03-25 03:03:27.094974 | orchestrator |
2026-03-25 03:03:27.094980 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-03-25 03:03:27.094987 | orchestrator | Wednesday 25 March 2026 03:03:08 +0000 (0:00:01.567) 0:08:48.020 *******
2026-03-25 03:03:27.094993 | orchestrator | skipping: [testbed-node-3]
2026-03-25 03:03:27.095000 | orchestrator | skipping: [testbed-node-4]
2026-03-25 03:03:27.095006 | orchestrator | skipping: [testbed-node-5]
2026-03-25 03:03:27.095012 | orchestrator | skipping: [testbed-node-0]
2026-03-25 03:03:27.095018 | orchestrator | skipping: [testbed-node-1]
2026-03-25 03:03:27.095024 | orchestrator | skipping: [testbed-node-2]
2026-03-25 03:03:27.095030 | orchestrator |
2026-03-25 03:03:27.095036 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-03-25 03:03:27.095042 | orchestrator | Wednesday 25 March 2026 03:03:09 +0000 (0:00:00.707) 0:08:48.728 *******
2026-03-25 03:03:27.095048 | orchestrator | skipping: [testbed-node-3]
2026-03-25 03:03:27.095055 | orchestrator | skipping: [testbed-node-4]
2026-03-25 03:03:27.095062 | orchestrator | skipping: [testbed-node-5]
2026-03-25 03:03:27.095068 | orchestrator | ok: [testbed-node-0]
2026-03-25 03:03:27.095075 | orchestrator | ok: [testbed-node-1]
2026-03-25 03:03:27.095100 | orchestrator | ok: [testbed-node-2]
2026-03-25 03:03:27.095107 | orchestrator |
2026-03-25 03:03:27.095112 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-03-25 03:03:27.095116 | orchestrator | Wednesday 25 March 2026 03:03:10 +0000 (0:00:00.987) 0:08:49.716 *******
2026-03-25 03:03:27.095120 | orchestrator | ok: [testbed-node-3]
2026-03-25 03:03:27.095125 | orchestrator | ok: [testbed-node-4]
2026-03-25 03:03:27.095129 | orchestrator | ok: [testbed-node-5]
2026-03-25 03:03:27.095134 | orchestrator | skipping: [testbed-node-0]
2026-03-25 03:03:27.095138 | orchestrator | skipping: [testbed-node-1]
2026-03-25 03:03:27.095143 | orchestrator | skipping: [testbed-node-2]
2026-03-25 03:03:27.095147 | orchestrator |
2026-03-25 03:03:27.095152 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-03-25 03:03:27.095158 | orchestrator | Wednesday 25 March 2026 03:03:10 +0000 (0:00:00.686) 0:08:50.402 *******
2026-03-25 03:03:27.095164 | orchestrator | ok: [testbed-node-3]
2026-03-25 03:03:27.095170 | orchestrator | ok: [testbed-node-4]
2026-03-25 03:03:27.095180 | orchestrator | ok: [testbed-node-5]
2026-03-25 03:03:27.095187 | orchestrator | skipping: [testbed-node-0]
2026-03-25 03:03:27.095193 | orchestrator | skipping: [testbed-node-1]
2026-03-25 03:03:27.095210 | orchestrator | skipping: [testbed-node-2]
2026-03-25 03:03:27.095216 | orchestrator |
2026-03-25 03:03:27.095222 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-03-25 03:03:27.095228 | orchestrator | Wednesday 25 March 2026 03:03:11 +0000 (0:00:00.980) 0:08:51.383 *******
2026-03-25 03:03:27.095234 | orchestrator | ok: [testbed-node-3]
2026-03-25 03:03:27.095240 | orchestrator | ok: [testbed-node-4]
2026-03-25 03:03:27.095246 | orchestrator | ok: [testbed-node-5]
2026-03-25 03:03:27.095253 | orchestrator | skipping: [testbed-node-0]
2026-03-25 03:03:27.095259 | orchestrator | skipping: [testbed-node-1]
2026-03-25 03:03:27.095264 | orchestrator | skipping: [testbed-node-2]
2026-03-25 03:03:27.095269 | orchestrator |
2026-03-25 03:03:27.095275 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-03-25 03:03:27.095281 | orchestrator | Wednesday 25 March 2026 03:03:12 +0000 (0:00:00.727) 0:08:52.111 *******
2026-03-25 03:03:27.095287 | orchestrator | skipping: [testbed-node-3]
2026-03-25 03:03:27.095292 | orchestrator | skipping: [testbed-node-4]
2026-03-25 03:03:27.095297 | orchestrator | skipping: [testbed-node-5]
2026-03-25 03:03:27.095302 | orchestrator | skipping: [testbed-node-0]
2026-03-25 03:03:27.095307 | orchestrator | skipping: [testbed-node-1]
2026-03-25 03:03:27.095314 | orchestrator | skipping: [testbed-node-2]
2026-03-25 03:03:27.095319 | orchestrator |
2026-03-25 03:03:27.095325 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-03-25 03:03:27.095331 | orchestrator | Wednesday 25 March 2026 03:03:13 +0000 (0:00:00.954) 0:08:53.065 *******
2026-03-25 03:03:27.095336 | orchestrator | skipping: [testbed-node-3]
2026-03-25 03:03:27.095342 | orchestrator | skipping: [testbed-node-4]
2026-03-25 03:03:27.095348 | orchestrator | skipping: [testbed-node-5]
2026-03-25 03:03:27.095354 | orchestrator | skipping: [testbed-node-0]
2026-03-25 03:03:27.095360 | orchestrator | skipping: [testbed-node-1]
2026-03-25 03:03:27.095366 | orchestrator | skipping: [testbed-node-2]
2026-03-25 03:03:27.095372 | orchestrator |
2026-03-25 03:03:27.095378 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-03-25 03:03:27.095383 | orchestrator | Wednesday 25 March 2026 03:03:14 +0000 (0:00:00.706) 0:08:53.772 *******
2026-03-25 03:03:27.095389 | orchestrator | skipping: [testbed-node-3]
2026-03-25 03:03:27.095395 | orchestrator | skipping: [testbed-node-4]
2026-03-25 03:03:27.095401 | orchestrator | skipping: [testbed-node-5]
2026-03-25 03:03:27.095407 | orchestrator | ok: [testbed-node-0]
2026-03-25 03:03:27.095413 | orchestrator | ok: [testbed-node-1]
2026-03-25 03:03:27.095418 | orchestrator | ok: [testbed-node-2]
2026-03-25 03:03:27.095425 | orchestrator |
2026-03-25 03:03:27.095431 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-03-25 03:03:27.095438 | orchestrator | Wednesday 25 March 2026 03:03:15 +0000 (0:00:01.014) 0:08:54.786 *******
2026-03-25 03:03:27.095443 | orchestrator | ok: [testbed-node-3]
2026-03-25 03:03:27.095449 | orchestrator | ok: [testbed-node-4]
2026-03-25 03:03:27.095492 | orchestrator | ok: [testbed-node-5]
2026-03-25 03:03:27.095499 | orchestrator | ok: [testbed-node-0]
2026-03-25 03:03:27.095504 | orchestrator | ok: [testbed-node-1]
2026-03-25 03:03:27.095510 | orchestrator | ok: [testbed-node-2]
2026-03-25 03:03:27.095515 | orchestrator |
2026-03-25 03:03:27.095522 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-03-25 03:03:27.095527 | orchestrator | Wednesday 25 March 2026 03:03:16 +0000 (0:00:00.734) 0:08:55.520 *******
2026-03-25 03:03:27.095533 | orchestrator | ok: [testbed-node-3]
2026-03-25 03:03:27.095539 | orchestrator | ok: [testbed-node-4]
2026-03-25 03:03:27.095545 | orchestrator | ok: [testbed-node-5]
2026-03-25 03:03:27.095551 | orchestrator | ok: [testbed-node-0]
2026-03-25 03:03:27.095558 | orchestrator | ok: [testbed-node-1]
2026-03-25 03:03:27.095563 | orchestrator | ok: [testbed-node-2]
2026-03-25 03:03:27.095569 | orchestrator |
2026-03-25 03:03:27.095575 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ********************************
2026-03-25 03:03:27.095580 | orchestrator | Wednesday 25 March 2026 03:03:17 +0000 (0:00:01.537) 0:08:57.058 *******
2026-03-25 03:03:27.095594 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-03-25 03:03:27.095601 | orchestrator |
2026-03-25 03:03:27.095607 | orchestrator | TASK [ceph-crash : Get keys from monitors] *************************************
2026-03-25 03:03:27.095613 | orchestrator | Wednesday 25 March 2026 03:03:21 +0000 (0:00:03.899) 0:09:00.957 *******
2026-03-25 03:03:27.095618 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-03-25 03:03:27.095624 | orchestrator |
2026-03-25 03:03:27.095630 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] *********************************
2026-03-25 03:03:27.095636 | orchestrator | Wednesday 25 March 2026 03:03:23 +0000 (0:00:02.397) 0:09:03.355 *******
2026-03-25 03:03:27.095642 | orchestrator | changed: [testbed-node-3]
2026-03-25 03:03:27.095650 | orchestrator | changed: [testbed-node-4]
2026-03-25 03:03:27.095656 | orchestrator | changed: [testbed-node-5]
2026-03-25 03:03:27.095662 | orchestrator | ok: [testbed-node-0]
2026-03-25 03:03:27.095668 | orchestrator | changed: [testbed-node-1]
2026-03-25 03:03:27.095674 | orchestrator | changed: [testbed-node-2]
2026-03-25 03:03:27.095680 | orchestrator |
2026-03-25 03:03:27.095685 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] **************************
2026-03-25 03:03:27.095691 | orchestrator | Wednesday 25 March 2026 03:03:25 +0000 (0:00:01.891) 0:09:05.246 *******
2026-03-25 03:03:27.095697 | orchestrator | changed: [testbed-node-3]
2026-03-25 03:03:27.095703 | orchestrator | changed: [testbed-node-4]
2026-03-25 03:03:27.095718 | orchestrator | changed: [testbed-node-5]
2026-03-25 03:03:51.736472 | orchestrator | changed: [testbed-node-0]
2026-03-25 03:03:51.736585 | orchestrator | changed: [testbed-node-1]
2026-03-25 03:03:51.736601 | orchestrator | changed: [testbed-node-2]
2026-03-25 03:03:51.736611 | orchestrator |
2026-03-25 03:03:51.736621 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] **********************************
2026-03-25 03:03:51.736633 | orchestrator | Wednesday 25 March 2026 03:03:27 +0000 (0:00:01.290) 0:09:06.536 *******
2026-03-25 03:03:51.736643 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-25 03:03:51.736653 | orchestrator |
2026-03-25 03:03:51.736662 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ********
2026-03-25 03:03:51.736671 | orchestrator | Wednesday 25 March 2026 03:03:28 +0000 (0:00:01.485) 0:09:08.022 *******
2026-03-25 03:03:51.736680 | orchestrator | changed: [testbed-node-3]
2026-03-25 03:03:51.736689 | orchestrator | changed: [testbed-node-4]
2026-03-25 03:03:51.736697 | orchestrator | changed: [testbed-node-5]
2026-03-25 03:03:51.736706 | orchestrator | changed: [testbed-node-0]
2026-03-25 03:03:51.736714 | orchestrator | changed: [testbed-node-1]
2026-03-25 03:03:51.736723 | orchestrator | changed: [testbed-node-2]
2026-03-25 03:03:51.736731 | orchestrator |
2026-03-25 03:03:51.736740 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] *******************************
2026-03-25 03:03:51.736748 | orchestrator | Wednesday 25 March 2026 03:03:30 +0000 (0:00:01.623) 0:09:09.646 *******
2026-03-25 03:03:51.736757 | orchestrator | changed: [testbed-node-3]
2026-03-25 03:03:51.736765 | orchestrator | changed: [testbed-node-4]
2026-03-25 03:03:51.736824 | orchestrator | changed: [testbed-node-5]
2026-03-25 03:03:51.736836 | orchestrator | changed: [testbed-node-0]
2026-03-25 03:03:51.736844 | orchestrator | changed: [testbed-node-1]
2026-03-25 03:03:51.736853 | orchestrator | changed: [testbed-node-2]
2026-03-25 03:03:51.736861 | orchestrator |
2026-03-25 03:03:51.736899 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] ****************************
2026-03-25 03:03:51.736953 | orchestrator | Wednesday 25 March 2026 03:03:33 +0000 (0:00:03.796) 0:09:13.442 *******
2026-03-25 03:03:51.736981 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-25 03:03:51.736990 | orchestrator |
2026-03-25 03:03:51.736999 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ******
2026-03-25 03:03:51.737031 | orchestrator | Wednesday 25 March 2026 03:03:35 +0000 (0:00:01.472) 0:09:14.914 *******
2026-03-25 03:03:51.737040 | orchestrator | ok: [testbed-node-3]
2026-03-25 03:03:51.737049 | orchestrator | ok: [testbed-node-4]
2026-03-25 03:03:51.737058 | orchestrator | ok: [testbed-node-5]
2026-03-25 03:03:51.737066 | orchestrator | ok: [testbed-node-0]
2026-03-25 03:03:51.737075 | orchestrator | ok: [testbed-node-1]
2026-03-25 03:03:51.737083 | orchestrator | ok: [testbed-node-2]
2026-03-25 03:03:51.737092 | orchestrator |
2026-03-25 03:03:51.737100 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] ****************
2026-03-25 03:03:51.737109 | orchestrator | Wednesday 25 March 2026 03:03:36 +0000 (0:00:00.721) 0:09:15.635 *******
2026-03-25 03:03:51.737117 | orchestrator | changed: [testbed-node-3]
2026-03-25 03:03:51.737125 | orchestrator | changed: [testbed-node-4]
2026-03-25 03:03:51.737134 | orchestrator | changed: [testbed-node-5]
2026-03-25 03:03:51.737142 | orchestrator | changed: [testbed-node-0]
2026-03-25 03:03:51.737150 | orchestrator | changed: [testbed-node-2]
2026-03-25 03:03:51.737159 | orchestrator | changed: [testbed-node-1]
2026-03-25 03:03:51.737168 | orchestrator |
2026-03-25 03:03:51.737176 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] *******
2026-03-25 03:03:51.737185 | orchestrator | Wednesday 25 March 2026 03:03:38 +0000 (0:00:02.769) 0:09:18.404 *******
2026-03-25 03:03:51.737193 | orchestrator | ok: [testbed-node-3]
2026-03-25 03:03:51.737201 | orchestrator | ok: [testbed-node-4]
2026-03-25 03:03:51.737210 | orchestrator | ok: [testbed-node-5]
2026-03-25 03:03:51.737219 | orchestrator | ok: [testbed-node-0]
2026-03-25 03:03:51.737227 | orchestrator | ok: [testbed-node-1]
2026-03-25 03:03:51.737236 | orchestrator | ok: [testbed-node-2]
2026-03-25 03:03:51.737245 | orchestrator |
2026-03-25 03:03:51.737254 | orchestrator | PLAY [Apply role ceph-mds] *****************************************************
2026-03-25 03:03:51.737263 | orchestrator |
2026-03-25 03:03:51.737273 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-03-25 03:03:51.737282 | orchestrator | Wednesday 25 March 2026 03:03:39 +0000 (0:00:01.006) 0:09:19.410 *******
2026-03-25 03:03:51.737291 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-25 03:03:51.737301 | orchestrator |
2026-03-25 03:03:51.737309 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-03-25 03:03:51.737317 | orchestrator | Wednesday 25 March 2026 03:03:40 +0000 (0:00:00.914) 0:09:20.325 *******
2026-03-25 03:03:51.737326 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-25 03:03:51.737336 | orchestrator |
2026-03-25 03:03:51.737345 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-03-25 03:03:51.737355 | orchestrator | Wednesday 25 March 2026 03:03:41 +0000 (0:00:00.621) 0:09:20.946 *******
2026-03-25 03:03:51.737364 | orchestrator | skipping: [testbed-node-3]
2026-03-25 03:03:51.737373 | orchestrator | skipping: [testbed-node-4]
2026-03-25 03:03:51.737383 | orchestrator | skipping: [testbed-node-5]
2026-03-25 03:03:51.737392 | orchestrator |
2026-03-25 03:03:51.737401 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-03-25 03:03:51.737412 | orchestrator | Wednesday 25 March 2026 03:03:42 +0000 (0:00:00.670) 0:09:21.617 *******
2026-03-25 03:03:51.737418 | orchestrator | ok: [testbed-node-3]
2026-03-25 03:03:51.737423 | orchestrator | ok: [testbed-node-4]
2026-03-25 03:03:51.737429 | orchestrator | ok: [testbed-node-5]
2026-03-25 03:03:51.737434 | orchestrator |
2026-03-25 03:03:51.737439 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-03-25 03:03:51.737445 | orchestrator | Wednesday 25 March 2026 03:03:42 +0000 (0:00:00.742) 0:09:22.360 *******
2026-03-25 03:03:51.737450 | orchestrator | ok: [testbed-node-3]
2026-03-25 03:03:51.737456 | orchestrator | ok: [testbed-node-4]
2026-03-25 03:03:51.737477 | orchestrator | ok: [testbed-node-5]
2026-03-25 03:03:51.737483 | orchestrator |
2026-03-25 03:03:51.737491 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-03-25 03:03:51.737513 | orchestrator | Wednesday 25 March 2026 03:03:43 +0000 (0:00:00.755) 0:09:23.115 *******
2026-03-25 03:03:51.737522 | orchestrator | ok: [testbed-node-3]
2026-03-25 03:03:51.737531 | orchestrator | ok: [testbed-node-4]
2026-03-25 03:03:51.737540 | orchestrator | ok: [testbed-node-5]
2026-03-25 03:03:51.737548 | orchestrator |
2026-03-25 03:03:51.737556 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-03-25 03:03:51.737565 | orchestrator | Wednesday 25 March 2026 03:03:44 +0000 (0:00:01.027) 0:09:24.143 *******
2026-03-25 03:03:51.737574 | orchestrator | skipping: [testbed-node-3]
2026-03-25 03:03:51.737582 | orchestrator | skipping: [testbed-node-4]
2026-03-25 03:03:51.737590 | orchestrator | skipping: [testbed-node-5]
2026-03-25 03:03:51.737599 | orchestrator |
2026-03-25 03:03:51.737607 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-03-25 03:03:51.737615 | orchestrator | Wednesday 25 March 2026 03:03:45 +0000 (0:00:00.403) 0:09:24.547 *******
2026-03-25 03:03:51.737623 | orchestrator | skipping: [testbed-node-3]
2026-03-25 03:03:51.737631 | orchestrator | skipping: [testbed-node-4]
2026-03-25 03:03:51.737639 | orchestrator | skipping: [testbed-node-5]
2026-03-25 03:03:51.737646 | orchestrator |
2026-03-25 03:03:51.737654 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-03-25 03:03:51.737663 | orchestrator | Wednesday 25 March 2026 03:03:45 +0000 (0:00:00.368) 0:09:24.915 *******
2026-03-25 03:03:51.737671 | orchestrator | skipping: [testbed-node-3]
2026-03-25 03:03:51.737679 | orchestrator | skipping: [testbed-node-4]
2026-03-25 03:03:51.737687 | orchestrator | skipping: [testbed-node-5]
2026-03-25 03:03:51.737694 | orchestrator |
2026-03-25 03:03:51.737703 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-03-25 03:03:51.737711 | orchestrator | Wednesday 25 March 2026 03:03:45 +0000 (0:00:00.360) 0:09:25.276 *******
2026-03-25 03:03:51.737719 | orchestrator | ok: [testbed-node-3]
2026-03-25 03:03:51.737727 | orchestrator | ok: [testbed-node-4]
2026-03-25 03:03:51.737735 | orchestrator | ok: [testbed-node-5]
2026-03-25 03:03:51.737743 | orchestrator |
2026-03-25 03:03:51.737751 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-03-25 03:03:51.737767 | orchestrator | Wednesday 25 March 2026 03:03:46 +0000 (0:00:01.087) 0:09:26.363 *******
2026-03-25 03:03:51.737820 | orchestrator | ok: [testbed-node-3]
2026-03-25 03:03:51.737829 | orchestrator | ok: [testbed-node-4]
2026-03-25 03:03:51.737837 | orchestrator | ok: [testbed-node-5]
2026-03-25 03:03:51.737845 | orchestrator |
2026-03-25 03:03:51.737853 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-03-25 03:03:51.737859 | orchestrator | Wednesday 25 March 2026 03:03:47 +0000 (0:00:00.755) 0:09:27.119 *******
2026-03-25 03:03:51.737863 | orchestrator | skipping: [testbed-node-3]
2026-03-25 03:03:51.737880 | orchestrator | skipping: [testbed-node-4]
2026-03-25 03:03:51.737885 | orchestrator | skipping: [testbed-node-5]
2026-03-25 03:03:51.737890 | orchestrator |
2026-03-25 03:03:51.737895 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-03-25 03:03:51.737900 | orchestrator | Wednesday 25 March 2026 03:03:48 +0000 (0:00:00.358) 0:09:27.478 *******
2026-03-25 03:03:51.737905 | orchestrator | skipping: [testbed-node-3]
2026-03-25 03:03:51.737951 | orchestrator | skipping: [testbed-node-4]
2026-03-25 03:03:51.737957 | orchestrator | skipping: [testbed-node-5]
2026-03-25 03:03:51.737962 | orchestrator |
2026-03-25 03:03:51.737967 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-03-25 03:03:51.737971 | orchestrator | Wednesday 25 March 2026 03:03:48 +0000 (0:00:00.348) 0:09:27.827 *******
2026-03-25 03:03:51.737976 | orchestrator | ok: [testbed-node-3]
2026-03-25 03:03:51.737981 | orchestrator | ok: [testbed-node-4]
2026-03-25 03:03:51.737986 | orchestrator | ok: [testbed-node-5]
2026-03-25 03:03:51.737990 | orchestrator |
2026-03-25 03:03:51.737995 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-03-25 03:03:51.738000 | orchestrator | Wednesday 25 March 2026 03:03:49 +0000 (0:00:00.729) 0:09:28.556 *******
2026-03-25 03:03:51.738012 | orchestrator | ok: [testbed-node-3]
2026-03-25 03:03:51.738068 | orchestrator | ok: [testbed-node-4]
2026-03-25 03:03:51.738073 | orchestrator | ok: [testbed-node-5]
2026-03-25 03:03:51.738078 | orchestrator |
2026-03-25 03:03:51.738083 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-03-25 03:03:51.738088 | orchestrator | Wednesday 25 March 2026 03:03:49 +0000 (0:00:00.393) 0:09:28.950 *******
2026-03-25 03:03:51.738093 | orchestrator | ok: [testbed-node-3]
2026-03-25 03:03:51.738097 | orchestrator | ok: [testbed-node-4]
2026-03-25 03:03:51.738102 | orchestrator | ok: [testbed-node-5]
2026-03-25 03:03:51.738107 | orchestrator |
2026-03-25 03:03:51.738112 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-03-25 03:03:51.738117 | orchestrator | Wednesday 25 March 2026 03:03:49 +0000 (0:00:00.379) 0:09:29.330 *******
2026-03-25 03:03:51.738121 | orchestrator | skipping: [testbed-node-3]
2026-03-25 03:03:51.738126 | orchestrator | skipping: [testbed-node-4]
2026-03-25 03:03:51.738131 | orchestrator | skipping: [testbed-node-5]
2026-03-25 03:03:51.738135 | orchestrator |
2026-03-25 03:03:51.738140 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-03-25 03:03:51.738145 | orchestrator | Wednesday 25 March 2026 03:03:50 +0000 (0:00:00.353) 0:09:29.684 *******
2026-03-25 03:03:51.738150 | orchestrator | skipping: [testbed-node-3]
2026-03-25 03:03:51.738155 | orchestrator | skipping: [testbed-node-4]
2026-03-25 03:03:51.738159 | orchestrator | skipping: [testbed-node-5]
2026-03-25 03:03:51.738164 | orchestrator |
2026-03-25 03:03:51.738169 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-03-25 03:03:51.738174 | orchestrator | Wednesday 25 March 2026 03:03:50 +0000 (0:00:00.712) 0:09:30.396 *******
2026-03-25 03:03:51.738178 | orchestrator | skipping: [testbed-node-3]
2026-03-25 03:03:51.738183 | orchestrator | skipping: [testbed-node-4]
2026-03-25 03:03:51.738188 | orchestrator | skipping: [testbed-node-5]
2026-03-25 03:03:51.738192 | orchestrator |
2026-03-25 03:03:51.738197 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-03-25 03:03:51.738202 | orchestrator | Wednesday 25 March 2026 03:03:51 +0000 (0:00:00.382) 0:09:30.778 *******
2026-03-25 03:03:51.738207 | orchestrator | ok: [testbed-node-3]
2026-03-25 03:03:51.738211 | orchestrator | ok: [testbed-node-4]
2026-03-25 03:03:51.738216 | orchestrator | ok: [testbed-node-5]
2026-03-25 03:03:51.738221 | orchestrator |
2026-03-25 03:03:51.738235 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-03-25 03:04:30.128146 | orchestrator | Wednesday 25 March 2026 03:03:51 +0000 (0:00:00.401) 0:09:31.180 *******
2026-03-25 03:04:30.128288 | orchestrator | ok: [testbed-node-3]
2026-03-25 03:04:30.128314 | orchestrator | ok: [testbed-node-4]
2026-03-25 03:04:30.128333 | orchestrator | ok: [testbed-node-5]
2026-03-25 03:04:30.128350 | orchestrator |
2026-03-25 03:04:30.128368 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] ***************************
2026-03-25 03:04:30.128385 | orchestrator | Wednesday 25 March 2026 03:03:52 +0000 (0:00:00.966) 0:09:32.147 *******
2026-03-25 03:04:30.128402 | orchestrator | skipping: [testbed-node-4]
2026-03-25 03:04:30.128419 | orchestrator | skipping: [testbed-node-5]
2026-03-25 03:04:30.128437 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3
2026-03-25 03:04:30.128455 | orchestrator |
2026-03-25 03:04:30.128471 | orchestrator | TASK [ceph-facts : Get current default crush rule details] *********************
2026-03-25 03:04:30.128487 | orchestrator | Wednesday 25 March 2026 03:03:53 +0000 (0:00:00.455) 0:09:32.603 *******
2026-03-25 03:04:30.128503 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-03-25 03:04:30.128519 | orchestrator |
2026-03-25 03:04:30.128535 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************
2026-03-25 03:04:30.128553 | orchestrator | Wednesday 25 March 2026 03:03:55 +0000 (0:00:02.108) 0:09:34.711 *******
2026-03-25 03:04:30.128570 | orchestrator | skipping: [testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})
2026-03-25 03:04:30.128622 | orchestrator | skipping: [testbed-node-3]
2026-03-25 03:04:30.128640 | orchestrator |
2026-03-25 03:04:30.128657 | orchestrator | TASK [ceph-mds : Create filesystem pools] **************************************
2026-03-25 03:04:30.128675 | orchestrator | Wednesday 25 March 2026 03:03:55 +0000 (0:00:00.254) 0:09:34.966 *******
2026-03-25 03:04:30.128713 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-03-25 03:04:30.128740 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-03-25 03:04:30.128758 | orchestrator |
2026-03-25 03:04:30.128776 | orchestrator | TASK [ceph-mds : Create ceph filesystem] ***************************************
2026-03-25 03:04:30.128824 | orchestrator | Wednesday 25 March 2026 03:04:02 +0000 (0:00:07.324) 0:09:42.290 *******
2026-03-25 03:04:30.128841 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-03-25 03:04:30.128856 | orchestrator |
2026-03-25 03:04:30.128872 | orchestrator | TASK [ceph-mds : Include common.yml] *******************************************
2026-03-25 03:04:30.128889 | orchestrator | Wednesday 25 March 2026 03:04:06 +0000 (0:00:03.510) 0:09:45.801 *******
2026-03-25 03:04:30.128905 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-25 03:04:30.128923 | orchestrator |
2026-03-25 03:04:30.128939 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] *********************
2026-03-25 03:04:30.128987 | orchestrator | Wednesday 25 March 2026 03:04:07 +0000 (0:00:00.925) 0:09:46.727 *******
2026-03-25 03:04:30.129005 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/)
2026-03-25 03:04:30.129023 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/)
2026-03-25 03:04:30.129040 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/)
2026-03-25 03:04:30.129056 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3)
2026-03-25 03:04:30.129072 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4)
2026-03-25 03:04:30.129088 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5)
2026-03-25 03:04:30.129104 | orchestrator |
2026-03-25 03:04:30.129120 | orchestrator | TASK [ceph-mds : Get keys from monitors] ***************************************
2026-03-25 03:04:30.129137 | orchestrator | Wednesday 25 March 2026 03:04:08 +0000 (0:00:01.123) 0:09:47.850 *******
2026-03-25 03:04:30.129153 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-25 03:04:30.129168 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-03-25 03:04:30.129185 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-03-25 03:04:30.129201 | orchestrator |
2026-03-25 03:04:30.129218 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] ***********************************
2026-03-25 03:04:30.129235 | orchestrator | Wednesday 25 March 2026 03:04:10 +0000 (0:00:02.046) 0:09:49.897 *******
2026-03-25 03:04:30.129251 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-03-25 03:04:30.129268 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-03-25 03:04:30.129285 | orchestrator | changed: [testbed-node-3]
2026-03-25 03:04:30.129301 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-03-25 03:04:30.129318 | orchestrator | skipping: [testbed-node-4] => (item=None)
2026-03-25 03:04:30.129334 | orchestrator | changed: [testbed-node-4]
2026-03-25 03:04:30.129351 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-03-25 03:04:30.129368 | orchestrator | skipping: [testbed-node-5] => (item=None)
2026-03-25 03:04:30.129400 | orchestrator | changed: [testbed-node-5]
2026-03-25 03:04:30.129417 | orchestrator |
2026-03-25 03:04:30.129433 | orchestrator | TASK [ceph-mds : Create mds keyring] *******************************************
2026-03-25 03:04:30.129474 | orchestrator | Wednesday 25 March 2026 03:04:11 +0000 (0:00:01.192) 0:09:51.089 *******
2026-03-25 03:04:30.129490 | orchestrator | changed: [testbed-node-3]
2026-03-25 03:04:30.129506 | orchestrator | changed: [testbed-node-4]
2026-03-25 03:04:30.129523 | orchestrator | changed: [testbed-node-5]
2026-03-25 03:04:30.129540 | orchestrator |
2026-03-25 03:04:30.129557 | orchestrator | TASK [ceph-mds : Non_containerized.yml] ****************************************
2026-03-25
03:04:30.129569 | orchestrator | Wednesday 25 March 2026 03:04:14 +0000 (0:00:02.994) 0:09:54.083 ******* 2026-03-25 03:04:30.129578 | orchestrator | skipping: [testbed-node-3] 2026-03-25 03:04:30.129588 | orchestrator | skipping: [testbed-node-4] 2026-03-25 03:04:30.129597 | orchestrator | skipping: [testbed-node-5] 2026-03-25 03:04:30.129607 | orchestrator | 2026-03-25 03:04:30.129616 | orchestrator | TASK [ceph-mds : Containerized.yml] ******************************************** 2026-03-25 03:04:30.129625 | orchestrator | Wednesday 25 March 2026 03:04:14 +0000 (0:00:00.372) 0:09:54.456 ******* 2026-03-25 03:04:30.129635 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-25 03:04:30.129645 | orchestrator | 2026-03-25 03:04:30.129654 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************ 2026-03-25 03:04:30.129664 | orchestrator | Wednesday 25 March 2026 03:04:15 +0000 (0:00:00.954) 0:09:55.411 ******* 2026-03-25 03:04:30.129673 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-25 03:04:30.129682 | orchestrator | 2026-03-25 03:04:30.129692 | orchestrator | TASK [ceph-mds : Generate systemd unit file] *********************************** 2026-03-25 03:04:30.129701 | orchestrator | Wednesday 25 March 2026 03:04:16 +0000 (0:00:00.706) 0:09:56.117 ******* 2026-03-25 03:04:30.129711 | orchestrator | changed: [testbed-node-3] 2026-03-25 03:04:30.129720 | orchestrator | changed: [testbed-node-4] 2026-03-25 03:04:30.129730 | orchestrator | changed: [testbed-node-5] 2026-03-25 03:04:30.129739 | orchestrator | 2026-03-25 03:04:30.129748 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************ 2026-03-25 03:04:30.129766 | orchestrator | Wednesday 25 March 2026 03:04:17 +0000 (0:00:01.272) 0:09:57.389 ******* 2026-03-25 03:04:30.129776 
| orchestrator | changed: [testbed-node-3] 2026-03-25 03:04:30.129812 | orchestrator | changed: [testbed-node-4] 2026-03-25 03:04:30.129831 | orchestrator | changed: [testbed-node-5] 2026-03-25 03:04:30.129845 | orchestrator | 2026-03-25 03:04:30.129855 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] *************************************** 2026-03-25 03:04:30.129863 | orchestrator | Wednesday 25 March 2026 03:04:19 +0000 (0:00:01.457) 0:09:58.847 ******* 2026-03-25 03:04:30.129870 | orchestrator | changed: [testbed-node-3] 2026-03-25 03:04:30.129878 | orchestrator | changed: [testbed-node-5] 2026-03-25 03:04:30.129886 | orchestrator | changed: [testbed-node-4] 2026-03-25 03:04:30.129894 | orchestrator | 2026-03-25 03:04:30.129902 | orchestrator | TASK [ceph-mds : Systemd start mds container] ********************************** 2026-03-25 03:04:30.129909 | orchestrator | Wednesday 25 March 2026 03:04:21 +0000 (0:00:01.797) 0:10:00.645 ******* 2026-03-25 03:04:30.129917 | orchestrator | changed: [testbed-node-3] 2026-03-25 03:04:30.129925 | orchestrator | changed: [testbed-node-5] 2026-03-25 03:04:30.129933 | orchestrator | changed: [testbed-node-4] 2026-03-25 03:04:30.129940 | orchestrator | 2026-03-25 03:04:30.129948 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] ********************************* 2026-03-25 03:04:30.129956 | orchestrator | Wednesday 25 March 2026 03:04:23 +0000 (0:00:01.904) 0:10:02.550 ******* 2026-03-25 03:04:30.129964 | orchestrator | ok: [testbed-node-3] 2026-03-25 03:04:30.129972 | orchestrator | ok: [testbed-node-4] 2026-03-25 03:04:30.129979 | orchestrator | ok: [testbed-node-5] 2026-03-25 03:04:30.129987 | orchestrator | 2026-03-25 03:04:30.129995 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-03-25 03:04:30.130010 | orchestrator | Wednesday 25 March 2026 03:04:24 +0000 (0:00:01.659) 0:10:04.209 ******* 2026-03-25 03:04:30.130127 | orchestrator | changed: 
[testbed-node-3] 2026-03-25 03:04:30.130137 | orchestrator | changed: [testbed-node-4] 2026-03-25 03:04:30.130145 | orchestrator | changed: [testbed-node-5] 2026-03-25 03:04:30.130153 | orchestrator | 2026-03-25 03:04:30.130161 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-03-25 03:04:30.130169 | orchestrator | Wednesday 25 March 2026 03:04:25 +0000 (0:00:00.787) 0:10:04.996 ******* 2026-03-25 03:04:30.130177 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-25 03:04:30.130185 | orchestrator | 2026-03-25 03:04:30.130193 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ******** 2026-03-25 03:04:30.130201 | orchestrator | Wednesday 25 March 2026 03:04:26 +0000 (0:00:00.917) 0:10:05.914 ******* 2026-03-25 03:04:30.130208 | orchestrator | ok: [testbed-node-3] 2026-03-25 03:04:30.130216 | orchestrator | ok: [testbed-node-4] 2026-03-25 03:04:30.130224 | orchestrator | ok: [testbed-node-5] 2026-03-25 03:04:30.130231 | orchestrator | 2026-03-25 03:04:30.130239 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2026-03-25 03:04:30.130247 | orchestrator | Wednesday 25 March 2026 03:04:26 +0000 (0:00:00.397) 0:10:06.311 ******* 2026-03-25 03:04:30.130255 | orchestrator | changed: [testbed-node-3] 2026-03-25 03:04:30.130263 | orchestrator | changed: [testbed-node-4] 2026-03-25 03:04:30.130270 | orchestrator | changed: [testbed-node-5] 2026-03-25 03:04:30.130278 | orchestrator | 2026-03-25 03:04:30.130286 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2026-03-25 03:04:30.130294 | orchestrator | Wednesday 25 March 2026 03:04:28 +0000 (0:00:01.256) 0:10:07.568 ******* 2026-03-25 03:04:30.130301 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-25 03:04:30.130309 | 
orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-25 03:04:30.130317 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-25 03:04:30.130325 | orchestrator | skipping: [testbed-node-3] 2026-03-25 03:04:30.130333 | orchestrator | 2026-03-25 03:04:30.130341 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2026-03-25 03:04:30.130349 | orchestrator | Wednesday 25 March 2026 03:04:29 +0000 (0:00:01.021) 0:10:08.589 ******* 2026-03-25 03:04:30.130357 | orchestrator | ok: [testbed-node-3] 2026-03-25 03:04:30.130365 | orchestrator | ok: [testbed-node-4] 2026-03-25 03:04:30.130382 | orchestrator | ok: [testbed-node-5] 2026-03-25 03:04:49.711533 | orchestrator | 2026-03-25 03:04:49.711681 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2026-03-25 03:04:49.711696 | orchestrator | 2026-03-25 03:04:49.711706 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-25 03:04:49.711724 | orchestrator | Wednesday 25 March 2026 03:04:30 +0000 (0:00:00.981) 0:10:09.571 ******* 2026-03-25 03:04:49.711744 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-25 03:04:49.711758 | orchestrator | 2026-03-25 03:04:49.711771 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-25 03:04:49.711784 | orchestrator | Wednesday 25 March 2026 03:04:30 +0000 (0:00:00.634) 0:10:10.206 ******* 2026-03-25 03:04:49.711845 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-25 03:04:49.711859 | orchestrator | 2026-03-25 03:04:49.711873 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-25 03:04:49.711886 | 
orchestrator | Wednesday 25 March 2026 03:04:31 +0000 (0:00:00.881) 0:10:11.087 ******* 2026-03-25 03:04:49.711900 | orchestrator | skipping: [testbed-node-3] 2026-03-25 03:04:49.711915 | orchestrator | skipping: [testbed-node-4] 2026-03-25 03:04:49.711927 | orchestrator | skipping: [testbed-node-5] 2026-03-25 03:04:49.711968 | orchestrator | 2026-03-25 03:04:49.711977 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-25 03:04:49.711985 | orchestrator | Wednesday 25 March 2026 03:04:31 +0000 (0:00:00.367) 0:10:11.455 ******* 2026-03-25 03:04:49.711993 | orchestrator | ok: [testbed-node-3] 2026-03-25 03:04:49.712002 | orchestrator | ok: [testbed-node-4] 2026-03-25 03:04:49.712010 | orchestrator | ok: [testbed-node-5] 2026-03-25 03:04:49.712018 | orchestrator | 2026-03-25 03:04:49.712026 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-25 03:04:49.712035 | orchestrator | Wednesday 25 March 2026 03:04:32 +0000 (0:00:00.729) 0:10:12.185 ******* 2026-03-25 03:04:49.712045 | orchestrator | ok: [testbed-node-3] 2026-03-25 03:04:49.712066 | orchestrator | ok: [testbed-node-4] 2026-03-25 03:04:49.712075 | orchestrator | ok: [testbed-node-5] 2026-03-25 03:04:49.712084 | orchestrator | 2026-03-25 03:04:49.712092 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-25 03:04:49.712101 | orchestrator | Wednesday 25 March 2026 03:04:33 +0000 (0:00:01.052) 0:10:13.238 ******* 2026-03-25 03:04:49.712111 | orchestrator | ok: [testbed-node-3] 2026-03-25 03:04:49.712120 | orchestrator | ok: [testbed-node-4] 2026-03-25 03:04:49.712128 | orchestrator | ok: [testbed-node-5] 2026-03-25 03:04:49.712137 | orchestrator | 2026-03-25 03:04:49.712146 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-25 03:04:49.712156 | orchestrator | Wednesday 25 March 2026 03:04:34 +0000 
(0:00:00.771) 0:10:14.010 ******* 2026-03-25 03:04:49.712165 | orchestrator | skipping: [testbed-node-3] 2026-03-25 03:04:49.712173 | orchestrator | skipping: [testbed-node-4] 2026-03-25 03:04:49.712180 | orchestrator | skipping: [testbed-node-5] 2026-03-25 03:04:49.712188 | orchestrator | 2026-03-25 03:04:49.712196 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-25 03:04:49.712203 | orchestrator | Wednesday 25 March 2026 03:04:34 +0000 (0:00:00.429) 0:10:14.439 ******* 2026-03-25 03:04:49.712212 | orchestrator | skipping: [testbed-node-3] 2026-03-25 03:04:49.712220 | orchestrator | skipping: [testbed-node-4] 2026-03-25 03:04:49.712228 | orchestrator | skipping: [testbed-node-5] 2026-03-25 03:04:49.712235 | orchestrator | 2026-03-25 03:04:49.712243 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-25 03:04:49.712251 | orchestrator | Wednesday 25 March 2026 03:04:35 +0000 (0:00:00.369) 0:10:14.809 ******* 2026-03-25 03:04:49.712258 | orchestrator | skipping: [testbed-node-3] 2026-03-25 03:04:49.712266 | orchestrator | skipping: [testbed-node-4] 2026-03-25 03:04:49.712274 | orchestrator | skipping: [testbed-node-5] 2026-03-25 03:04:49.712281 | orchestrator | 2026-03-25 03:04:49.712289 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-25 03:04:49.712297 | orchestrator | Wednesday 25 March 2026 03:04:36 +0000 (0:00:00.734) 0:10:15.543 ******* 2026-03-25 03:04:49.712305 | orchestrator | ok: [testbed-node-3] 2026-03-25 03:04:49.712313 | orchestrator | ok: [testbed-node-4] 2026-03-25 03:04:49.712320 | orchestrator | ok: [testbed-node-5] 2026-03-25 03:04:49.712328 | orchestrator | 2026-03-25 03:04:49.712336 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-25 03:04:49.712343 | orchestrator | Wednesday 25 March 2026 03:04:36 +0000 (0:00:00.786) 
0:10:16.330 ******* 2026-03-25 03:04:49.712351 | orchestrator | ok: [testbed-node-3] 2026-03-25 03:04:49.712359 | orchestrator | ok: [testbed-node-4] 2026-03-25 03:04:49.712366 | orchestrator | ok: [testbed-node-5] 2026-03-25 03:04:49.712374 | orchestrator | 2026-03-25 03:04:49.712382 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-25 03:04:49.712390 | orchestrator | Wednesday 25 March 2026 03:04:37 +0000 (0:00:00.773) 0:10:17.104 ******* 2026-03-25 03:04:49.712397 | orchestrator | skipping: [testbed-node-3] 2026-03-25 03:04:49.712405 | orchestrator | skipping: [testbed-node-4] 2026-03-25 03:04:49.712413 | orchestrator | skipping: [testbed-node-5] 2026-03-25 03:04:49.712420 | orchestrator | 2026-03-25 03:04:49.712428 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-25 03:04:49.712442 | orchestrator | Wednesday 25 March 2026 03:04:38 +0000 (0:00:00.363) 0:10:17.467 ******* 2026-03-25 03:04:49.712450 | orchestrator | skipping: [testbed-node-3] 2026-03-25 03:04:49.712458 | orchestrator | skipping: [testbed-node-4] 2026-03-25 03:04:49.712466 | orchestrator | skipping: [testbed-node-5] 2026-03-25 03:04:49.712474 | orchestrator | 2026-03-25 03:04:49.712482 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-25 03:04:49.712490 | orchestrator | Wednesday 25 March 2026 03:04:38 +0000 (0:00:00.665) 0:10:18.133 ******* 2026-03-25 03:04:49.712497 | orchestrator | ok: [testbed-node-3] 2026-03-25 03:04:49.712505 | orchestrator | ok: [testbed-node-4] 2026-03-25 03:04:49.712513 | orchestrator | ok: [testbed-node-5] 2026-03-25 03:04:49.712521 | orchestrator | 2026-03-25 03:04:49.712529 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-25 03:04:49.712536 | orchestrator | Wednesday 25 March 2026 03:04:39 +0000 (0:00:00.434) 0:10:18.568 ******* 2026-03-25 
03:04:49.712544 | orchestrator | ok: [testbed-node-3] 2026-03-25 03:04:49.712569 | orchestrator | ok: [testbed-node-4] 2026-03-25 03:04:49.712578 | orchestrator | ok: [testbed-node-5] 2026-03-25 03:04:49.712586 | orchestrator | 2026-03-25 03:04:49.712615 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-25 03:04:49.712624 | orchestrator | Wednesday 25 March 2026 03:04:39 +0000 (0:00:00.434) 0:10:19.002 ******* 2026-03-25 03:04:49.712643 | orchestrator | ok: [testbed-node-3] 2026-03-25 03:04:49.712660 | orchestrator | ok: [testbed-node-4] 2026-03-25 03:04:49.712668 | orchestrator | ok: [testbed-node-5] 2026-03-25 03:04:49.712675 | orchestrator | 2026-03-25 03:04:49.712683 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-25 03:04:49.712691 | orchestrator | Wednesday 25 March 2026 03:04:39 +0000 (0:00:00.414) 0:10:19.417 ******* 2026-03-25 03:04:49.712699 | orchestrator | skipping: [testbed-node-3] 2026-03-25 03:04:49.712707 | orchestrator | skipping: [testbed-node-4] 2026-03-25 03:04:49.712715 | orchestrator | skipping: [testbed-node-5] 2026-03-25 03:04:49.712722 | orchestrator | 2026-03-25 03:04:49.712731 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-25 03:04:49.712739 | orchestrator | Wednesday 25 March 2026 03:04:40 +0000 (0:00:00.675) 0:10:20.092 ******* 2026-03-25 03:04:49.712746 | orchestrator | skipping: [testbed-node-3] 2026-03-25 03:04:49.712754 | orchestrator | skipping: [testbed-node-4] 2026-03-25 03:04:49.712762 | orchestrator | skipping: [testbed-node-5] 2026-03-25 03:04:49.712770 | orchestrator | 2026-03-25 03:04:49.712778 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-25 03:04:49.712786 | orchestrator | Wednesday 25 March 2026 03:04:41 +0000 (0:00:00.375) 0:10:20.467 ******* 2026-03-25 03:04:49.712818 | orchestrator | 
skipping: [testbed-node-3] 2026-03-25 03:04:49.712831 | orchestrator | skipping: [testbed-node-4] 2026-03-25 03:04:49.712844 | orchestrator | skipping: [testbed-node-5] 2026-03-25 03:04:49.712858 | orchestrator | 2026-03-25 03:04:49.712871 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-25 03:04:49.712885 | orchestrator | Wednesday 25 March 2026 03:04:41 +0000 (0:00:00.401) 0:10:20.869 ******* 2026-03-25 03:04:49.712893 | orchestrator | ok: [testbed-node-3] 2026-03-25 03:04:49.712901 | orchestrator | ok: [testbed-node-4] 2026-03-25 03:04:49.712909 | orchestrator | ok: [testbed-node-5] 2026-03-25 03:04:49.712917 | orchestrator | 2026-03-25 03:04:49.712930 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-25 03:04:49.712938 | orchestrator | Wednesday 25 March 2026 03:04:41 +0000 (0:00:00.386) 0:10:21.256 ******* 2026-03-25 03:04:49.712946 | orchestrator | ok: [testbed-node-3] 2026-03-25 03:04:49.712954 | orchestrator | ok: [testbed-node-4] 2026-03-25 03:04:49.712961 | orchestrator | ok: [testbed-node-5] 2026-03-25 03:04:49.712969 | orchestrator | 2026-03-25 03:04:49.712977 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2026-03-25 03:04:49.712985 | orchestrator | Wednesday 25 March 2026 03:04:42 +0000 (0:00:00.972) 0:10:22.228 ******* 2026-03-25 03:04:49.713000 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-25 03:04:49.713009 | orchestrator | 2026-03-25 03:04:49.713017 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-03-25 03:04:49.713025 | orchestrator | Wednesday 25 March 2026 03:04:43 +0000 (0:00:00.663) 0:10:22.891 ******* 2026-03-25 03:04:49.713033 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-25 03:04:49.713041 | 
orchestrator | skipping: [testbed-node-3] => (item=None)  2026-03-25 03:04:49.713050 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-25 03:04:49.713057 | orchestrator | 2026-03-25 03:04:49.713065 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-03-25 03:04:49.713073 | orchestrator | Wednesday 25 March 2026 03:04:45 +0000 (0:00:02.449) 0:10:25.341 ******* 2026-03-25 03:04:49.713081 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-03-25 03:04:49.713089 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-03-25 03:04:49.713097 | orchestrator | changed: [testbed-node-3] 2026-03-25 03:04:49.713104 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-03-25 03:04:49.713112 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-03-25 03:04:49.713121 | orchestrator | changed: [testbed-node-4] 2026-03-25 03:04:49.713137 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-25 03:04:49.713156 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-03-25 03:04:49.713168 | orchestrator | changed: [testbed-node-5] 2026-03-25 03:04:49.713181 | orchestrator | 2026-03-25 03:04:49.713192 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] ********** 2026-03-25 03:04:49.713205 | orchestrator | Wednesday 25 March 2026 03:04:47 +0000 (0:00:01.602) 0:10:26.943 ******* 2026-03-25 03:04:49.713217 | orchestrator | skipping: [testbed-node-3] 2026-03-25 03:04:49.713230 | orchestrator | skipping: [testbed-node-4] 2026-03-25 03:04:49.713243 | orchestrator | skipping: [testbed-node-5] 2026-03-25 03:04:49.713256 | orchestrator | 2026-03-25 03:04:49.713269 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2026-03-25 03:04:49.713282 | orchestrator | Wednesday 25 March 2026 03:04:47 +0000 (0:00:00.378) 0:10:27.322 ******* 2026-03-25 03:04:49.713296 | 
orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-25 03:04:49.713305 | orchestrator | 2026-03-25 03:04:49.713313 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2026-03-25 03:04:49.713321 | orchestrator | Wednesday 25 March 2026 03:04:48 +0000 (0:00:00.627) 0:10:27.949 ******* 2026-03-25 03:04:49.713329 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-03-25 03:04:49.713340 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-03-25 03:04:49.713358 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-03-25 03:05:39.710369 | orchestrator | 2026-03-25 03:05:39.710463 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2026-03-25 03:05:39.710473 | orchestrator | Wednesday 25 March 2026 03:04:49 +0000 (0:00:01.203) 0:10:29.153 ******* 2026-03-25 03:05:39.710478 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-25 03:05:39.710484 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-03-25 03:05:39.710489 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-25 03:05:39.710493 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-25 03:05:39.710514 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 
'localhost' }}] 2026-03-25 03:05:39.710519 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-03-25 03:05:39.710523 | orchestrator | 2026-03-25 03:05:39.710527 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-03-25 03:05:39.710531 | orchestrator | Wednesday 25 March 2026 03:04:53 +0000 (0:00:04.235) 0:10:33.388 ******* 2026-03-25 03:05:39.710535 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-25 03:05:39.710540 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-25 03:05:39.710544 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-25 03:05:39.710547 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-25 03:05:39.710551 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-25 03:05:39.710565 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-25 03:05:39.710569 | orchestrator | 2026-03-25 03:05:39.710573 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-03-25 03:05:39.710577 | orchestrator | Wednesday 25 March 2026 03:04:56 +0000 (0:00:02.443) 0:10:35.831 ******* 2026-03-25 03:05:39.710581 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-03-25 03:05:39.710586 | orchestrator | changed: [testbed-node-3] 2026-03-25 03:05:39.710590 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-03-25 03:05:39.710602 | orchestrator | changed: [testbed-node-4] 2026-03-25 03:05:39.710605 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-25 03:05:39.710609 | orchestrator | changed: [testbed-node-5] 2026-03-25 03:05:39.710613 | orchestrator | 2026-03-25 03:05:39.710617 | orchestrator | TASK [ceph-rgw : Rgw pool creation 
tasks] ************************************** 2026-03-25 03:05:39.710620 | orchestrator | Wednesday 25 March 2026 03:04:57 +0000 (0:00:01.507) 0:10:37.339 ******* 2026-03-25 03:05:39.710624 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2026-03-25 03:05:39.710628 | orchestrator | 2026-03-25 03:05:39.710632 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2026-03-25 03:05:39.710635 | orchestrator | Wednesday 25 March 2026 03:04:58 +0000 (0:00:00.252) 0:10:37.591 ******* 2026-03-25 03:05:39.710639 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-25 03:05:39.710643 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-25 03:05:39.710647 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-25 03:05:39.710651 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-25 03:05:39.710655 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-25 03:05:39.710658 | orchestrator | skipping: [testbed-node-3] 2026-03-25 03:05:39.710662 | orchestrator | 2026-03-25 03:05:39.710666 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2026-03-25 03:05:39.710669 | orchestrator | Wednesday 25 March 2026 03:04:58 +0000 (0:00:00.712) 0:10:38.303 ******* 2026-03-25 03:05:39.710673 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-25 03:05:39.710677 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-25 03:05:39.710681 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-25 03:05:39.710688 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-25 03:05:39.710692 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-25 03:05:39.710696 | orchestrator | skipping: [testbed-node-3] 2026-03-25 03:05:39.710700 | orchestrator | 2026-03-25 03:05:39.710704 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2026-03-25 03:05:39.710708 | orchestrator | Wednesday 25 March 2026 03:04:59 +0000 (0:00:00.716) 0:10:39.020 ******* 2026-03-25 03:05:39.710722 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-25 03:05:39.710728 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-25 03:05:39.710732 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-25 03:05:39.710736 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-25 03:05:39.710740 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 
'replicated'}}) 2026-03-25 03:05:39.710743 | orchestrator | 2026-03-25 03:05:39.710747 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2026-03-25 03:05:39.710751 | orchestrator | Wednesday 25 March 2026 03:05:28 +0000 (0:00:28.667) 0:11:07.687 ******* 2026-03-25 03:05:39.710755 | orchestrator | skipping: [testbed-node-3] 2026-03-25 03:05:39.710758 | orchestrator | skipping: [testbed-node-4] 2026-03-25 03:05:39.710762 | orchestrator | skipping: [testbed-node-5] 2026-03-25 03:05:39.710766 | orchestrator | 2026-03-25 03:05:39.710770 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2026-03-25 03:05:39.710773 | orchestrator | Wednesday 25 March 2026 03:05:28 +0000 (0:00:00.384) 0:11:08.072 ******* 2026-03-25 03:05:39.710777 | orchestrator | skipping: [testbed-node-3] 2026-03-25 03:05:39.710781 | orchestrator | skipping: [testbed-node-4] 2026-03-25 03:05:39.710785 | orchestrator | skipping: [testbed-node-5] 2026-03-25 03:05:39.710788 | orchestrator | 2026-03-25 03:05:39.710795 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2026-03-25 03:05:39.710799 | orchestrator | Wednesday 25 March 2026 03:05:29 +0000 (0:00:00.437) 0:11:08.510 ******* 2026-03-25 03:05:39.710802 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-25 03:05:39.710860 | orchestrator | 2026-03-25 03:05:39.710864 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] ************************************* 2026-03-25 03:05:39.710868 | orchestrator | Wednesday 25 March 2026 03:05:30 +0000 (0:00:00.981) 0:11:09.492 ******* 2026-03-25 03:05:39.710872 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-25 03:05:39.710875 | orchestrator | 2026-03-25 03:05:39.710879 | orchestrator | TASK [ceph-rgw : 
Generate systemd unit file] *********************************** 2026-03-25 03:05:39.710883 | orchestrator | Wednesday 25 March 2026 03:05:30 +0000 (0:00:00.588) 0:11:10.080 ******* 2026-03-25 03:05:39.710886 | orchestrator | changed: [testbed-node-3] 2026-03-25 03:05:39.710890 | orchestrator | changed: [testbed-node-4] 2026-03-25 03:05:39.710894 | orchestrator | changed: [testbed-node-5] 2026-03-25 03:05:39.710898 | orchestrator | 2026-03-25 03:05:39.710902 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2026-03-25 03:05:39.710905 | orchestrator | Wednesday 25 March 2026 03:05:32 +0000 (0:00:01.678) 0:11:11.759 ******* 2026-03-25 03:05:39.710913 | orchestrator | changed: [testbed-node-3] 2026-03-25 03:05:39.710916 | orchestrator | changed: [testbed-node-4] 2026-03-25 03:05:39.710920 | orchestrator | changed: [testbed-node-5] 2026-03-25 03:05:39.710925 | orchestrator | 2026-03-25 03:05:39.710929 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2026-03-25 03:05:39.710933 | orchestrator | Wednesday 25 March 2026 03:05:33 +0000 (0:00:01.176) 0:11:12.936 ******* 2026-03-25 03:05:39.710938 | orchestrator | changed: [testbed-node-3] 2026-03-25 03:05:39.710942 | orchestrator | changed: [testbed-node-5] 2026-03-25 03:05:39.710946 | orchestrator | changed: [testbed-node-4] 2026-03-25 03:05:39.710950 | orchestrator | 2026-03-25 03:05:39.710954 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2026-03-25 03:05:39.710959 | orchestrator | Wednesday 25 March 2026 03:05:35 +0000 (0:00:01.779) 0:11:14.715 ******* 2026-03-25 03:05:39.710963 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-03-25 03:05:39.710968 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 
'radosgw_frontend_port': 8081}) 2026-03-25 03:05:39.710972 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-03-25 03:05:39.710976 | orchestrator | 2026-03-25 03:05:39.710980 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-03-25 03:05:39.710985 | orchestrator | Wednesday 25 March 2026 03:05:38 +0000 (0:00:02.800) 0:11:17.516 ******* 2026-03-25 03:05:39.710989 | orchestrator | skipping: [testbed-node-3] 2026-03-25 03:05:39.710993 | orchestrator | skipping: [testbed-node-4] 2026-03-25 03:05:39.710997 | orchestrator | skipping: [testbed-node-5] 2026-03-25 03:05:39.711002 | orchestrator | 2026-03-25 03:05:39.711006 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-03-25 03:05:39.711010 | orchestrator | Wednesday 25 March 2026 03:05:38 +0000 (0:00:00.423) 0:11:17.939 ******* 2026-03-25 03:05:39.711015 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-25 03:05:39.711019 | orchestrator | 2026-03-25 03:05:39.711023 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2026-03-25 03:05:39.711028 | orchestrator | Wednesday 25 March 2026 03:05:39 +0000 (0:00:00.976) 0:11:18.916 ******* 2026-03-25 03:05:39.711036 | orchestrator | ok: [testbed-node-3] 2026-03-25 03:05:42.437164 | orchestrator | ok: [testbed-node-4] 2026-03-25 03:05:42.437268 | orchestrator | ok: [testbed-node-5] 2026-03-25 03:05:42.437283 | orchestrator | 2026-03-25 03:05:42.437296 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2026-03-25 03:05:42.437308 | orchestrator | Wednesday 25 March 2026 03:05:39 +0000 (0:00:00.378) 0:11:19.295 ******* 2026-03-25 03:05:42.437319 | orchestrator | skipping: [testbed-node-3] 2026-03-25 
03:05:42.437330 | orchestrator | skipping: [testbed-node-4] 2026-03-25 03:05:42.437342 | orchestrator | skipping: [testbed-node-5] 2026-03-25 03:05:42.437352 | orchestrator | 2026-03-25 03:05:42.437362 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2026-03-25 03:05:42.437373 | orchestrator | Wednesday 25 March 2026 03:05:40 +0000 (0:00:00.388) 0:11:19.683 ******* 2026-03-25 03:05:42.437383 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-25 03:05:42.437394 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-25 03:05:42.437405 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-25 03:05:42.437415 | orchestrator | skipping: [testbed-node-3] 2026-03-25 03:05:42.437426 | orchestrator | 2026-03-25 03:05:42.437437 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2026-03-25 03:05:42.437447 | orchestrator | Wednesday 25 March 2026 03:05:41 +0000 (0:00:01.036) 0:11:20.720 ******* 2026-03-25 03:05:42.437458 | orchestrator | ok: [testbed-node-3] 2026-03-25 03:05:42.437468 | orchestrator | ok: [testbed-node-4] 2026-03-25 03:05:42.437505 | orchestrator | ok: [testbed-node-5] 2026-03-25 03:05:42.437517 | orchestrator | 2026-03-25 03:05:42.437528 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-25 03:05:42.437540 | orchestrator | testbed-node-0 : ok=134  changed=35  unreachable=0 failed=0 skipped=125  rescued=0 ignored=0 2026-03-25 03:05:42.437567 | orchestrator | testbed-node-1 : ok=127  changed=31  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0 2026-03-25 03:05:42.437577 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0 2026-03-25 03:05:42.437588 | orchestrator | testbed-node-3 : ok=193  changed=45  unreachable=0 failed=0 skipped=162  rescued=0 ignored=0 2026-03-25 
03:05:42.437599 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0 2026-03-25 03:05:42.437609 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0 2026-03-25 03:05:42.437620 | orchestrator | 2026-03-25 03:05:42.437630 | orchestrator | 2026-03-25 03:05:42.437641 | orchestrator | 2026-03-25 03:05:42.437651 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-25 03:05:42.437661 | orchestrator | Wednesday 25 March 2026 03:05:41 +0000 (0:00:00.587) 0:11:21.308 ******* 2026-03-25 03:05:42.437671 | orchestrator | =============================================================================== 2026-03-25 03:05:42.437682 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 65.39s 2026-03-25 03:05:42.437692 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 41.68s 2026-03-25 03:05:42.437702 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 28.67s 2026-03-25 03:05:42.437713 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 24.20s 2026-03-25 03:05:42.437723 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... 
------------ 21.84s 2026-03-25 03:05:42.437734 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 14.02s 2026-03-25 03:05:42.437744 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 12.41s 2026-03-25 03:05:42.437755 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node -------------------- 10.11s 2026-03-25 03:05:42.437765 | orchestrator | ceph-mon : Fetch ceph initial keys -------------------------------------- 8.93s 2026-03-25 03:05:42.437776 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 7.32s 2026-03-25 03:05:42.437787 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 6.41s 2026-03-25 03:05:42.437797 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.38s 2026-03-25 03:05:42.437830 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 5.10s 2026-03-25 03:05:42.437843 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 4.24s 2026-03-25 03:05:42.437850 | orchestrator | ceph-osd : Apply operating system tuning -------------------------------- 4.05s 2026-03-25 03:05:42.437858 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 3.90s 2026-03-25 03:05:42.437865 | orchestrator | ceph-crash : Start the ceph-crash service ------------------------------- 3.80s 2026-03-25 03:05:42.437872 | orchestrator | ceph-mds : Create ceph filesystem --------------------------------------- 3.51s 2026-03-25 03:05:42.437880 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 3.44s 2026-03-25 03:05:42.437886 | orchestrator | ceph-container-common : Get ceph version -------------------------------- 3.43s 2026-03-25 03:05:45.214789 | orchestrator | 2026-03-25 03:05:45 | INFO  | Task c2ae4dc5-f2e6-400e-a88a-2ca315ef9e5e 
(ceph-pools) was prepared for execution. 2026-03-25 03:05:45.214936 | orchestrator | 2026-03-25 03:05:45 | INFO  | It takes a moment until task c2ae4dc5-f2e6-400e-a88a-2ca315ef9e5e (ceph-pools) has been started and output is visible here. 2026-03-25 03:06:01.701107 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-03-25 03:06:01.701203 | orchestrator | 2.16.14 2026-03-25 03:06:01.701215 | orchestrator | 2026-03-25 03:06:01.701223 | orchestrator | PLAY [Create ceph pools] ******************************************************* 2026-03-25 03:06:01.701232 | orchestrator | 2026-03-25 03:06:01.701239 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-03-25 03:06:01.701247 | orchestrator | Wednesday 25 March 2026 03:05:50 +0000 (0:00:00.757) 0:00:00.757 ******* 2026-03-25 03:06:01.701255 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-25 03:06:01.701263 | orchestrator | 2026-03-25 03:06:01.701271 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-03-25 03:06:01.701278 | orchestrator | Wednesday 25 March 2026 03:05:51 +0000 (0:00:00.759) 0:00:01.517 ******* 2026-03-25 03:06:01.701285 | orchestrator | ok: [testbed-node-3] 2026-03-25 03:06:01.701293 | orchestrator | ok: [testbed-node-4] 2026-03-25 03:06:01.701300 | orchestrator | ok: [testbed-node-5] 2026-03-25 03:06:01.701309 | orchestrator | 2026-03-25 03:06:01.701322 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-03-25 03:06:01.701335 | orchestrator | Wednesday 25 March 2026 03:05:52 +0000 (0:00:00.655) 0:00:02.172 ******* 2026-03-25 03:06:01.701347 | orchestrator | ok: [testbed-node-3] 2026-03-25 03:06:01.701359 | orchestrator | ok: [testbed-node-4] 2026-03-25 03:06:01.701371 | orchestrator | ok: [testbed-node-5] 2026-03-25 03:06:01.701384 
| orchestrator | 2026-03-25 03:06:01.701397 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-03-25 03:06:01.701410 | orchestrator | Wednesday 25 March 2026 03:05:52 +0000 (0:00:00.360) 0:00:02.532 ******* 2026-03-25 03:06:01.701423 | orchestrator | ok: [testbed-node-3] 2026-03-25 03:06:01.701430 | orchestrator | ok: [testbed-node-4] 2026-03-25 03:06:01.701437 | orchestrator | ok: [testbed-node-5] 2026-03-25 03:06:01.701445 | orchestrator | 2026-03-25 03:06:01.701467 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-03-25 03:06:01.701474 | orchestrator | Wednesday 25 March 2026 03:05:53 +0000 (0:00:00.933) 0:00:03.466 ******* 2026-03-25 03:06:01.701482 | orchestrator | ok: [testbed-node-3] 2026-03-25 03:06:01.701489 | orchestrator | ok: [testbed-node-4] 2026-03-25 03:06:01.701496 | orchestrator | ok: [testbed-node-5] 2026-03-25 03:06:01.701503 | orchestrator | 2026-03-25 03:06:01.701511 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-03-25 03:06:01.701518 | orchestrator | Wednesday 25 March 2026 03:05:53 +0000 (0:00:00.354) 0:00:03.821 ******* 2026-03-25 03:06:01.701525 | orchestrator | ok: [testbed-node-3] 2026-03-25 03:06:01.701532 | orchestrator | ok: [testbed-node-4] 2026-03-25 03:06:01.701540 | orchestrator | ok: [testbed-node-5] 2026-03-25 03:06:01.701547 | orchestrator | 2026-03-25 03:06:01.701554 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-03-25 03:06:01.701561 | orchestrator | Wednesday 25 March 2026 03:05:54 +0000 (0:00:00.381) 0:00:04.202 ******* 2026-03-25 03:06:01.701568 | orchestrator | ok: [testbed-node-3] 2026-03-25 03:06:01.701576 | orchestrator | ok: [testbed-node-4] 2026-03-25 03:06:01.701583 | orchestrator | ok: [testbed-node-5] 2026-03-25 03:06:01.701590 | orchestrator | 2026-03-25 03:06:01.701597 | orchestrator | TASK 
[ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-03-25 03:06:01.701605 | orchestrator | Wednesday 25 March 2026 03:05:54 +0000 (0:00:00.373) 0:00:04.575 ******* 2026-03-25 03:06:01.701612 | orchestrator | skipping: [testbed-node-3] 2026-03-25 03:06:01.701620 | orchestrator | skipping: [testbed-node-4] 2026-03-25 03:06:01.701627 | orchestrator | skipping: [testbed-node-5] 2026-03-25 03:06:01.701634 | orchestrator | 2026-03-25 03:06:01.701641 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-03-25 03:06:01.701670 | orchestrator | Wednesday 25 March 2026 03:05:55 +0000 (0:00:00.595) 0:00:05.171 ******* 2026-03-25 03:06:01.701678 | orchestrator | ok: [testbed-node-3] 2026-03-25 03:06:01.701687 | orchestrator | ok: [testbed-node-4] 2026-03-25 03:06:01.701695 | orchestrator | ok: [testbed-node-5] 2026-03-25 03:06:01.701704 | orchestrator | 2026-03-25 03:06:01.701712 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-03-25 03:06:01.701721 | orchestrator | Wednesday 25 March 2026 03:05:55 +0000 (0:00:00.368) 0:00:05.539 ******* 2026-03-25 03:06:01.701730 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-25 03:06:01.701738 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-25 03:06:01.701747 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-25 03:06:01.701756 | orchestrator | 2026-03-25 03:06:01.701769 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-03-25 03:06:01.701787 | orchestrator | Wednesday 25 March 2026 03:05:56 +0000 (0:00:00.849) 0:00:06.389 ******* 2026-03-25 03:06:01.701800 | orchestrator | ok: [testbed-node-3] 2026-03-25 03:06:01.701838 | orchestrator | ok: [testbed-node-4] 2026-03-25 03:06:01.701851 | 
orchestrator | ok: [testbed-node-5] 2026-03-25 03:06:01.701862 | orchestrator | 2026-03-25 03:06:01.701873 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-03-25 03:06:01.701885 | orchestrator | Wednesday 25 March 2026 03:05:56 +0000 (0:00:00.572) 0:00:06.961 ******* 2026-03-25 03:06:01.701898 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-25 03:06:01.701910 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-25 03:06:01.701921 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-25 03:06:01.701932 | orchestrator | 2026-03-25 03:06:01.701944 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-03-25 03:06:01.701956 | orchestrator | Wednesday 25 March 2026 03:05:59 +0000 (0:00:02.319) 0:00:09.281 ******* 2026-03-25 03:06:01.701970 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-03-25 03:06:01.701984 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-03-25 03:06:01.701996 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-03-25 03:06:01.702008 | orchestrator | skipping: [testbed-node-3] 2026-03-25 03:06:01.702128 | orchestrator | 2026-03-25 03:06:01.702168 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-03-25 03:06:01.702181 | orchestrator | Wednesday 25 March 2026 03:05:59 +0000 (0:00:00.750) 0:00:10.032 ******* 2026-03-25 03:06:01.702195 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-03-25 03:06:01.702210 | orchestrator | skipping: [testbed-node-3] => (item={'changed': 
False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-03-25 03:06:01.702221 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-03-25 03:06:01.702232 | orchestrator | skipping: [testbed-node-3] 2026-03-25 03:06:01.702244 | orchestrator | 2026-03-25 03:06:01.702257 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-03-25 03:06:01.702269 | orchestrator | Wednesday 25 March 2026 03:06:01 +0000 (0:00:01.346) 0:00:11.378 ******* 2026-03-25 03:06:01.702292 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-25 03:06:01.702323 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-25 03:06:01.702337 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 
'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-25 03:06:01.702347 | orchestrator | skipping: [testbed-node-3] 2026-03-25 03:06:01.702354 | orchestrator | 2026-03-25 03:06:01.702362 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-03-25 03:06:01.702369 | orchestrator | Wednesday 25 March 2026 03:06:01 +0000 (0:00:00.205) 0:00:11.584 ******* 2026-03-25 03:06:01.702378 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '928ffe0e6efa', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-25 03:05:57.831234', 'end': '2026-03-25 03:05:57.877511', 'delta': '0:00:00.046277', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['928ffe0e6efa'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-03-25 03:06:01.702389 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'cb4e3d9a68a8', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-25 03:05:58.409499', 'end': '2026-03-25 03:05:58.468105', 'delta': '0:00:00.058606', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': 
['cb4e3d9a68a8'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-03-25 03:06:01.702407 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '90e526f29e10', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-25 03:05:58.985841', 'end': '2026-03-25 03:05:59.023535', 'delta': '0:00:00.037694', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['90e526f29e10'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-03-25 03:06:09.285656 | orchestrator | 2026-03-25 03:06:09.285740 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-03-25 03:06:09.285748 | orchestrator | Wednesday 25 March 2026 03:06:01 +0000 (0:00:00.223) 0:00:11.807 ******* 2026-03-25 03:06:09.285768 | orchestrator | ok: [testbed-node-3] 2026-03-25 03:06:09.285773 | orchestrator | ok: [testbed-node-4] 2026-03-25 03:06:09.285777 | orchestrator | ok: [testbed-node-5] 2026-03-25 03:06:09.285780 | orchestrator | 2026-03-25 03:06:09.285785 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-03-25 03:06:09.285789 | orchestrator | Wednesday 25 March 2026 03:06:02 +0000 (0:00:00.522) 0:00:12.330 ******* 2026-03-25 03:06:09.285793 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2026-03-25 03:06:09.285797 | orchestrator | 2026-03-25 03:06:09.285812 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-03-25 03:06:09.285876 | 
orchestrator | Wednesday 25 March 2026 03:06:03 +0000 (0:00:01.584) 0:00:13.914 ******* 2026-03-25 03:06:09.285880 | orchestrator | skipping: [testbed-node-3] 2026-03-25 03:06:09.285884 | orchestrator | skipping: [testbed-node-4] 2026-03-25 03:06:09.285888 | orchestrator | skipping: [testbed-node-5] 2026-03-25 03:06:09.285891 | orchestrator | 2026-03-25 03:06:09.285895 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-03-25 03:06:09.285899 | orchestrator | Wednesday 25 March 2026 03:06:04 +0000 (0:00:00.331) 0:00:14.246 ******* 2026-03-25 03:06:09.285902 | orchestrator | skipping: [testbed-node-3] 2026-03-25 03:06:09.285906 | orchestrator | skipping: [testbed-node-4] 2026-03-25 03:06:09.285910 | orchestrator | skipping: [testbed-node-5] 2026-03-25 03:06:09.285913 | orchestrator | 2026-03-25 03:06:09.285917 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-25 03:06:09.285921 | orchestrator | Wednesday 25 March 2026 03:06:05 +0000 (0:00:01.041) 0:00:15.287 ******* 2026-03-25 03:06:09.285925 | orchestrator | skipping: [testbed-node-3] 2026-03-25 03:06:09.285928 | orchestrator | skipping: [testbed-node-4] 2026-03-25 03:06:09.285932 | orchestrator | skipping: [testbed-node-5] 2026-03-25 03:06:09.285936 | orchestrator | 2026-03-25 03:06:09.285940 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-03-25 03:06:09.285943 | orchestrator | Wednesday 25 March 2026 03:06:05 +0000 (0:00:00.366) 0:00:15.654 ******* 2026-03-25 03:06:09.285947 | orchestrator | ok: [testbed-node-3] 2026-03-25 03:06:09.285951 | orchestrator | 2026-03-25 03:06:09.285954 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-03-25 03:06:09.285958 | orchestrator | Wednesday 25 March 2026 03:06:05 +0000 (0:00:00.129) 0:00:15.784 ******* 2026-03-25 03:06:09.285962 | orchestrator | skipping: 
[testbed-node-3] 2026-03-25 03:06:09.285965 | orchestrator | 2026-03-25 03:06:09.285969 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-25 03:06:09.285973 | orchestrator | Wednesday 25 March 2026 03:06:05 +0000 (0:00:00.260) 0:00:16.044 ******* 2026-03-25 03:06:09.285976 | orchestrator | skipping: [testbed-node-3] 2026-03-25 03:06:09.285980 | orchestrator | skipping: [testbed-node-4] 2026-03-25 03:06:09.285984 | orchestrator | skipping: [testbed-node-5] 2026-03-25 03:06:09.285988 | orchestrator | 2026-03-25 03:06:09.285991 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-03-25 03:06:09.285995 | orchestrator | Wednesday 25 March 2026 03:06:06 +0000 (0:00:00.351) 0:00:16.396 ******* 2026-03-25 03:06:09.285999 | orchestrator | skipping: [testbed-node-3] 2026-03-25 03:06:09.286003 | orchestrator | skipping: [testbed-node-4] 2026-03-25 03:06:09.286006 | orchestrator | skipping: [testbed-node-5] 2026-03-25 03:06:09.286010 | orchestrator | 2026-03-25 03:06:09.286052 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-03-25 03:06:09.286056 | orchestrator | Wednesday 25 March 2026 03:06:06 +0000 (0:00:00.359) 0:00:16.756 ******* 2026-03-25 03:06:09.286060 | orchestrator | skipping: [testbed-node-3] 2026-03-25 03:06:09.286064 | orchestrator | skipping: [testbed-node-4] 2026-03-25 03:06:09.286067 | orchestrator | skipping: [testbed-node-5] 2026-03-25 03:06:09.286071 | orchestrator | 2026-03-25 03:06:09.286075 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-03-25 03:06:09.286079 | orchestrator | Wednesday 25 March 2026 03:06:07 +0000 (0:00:00.622) 0:00:17.379 ******* 2026-03-25 03:06:09.286087 | orchestrator | skipping: [testbed-node-3] 2026-03-25 03:06:09.286091 | orchestrator | skipping: [testbed-node-4] 2026-03-25 03:06:09.286095 | orchestrator | skipping: 
[testbed-node-5] 2026-03-25 03:06:09.286099 | orchestrator | 2026-03-25 03:06:09.286103 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-03-25 03:06:09.286106 | orchestrator | Wednesday 25 March 2026 03:06:07 +0000 (0:00:00.387) 0:00:17.766 ******* 2026-03-25 03:06:09.286110 | orchestrator | skipping: [testbed-node-3] 2026-03-25 03:06:09.286114 | orchestrator | skipping: [testbed-node-4] 2026-03-25 03:06:09.286118 | orchestrator | skipping: [testbed-node-5] 2026-03-25 03:06:09.286121 | orchestrator | 2026-03-25 03:06:09.286125 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-03-25 03:06:09.286129 | orchestrator | Wednesday 25 March 2026 03:06:08 +0000 (0:00:00.379) 0:00:18.145 ******* 2026-03-25 03:06:09.286132 | orchestrator | skipping: [testbed-node-3] 2026-03-25 03:06:09.286136 | orchestrator | skipping: [testbed-node-4] 2026-03-25 03:06:09.286139 | orchestrator | skipping: [testbed-node-5] 2026-03-25 03:06:09.286143 | orchestrator | 2026-03-25 03:06:09.286147 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-03-25 03:06:09.286151 | orchestrator | Wednesday 25 March 2026 03:06:08 +0000 (0:00:00.639) 0:00:18.785 ******* 2026-03-25 03:06:09.286155 | orchestrator | skipping: [testbed-node-3] 2026-03-25 03:06:09.286159 | orchestrator | skipping: [testbed-node-4] 2026-03-25 03:06:09.286162 | orchestrator | skipping: [testbed-node-5] 2026-03-25 03:06:09.286166 | orchestrator | 2026-03-25 03:06:09.286170 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-03-25 03:06:09.286173 | orchestrator | Wednesday 25 March 2026 03:06:09 +0000 (0:00:00.380) 0:00:19.165 ******* 2026-03-25 03:06:09.286192 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--a7f517e2--016b--5c10--ac21--20c48339115f-osd--block--a7f517e2--016b--5c10--ac21--20c48339115f', 'dm-uuid-LVM-ppL9nqq4Eft0DXjzsCdcW3axPqGhidIo63eFyg4nkEr3IXy7pO0UwAAWeQ8GeZyo'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-25 03:06:09.286205 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--2eb637af--fcba--56ed--b416--856a8f376a6e-osd--block--2eb637af--fcba--56ed--b416--856a8f376a6e', 'dm-uuid-LVM-I4brnFGe2wqMxfNLTgnFWAlpGdDDIQ6ufudluz5gbOp2W0Ru1BAN3Lof8sluy2g8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-25 03:06:09.286213 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-25 03:06:09.286220 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 
Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-25 03:06:09.286226 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-25 03:06:09.286237 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-25 03:06:09.286244 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-25 03:06:09.286250 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-25 03:06:09.286257 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': 
{'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-25 03:06:09.286271 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-25 03:06:09.405523 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5418d243-c22a-425d-8a7d-7c43bd549130', 'scsi-SQEMU_QEMU_HARDDISK_5418d243-c22a-425d-8a7d-7c43bd549130'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5418d243-c22a-425d-8a7d-7c43bd549130-part1', 'scsi-SQEMU_QEMU_HARDDISK_5418d243-c22a-425d-8a7d-7c43bd549130-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5418d243-c22a-425d-8a7d-7c43bd549130-part14', 'scsi-SQEMU_QEMU_HARDDISK_5418d243-c22a-425d-8a7d-7c43bd549130-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_5418d243-c22a-425d-8a7d-7c43bd549130-part15', 'scsi-SQEMU_QEMU_HARDDISK_5418d243-c22a-425d-8a7d-7c43bd549130-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5418d243-c22a-425d-8a7d-7c43bd549130-part16', 'scsi-SQEMU_QEMU_HARDDISK_5418d243-c22a-425d-8a7d-7c43bd549130-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-25 03:06:09.405636 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--82366886--ea97--5dba--b5cd--187414e0593f-osd--block--82366886--ea97--5dba--b5cd--187414e0593f', 'dm-uuid-LVM-1B6VDGPSmmjj7HLdTGtTln0UtIEd11ZxX0sqLUd6idXl2rnpkfAOrMye3Xxtdnqp'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-25 03:06:09.405650 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--a7f517e2--016b--5c10--ac21--20c48339115f-osd--block--a7f517e2--016b--5c10--ac21--20c48339115f'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-I510NI-gVOy-fVrn-Rpok-wKnF-L9wv-pxblpK', 'scsi-0QEMU_QEMU_HARDDISK_e0cf0e31-edea-4833-ac86-8b3021cd24a1', 'scsi-SQEMU_QEMU_HARDDISK_e0cf0e31-edea-4833-ac86-8b3021cd24a1'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-25 03:06:09.405675 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--fa1f2bca--96f4--5f59--9dac--c3efdd146138-osd--block--fa1f2bca--96f4--5f59--9dac--c3efdd146138', 'dm-uuid-LVM-qi80GQE6Tcg1H1Qaou1HQKIw0Y18K2MMiRtObCOmMljlX3NyraHv57elKkc4U5Oq'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-25 03:06:09.405690 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--2eb637af--fcba--56ed--b416--856a8f376a6e-osd--block--2eb637af--fcba--56ed--b416--856a8f376a6e'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-ot6f5w-cwBB-rMe8-ml4g-P1Wb-D3d5-I1RZ9d', 'scsi-0QEMU_QEMU_HARDDISK_eaa5e6a9-2c24-4b33-854e-103871b2e9c6', 'scsi-SQEMU_QEMU_HARDDISK_eaa5e6a9-2c24-4b33-854e-103871b2e9c6'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-25 03:06:09.405699 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-25 03:06:09.405715 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_99e65ea9-8a8c-4114-a95e-6d6b779e8981', 'scsi-SQEMU_QEMU_HARDDISK_99e65ea9-8a8c-4114-a95e-6d6b779e8981'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-25 03:06:09.405724 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-25-01-42-59-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-25 03:06:09.405732 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-25 03:06:09.405740 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2026-03-25 03:06:09.405747 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-25 03:06:09.405759 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-25 03:06:09.553799 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-25 03:06:09.553924 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-25 03:06:09.553934 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-25 03:06:09.553959 | orchestrator | skipping: [testbed-node-3] 2026-03-25 03:06:09.553972 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6cb51c54-ae34-41ee-aa7a-55f1cdeeb529', 'scsi-SQEMU_QEMU_HARDDISK_6cb51c54-ae34-41ee-aa7a-55f1cdeeb529'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6cb51c54-ae34-41ee-aa7a-55f1cdeeb529-part1', 'scsi-SQEMU_QEMU_HARDDISK_6cb51c54-ae34-41ee-aa7a-55f1cdeeb529-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6cb51c54-ae34-41ee-aa7a-55f1cdeeb529-part14', 'scsi-SQEMU_QEMU_HARDDISK_6cb51c54-ae34-41ee-aa7a-55f1cdeeb529-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6cb51c54-ae34-41ee-aa7a-55f1cdeeb529-part15', 'scsi-SQEMU_QEMU_HARDDISK_6cb51c54-ae34-41ee-aa7a-55f1cdeeb529-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6cb51c54-ae34-41ee-aa7a-55f1cdeeb529-part16', 
'scsi-SQEMU_QEMU_HARDDISK_6cb51c54-ae34-41ee-aa7a-55f1cdeeb529-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-25 03:06:09.553998 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--82366886--ea97--5dba--b5cd--187414e0593f-osd--block--82366886--ea97--5dba--b5cd--187414e0593f'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-CIqKvA-lt1d-4qQz-KNts-krwk-yQ0u-1PHslV', 'scsi-0QEMU_QEMU_HARDDISK_10d736b4-dcf8-42aa-aae6-a1381d72468f', 'scsi-SQEMU_QEMU_HARDDISK_10d736b4-dcf8-42aa-aae6-a1381d72468f'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-25 03:06:09.554065 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--fa1f2bca--96f4--5f59--9dac--c3efdd146138-osd--block--fa1f2bca--96f4--5f59--9dac--c3efdd146138'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-d5kG3K-9osj-2aIh-xjKb-72Hm-d5Wn-f2zH7s', 'scsi-0QEMU_QEMU_HARDDISK_37f05188-2a00-44e2-a0b8-7549f9da5347', 'scsi-SQEMU_QEMU_HARDDISK_37f05188-2a00-44e2-a0b8-7549f9da5347'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-25 03:06:09.554081 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3e1f7d9f-c106-4693-b0da-d762a5de4a11', 'scsi-SQEMU_QEMU_HARDDISK_3e1f7d9f-c106-4693-b0da-d762a5de4a11'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-25 03:06:09.554089 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-25-01-43-06-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-25 03:06:09.554097 | orchestrator | skipping: [testbed-node-4] 2026-03-25 03:06:09.554104 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 
'host': '', 'links': {'ids': ['dm-name-ceph--f303e98e--56ea--50bc--9e1c--3ccda4672060-osd--block--f303e98e--56ea--50bc--9e1c--3ccda4672060', 'dm-uuid-LVM-UU9fet4LjPs1QLROYR3DS61lWfbcudTJUiFeyHJNagHuqxrmYCAPg3v2ocgFP63X'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-25 03:06:09.554112 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8ec576d5--4336--523a--896e--5358117b2269-osd--block--8ec576d5--4336--523a--896e--5358117b2269', 'dm-uuid-LVM-AjTepPC9YBwKeu38Jf1R7NGMBGxHD64b1bYlOV1jbrUHbIYS3hAMWkKb5QrnOpnI'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-25 03:06:09.554119 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-25 03:06:09.554132 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-25 03:06:09.861650 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-25 03:06:09.861767 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-25 03:06:09.861810 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-25 03:06:09.861853 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-25 03:06:09.861864 | orchestrator | skipping: [testbed-node-5] => 
(item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-25 03:06:09.861874 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-25 03:06:09.861917 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0ceb4511-88da-4cb0-8dd1-61d4a7cc2ad2', 'scsi-SQEMU_QEMU_HARDDISK_0ceb4511-88da-4cb0-8dd1-61d4a7cc2ad2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0ceb4511-88da-4cb0-8dd1-61d4a7cc2ad2-part1', 'scsi-SQEMU_QEMU_HARDDISK_0ceb4511-88da-4cb0-8dd1-61d4a7cc2ad2-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0ceb4511-88da-4cb0-8dd1-61d4a7cc2ad2-part14', 'scsi-SQEMU_QEMU_HARDDISK_0ceb4511-88da-4cb0-8dd1-61d4a7cc2ad2-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0ceb4511-88da-4cb0-8dd1-61d4a7cc2ad2-part15', 'scsi-SQEMU_QEMU_HARDDISK_0ceb4511-88da-4cb0-8dd1-61d4a7cc2ad2-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0ceb4511-88da-4cb0-8dd1-61d4a7cc2ad2-part16', 'scsi-SQEMU_QEMU_HARDDISK_0ceb4511-88da-4cb0-8dd1-61d4a7cc2ad2-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-25 03:06:09.861940 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--f303e98e--56ea--50bc--9e1c--3ccda4672060-osd--block--f303e98e--56ea--50bc--9e1c--3ccda4672060'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-to62r3-CyRH-TR4y-N8rR-DKBC-8SUV-NrvEkE', 'scsi-0QEMU_QEMU_HARDDISK_04cbe055-706b-4644-9107-d77d79be5a29', 'scsi-SQEMU_QEMU_HARDDISK_04cbe055-706b-4644-9107-d77d79be5a29'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-25 03:06:09.861952 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--8ec576d5--4336--523a--896e--5358117b2269-osd--block--8ec576d5--4336--523a--896e--5358117b2269'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-FUT1Bq-riIG-e3wV-m2Zc-DHH8-HB53-ximoP3', 'scsi-0QEMU_QEMU_HARDDISK_fd5367dc-993e-4d7d-b2a6-757e2a17e9b7', 'scsi-SQEMU_QEMU_HARDDISK_fd5367dc-993e-4d7d-b2a6-757e2a17e9b7'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-25 03:06:09.861963 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_82545a3e-e213-461e-98f1-90cf18f03519', 'scsi-SQEMU_QEMU_HARDDISK_82545a3e-e213-461e-98f1-90cf18f03519'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-25 03:06:09.861979 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-25-01-43-03-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-25 03:06:09.861997 | orchestrator | skipping: [testbed-node-5] 2026-03-25 03:06:09.862098 | orchestrator | 2026-03-25 03:06:09.862124 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-03-25 03:06:09.862142 | orchestrator | Wednesday 25 March 2026 03:06:09 +0000 (0:00:00.694) 0:00:19.859 ******* 2026-03-25 03:06:09.862175 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--a7f517e2--016b--5c10--ac21--20c48339115f-osd--block--a7f517e2--016b--5c10--ac21--20c48339115f', 'dm-uuid-LVM-ppL9nqq4Eft0DXjzsCdcW3axPqGhidIo63eFyg4nkEr3IXy7pO0UwAAWeQ8GeZyo'], 'labels': [], 'masters': [], 'uuids': []}, 
'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-25 03:06:09.992542 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--2eb637af--fcba--56ed--b416--856a8f376a6e-osd--block--2eb637af--fcba--56ed--b416--856a8f376a6e', 'dm-uuid-LVM-I4brnFGe2wqMxfNLTgnFWAlpGdDDIQ6ufudluz5gbOp2W0Ru1BAN3Lof8sluy2g8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-25 03:06:09.992659 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-25 03:06:09.992677 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 
2026-03-25 03:06:09.992690 .. 2026-03-25 03:06:23.373188 | orchestrator | skipping: [testbed-node-3], [testbed-node-4], [testbed-node-5]: every per-device loop item skipped with skip_reason 'Conditional result was False', false_condition 'osd_auto_discovery | default(False) | bool'. Devices reported per node: dm-0 and dm-1 (20.00 GB Ceph OSD LVM volumes), loop0 through loop7 (0.00 Bytes), sda (80.00 GB QEMU HARDDISK with partitions sda1 cloudimg-rootfs 79.00 GB, sda14 4.00 MB, sda15 UEFI 106.00 MB, sda16 BOOT 913.00 MB), sdb and sdc (20.00 GB QEMU HARDDISK, Ceph LVM PVs backing dm-0/dm-1), sdd (20.00 GB QEMU HARDDISK, unused), and sr0 (QEMU DVD-ROM, label config-2).
2026-03-25 03:06:23.373217 | orchestrator |
2026-03-25 03:06:23.373234 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-03-25 03:06:23.373250 | orchestrator | Wednesday 25 March 2026 03:06:10 +0000 (0:00:00.764) 0:00:20.624 *******
2026-03-25 03:06:23.373265 | orchestrator | ok: [testbed-node-3]
2026-03-25 03:06:23.373278 | orchestrator | ok: [testbed-node-4]
2026-03-25 03:06:23.373293 | orchestrator | ok: [testbed-node-5]
2026-03-25 03:06:23.373307 | orchestrator |
2026-03-25 03:06:23.373323 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-03-25 03:06:23.373338 | orchestrator | Wednesday 25 March 2026 03:06:11 +0000 (0:00:00.968) 0:00:21.592 *******
2026-03-25 03:06:23.373350 | orchestrator | ok: [testbed-node-3]
2026-03-25 03:06:23.373358 | orchestrator | ok: [testbed-node-4]
2026-03-25 03:06:23.373367 | orchestrator | ok: [testbed-node-5]
2026-03-25 03:06:23.373375 | orchestrator |
2026-03-25 03:06:23.373384 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-03-25 03:06:23.373393 | orchestrator | Wednesday 25 March 2026 03:06:11 +0000 (0:00:00.340) 0:00:21.933 *******
2026-03-25 03:06:23.373401 | orchestrator | ok: [testbed-node-3]
2026-03-25 03:06:23.373410 | orchestrator | ok: [testbed-node-4]
2026-03-25 03:06:23.373421 | orchestrator | ok: [testbed-node-5]
2026-03-25 03:06:23.373447 | orchestrator |
2026-03-25 03:06:23.373478 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-03-25 03:06:23.373494 | orchestrator | Wednesday 25 March 2026 03:06:12 +0000 (0:00:00.657) 0:00:22.590 *******
2026-03-25 03:06:23.373508 | orchestrator | skipping: [testbed-node-3]
2026-03-25 03:06:23.373522 | orchestrator | skipping: [testbed-node-4]
2026-03-25 03:06:23.373537 | orchestrator | skipping: [testbed-node-5]
2026-03-25 03:06:23.373551 | orchestrator |
2026-03-25 03:06:23.373564 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-03-25 03:06:23.373578 | orchestrator | Wednesday 25 March 2026 03:06:12 +0000 (0:00:00.328) 0:00:22.918 *******
2026-03-25 03:06:23.373592 | orchestrator | skipping: [testbed-node-3]
2026-03-25 03:06:23.373606 | orchestrator | skipping: [testbed-node-4]
2026-03-25 03:06:23.373619 | orchestrator | skipping: [testbed-node-5]
2026-03-25 03:06:23.373633 | orchestrator |
2026-03-25 03:06:23.373648 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-03-25 03:06:23.373662 | orchestrator | Wednesday 25 March 2026 03:06:13 +0000 (0:00:00.798) 0:00:23.717 *******
2026-03-25 03:06:23.373676 | orchestrator | skipping: [testbed-node-3]
2026-03-25 03:06:23.373690 | orchestrator | skipping: [testbed-node-4]
2026-03-25 03:06:23.373704 | orchestrator | skipping: [testbed-node-5]
2026-03-25 03:06:23.373718 | orchestrator |
2026-03-25 03:06:23.373732 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-03-25 03:06:23.373747 | orchestrator | Wednesday 25 March 2026 03:06:13 +0000 (0:00:00.376) 0:00:24.093 *******
2026-03-25 03:06:23.373762 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2026-03-25 03:06:23.373778 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-03-25 03:06:23.373793 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2026-03-25 03:06:23.373807 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2026-03-25 03:06:23.373891 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-03-25 03:06:23.373909 | orchestrator |
ok: [testbed-node-3] => (item=testbed-node-2) 2026-03-25 03:06:23.373923 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-03-25 03:06:23.373955 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-03-25 03:06:23.373970 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-03-25 03:06:23.373986 | orchestrator | 2026-03-25 03:06:23.374000 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-03-25 03:06:23.374014 | orchestrator | Wednesday 25 March 2026 03:06:15 +0000 (0:00:01.199) 0:00:25.293 ******* 2026-03-25 03:06:23.374147 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-03-25 03:06:23.374164 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-03-25 03:06:23.374178 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-03-25 03:06:23.374193 | orchestrator | skipping: [testbed-node-3] 2026-03-25 03:06:23.374209 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-03-25 03:06:23.374223 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-03-25 03:06:23.374237 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-03-25 03:06:23.374251 | orchestrator | skipping: [testbed-node-4] 2026-03-25 03:06:23.374267 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-03-25 03:06:23.374282 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-03-25 03:06:23.374298 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-03-25 03:06:23.374313 | orchestrator | skipping: [testbed-node-5] 2026-03-25 03:06:23.374328 | orchestrator | 2026-03-25 03:06:23.374345 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-03-25 03:06:23.374362 | orchestrator | Wednesday 25 March 2026 03:06:15 +0000 (0:00:00.434) 0:00:25.728 ******* 2026-03-25 
03:06:23.374404 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-25 03:06:23.374414 | orchestrator | 2026-03-25 03:06:23.374423 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-25 03:06:23.374434 | orchestrator | Wednesday 25 March 2026 03:06:16 +0000 (0:00:00.868) 0:00:26.597 ******* 2026-03-25 03:06:23.374442 | orchestrator | skipping: [testbed-node-3] 2026-03-25 03:06:23.374451 | orchestrator | skipping: [testbed-node-4] 2026-03-25 03:06:23.374460 | orchestrator | skipping: [testbed-node-5] 2026-03-25 03:06:23.374469 | orchestrator | 2026-03-25 03:06:23.374477 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-03-25 03:06:23.374486 | orchestrator | Wednesday 25 March 2026 03:06:16 +0000 (0:00:00.345) 0:00:26.942 ******* 2026-03-25 03:06:23.374495 | orchestrator | skipping: [testbed-node-3] 2026-03-25 03:06:23.374503 | orchestrator | skipping: [testbed-node-4] 2026-03-25 03:06:23.374511 | orchestrator | skipping: [testbed-node-5] 2026-03-25 03:06:23.374520 | orchestrator | 2026-03-25 03:06:23.374528 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-03-25 03:06:23.374537 | orchestrator | Wednesday 25 March 2026 03:06:17 +0000 (0:00:00.371) 0:00:27.314 ******* 2026-03-25 03:06:23.374545 | orchestrator | skipping: [testbed-node-3] 2026-03-25 03:06:23.374554 | orchestrator | skipping: [testbed-node-4] 2026-03-25 03:06:23.374562 | orchestrator | skipping: [testbed-node-5] 2026-03-25 03:06:23.374571 | orchestrator | 2026-03-25 03:06:23.374579 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-25 03:06:23.374588 | orchestrator | Wednesday 25 March 2026 03:06:17 +0000 (0:00:00.516) 0:00:27.831 ******* 2026-03-25 
03:06:23.374596 | orchestrator | ok: [testbed-node-3] 2026-03-25 03:06:23.374605 | orchestrator | ok: [testbed-node-4] 2026-03-25 03:06:23.374613 | orchestrator | ok: [testbed-node-5] 2026-03-25 03:06:23.374622 | orchestrator | 2026-03-25 03:06:23.374631 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-25 03:06:23.374639 | orchestrator | Wednesday 25 March 2026 03:06:18 +0000 (0:00:00.448) 0:00:28.279 ******* 2026-03-25 03:06:23.374648 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-25 03:06:23.374669 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-25 03:06:23.374688 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-25 03:06:23.374697 | orchestrator | skipping: [testbed-node-3] 2026-03-25 03:06:23.374705 | orchestrator | 2026-03-25 03:06:23.374714 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-25 03:06:23.374723 | orchestrator | Wednesday 25 March 2026 03:06:18 +0000 (0:00:00.407) 0:00:28.686 ******* 2026-03-25 03:06:23.374732 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-25 03:06:23.374741 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-25 03:06:23.374749 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-25 03:06:23.374758 | orchestrator | skipping: [testbed-node-3] 2026-03-25 03:06:23.374767 | orchestrator | 2026-03-25 03:06:23.374775 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-25 03:06:23.374784 | orchestrator | Wednesday 25 March 2026 03:06:18 +0000 (0:00:00.390) 0:00:29.077 ******* 2026-03-25 03:06:23.374792 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-25 03:06:23.374801 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-25 03:06:23.374810 | 
orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-25 03:06:23.374880 | orchestrator | skipping: [testbed-node-3] 2026-03-25 03:06:23.374891 | orchestrator | 2026-03-25 03:06:23.374899 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-25 03:06:23.374908 | orchestrator | Wednesday 25 March 2026 03:06:19 +0000 (0:00:00.375) 0:00:29.453 ******* 2026-03-25 03:06:23.374916 | orchestrator | ok: [testbed-node-3] 2026-03-25 03:06:23.374925 | orchestrator | ok: [testbed-node-4] 2026-03-25 03:06:23.374933 | orchestrator | ok: [testbed-node-5] 2026-03-25 03:06:23.374942 | orchestrator | 2026-03-25 03:06:23.374950 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-25 03:06:23.374959 | orchestrator | Wednesday 25 March 2026 03:06:19 +0000 (0:00:00.323) 0:00:29.777 ******* 2026-03-25 03:06:23.374967 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-03-25 03:06:23.374976 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-03-25 03:06:23.374984 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-03-25 03:06:23.374993 | orchestrator | 2026-03-25 03:06:23.375001 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-03-25 03:06:23.375010 | orchestrator | Wednesday 25 March 2026 03:06:20 +0000 (0:00:00.712) 0:00:30.489 ******* 2026-03-25 03:06:23.375018 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-25 03:06:23.375027 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-25 03:06:23.375036 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-25 03:06:23.375044 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-03-25 03:06:23.375053 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => 
(item=testbed-node-4) 2026-03-25 03:06:23.375062 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-25 03:06:23.375071 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-25 03:06:23.375079 | orchestrator | 2026-03-25 03:06:23.375088 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-03-25 03:06:23.375096 | orchestrator | Wednesday 25 March 2026 03:06:21 +0000 (0:00:01.011) 0:00:31.500 ******* 2026-03-25 03:06:23.375105 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-25 03:06:23.375120 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-25 03:08:00.397710 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-25 03:08:00.397936 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-03-25 03:08:00.397991 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-25 03:08:00.398004 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-25 03:08:00.398087 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-25 03:08:00.398101 | orchestrator | 2026-03-25 03:08:00.398113 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************ 2026-03-25 03:08:00.398126 | orchestrator | Wednesday 25 March 2026 03:06:23 +0000 (0:00:01.975) 0:00:33.476 ******* 2026-03-25 03:08:00.398137 | orchestrator | skipping: [testbed-node-3] 2026-03-25 03:08:00.398149 | orchestrator | skipping: [testbed-node-4] 2026-03-25 03:08:00.398160 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5 2026-03-25 03:08:00.398171 | orchestrator | 2026-03-25 03:08:00.398184 | 
orchestrator | TASK [create openstack pool(s)] ************************************************ 2026-03-25 03:08:00.398197 | orchestrator | Wednesday 25 March 2026 03:06:23 +0000 (0:00:00.441) 0:00:33.917 ******* 2026-03-25 03:08:00.398213 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-25 03:08:00.398229 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-25 03:08:00.398264 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-25 03:08:00.398278 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-25 03:08:00.398291 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-25 03:08:00.398303 | orchestrator | 2026-03-25 03:08:00.398316 | orchestrator | TASK [generate keys] 
*********************************************************** 2026-03-25 03:08:00.398328 | orchestrator | Wednesday 25 March 2026 03:07:09 +0000 (0:00:45.961) 0:01:19.879 ******* 2026-03-25 03:08:00.398341 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-25 03:08:00.398354 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-25 03:08:00.398368 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-25 03:08:00.398380 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-25 03:08:00.398393 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-25 03:08:00.398406 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-25 03:08:00.398419 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}] 2026-03-25 03:08:00.398435 | orchestrator | 2026-03-25 03:08:00.398453 | orchestrator | TASK [get keys from monitors] ************************************************** 2026-03-25 03:08:00.398479 | orchestrator | Wednesday 25 March 2026 03:07:32 +0000 (0:00:22.731) 0:01:42.610 ******* 2026-03-25 03:08:00.398500 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-25 03:08:00.398538 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-25 03:08:00.398556 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-25 03:08:00.398573 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-25 03:08:00.398590 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-25 03:08:00.398609 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-25 03:08:00.398627 | orchestrator | 
ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-25 03:08:00.398646 | orchestrator | 2026-03-25 03:08:00.398665 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2026-03-25 03:08:00.398684 | orchestrator | Wednesday 25 March 2026 03:07:43 +0000 (0:00:10.781) 0:01:53.392 ******* 2026-03-25 03:08:00.398704 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-25 03:08:00.398754 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-25 03:08:00.398774 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-25 03:08:00.398794 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-25 03:08:00.398812 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-25 03:08:00.398831 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-25 03:08:00.398885 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-25 03:08:00.398907 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-25 03:08:00.398925 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-25 03:08:00.398942 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-25 03:08:00.398958 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-25 03:08:00.398973 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-25 03:08:00.398989 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-25 03:08:00.399005 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 
2026-03-25 03:08:00.399021 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-25 03:08:00.399038 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-25 03:08:00.399055 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-25 03:08:00.399073 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-25 03:08:00.399091 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}] 2026-03-25 03:08:00.399109 | orchestrator | 2026-03-25 03:08:00.399127 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-25 03:08:00.399160 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2026-03-25 03:08:00.399183 | orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2026-03-25 03:08:00.399202 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-03-25 03:08:00.399220 | orchestrator | 2026-03-25 03:08:00.399238 | orchestrator | 2026-03-25 03:08:00.399257 | orchestrator | 2026-03-25 03:08:00.399275 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-25 03:08:00.399294 | orchestrator | Wednesday 25 March 2026 03:07:59 +0000 (0:00:16.654) 0:02:10.047 ******* 2026-03-25 03:08:00.399310 | orchestrator | =============================================================================== 2026-03-25 03:08:00.399333 | orchestrator | create openstack pool(s) ----------------------------------------------- 45.96s 2026-03-25 03:08:00.399344 | orchestrator | generate keys ---------------------------------------------------------- 22.73s 2026-03-25 03:08:00.399355 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 16.65s 
2026-03-25 03:08:00.399365 | orchestrator | get keys from monitors ------------------------------------------------- 10.78s 2026-03-25 03:08:00.399376 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 2.32s 2026-03-25 03:08:00.399388 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 1.98s 2026-03-25 03:08:00.399398 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.58s 2026-03-25 03:08:00.399409 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 1.35s 2026-03-25 03:08:00.399420 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 1.20s 2026-03-25 03:08:00.399430 | orchestrator | ceph-facts : Get current fsid ------------------------------------------- 1.04s 2026-03-25 03:08:00.399441 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 1.01s 2026-03-25 03:08:00.399452 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.97s 2026-03-25 03:08:00.399462 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.93s 2026-03-25 03:08:00.399473 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.87s 2026-03-25 03:08:00.399484 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.85s 2026-03-25 03:08:00.399494 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.80s 2026-03-25 03:08:00.399505 | orchestrator | ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 0.76s 2026-03-25 03:08:00.399516 | orchestrator | ceph-facts : Include facts.yml ------------------------------------------ 0.76s 2026-03-25 03:08:00.399526 | orchestrator | ceph-facts : Check for a ceph mon socket -------------------------------- 0.75s 2026-03-25 
03:08:00.399537 | orchestrator | ceph-facts : Set_fact rgw_instances ------------------------------------- 0.71s 2026-03-25 03:08:03.189420 | orchestrator | 2026-03-25 03:08:03 | INFO  | Task 4b6ff08c-2553-4411-8853-dff3abda8c95 (copy-ceph-keys) was prepared for execution. 2026-03-25 03:08:03.189569 | orchestrator | 2026-03-25 03:08:03 | INFO  | It takes a moment until task 4b6ff08c-2553-4411-8853-dff3abda8c95 (copy-ceph-keys) has been started and output is visible here. 2026-03-25 03:08:44.490197 | orchestrator | 2026-03-25 03:08:44.490314 | orchestrator | PLAY [Copy ceph keys to the configuration repository] ************************** 2026-03-25 03:08:44.490329 | orchestrator | 2026-03-25 03:08:44.490344 | orchestrator | TASK [Check if ceph keys exist] ************************************************ 2026-03-25 03:08:44.490356 | orchestrator | Wednesday 25 March 2026 03:08:08 +0000 (0:00:00.186) 0:00:00.186 ******* 2026-03-25 03:08:44.490372 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2026-03-25 03:08:44.490389 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-03-25 03:08:44.490400 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-03-25 03:08:44.490410 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2026-03-25 03:08:44.490422 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-03-25 03:08:44.490433 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2026-03-25 03:08:44.490443 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2026-03-25 03:08:44.490453 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => 
(item=ceph.client.gnocchi.keyring) 2026-03-25 03:08:44.490492 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2026-03-25 03:08:44.490504 | orchestrator | 2026-03-25 03:08:44.490514 | orchestrator | TASK [Fetch all ceph keys] ***************************************************** 2026-03-25 03:08:44.490525 | orchestrator | Wednesday 25 March 2026 03:08:12 +0000 (0:00:04.557) 0:00:04.744 ******* 2026-03-25 03:08:44.490536 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2026-03-25 03:08:44.490563 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-03-25 03:08:44.490575 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-03-25 03:08:44.490586 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2026-03-25 03:08:44.490598 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-03-25 03:08:44.490608 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2026-03-25 03:08:44.490618 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2026-03-25 03:08:44.490630 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2026-03-25 03:08:44.490641 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2026-03-25 03:08:44.490652 | orchestrator | 2026-03-25 03:08:44.490664 | orchestrator | TASK [Create share directory] ************************************************** 2026-03-25 03:08:44.490674 | orchestrator | Wednesday 25 March 2026 03:08:16 +0000 (0:00:04.165) 0:00:08.910 ******* 2026-03-25 03:08:44.490686 
| orchestrator | changed: [testbed-manager -> localhost] 2026-03-25 03:08:44.490698 | orchestrator | 2026-03-25 03:08:44.490710 | orchestrator | TASK [Write ceph keys to the share directory] ********************************** 2026-03-25 03:08:44.490722 | orchestrator | Wednesday 25 March 2026 03:08:18 +0000 (0:00:01.090) 0:00:10.000 ******* 2026-03-25 03:08:44.490733 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring) 2026-03-25 03:08:44.490745 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-03-25 03:08:44.490756 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-03-25 03:08:44.490768 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring) 2026-03-25 03:08:44.490779 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-03-25 03:08:44.490790 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring) 2026-03-25 03:08:44.490801 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring) 2026-03-25 03:08:44.490812 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring) 2026-03-25 03:08:44.490820 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring) 2026-03-25 03:08:44.490827 | orchestrator | 2026-03-25 03:08:44.490833 | orchestrator | TASK [Check if target directories exist] *************************************** 2026-03-25 03:08:44.490840 | orchestrator | Wednesday 25 March 2026 03:08:33 +0000 (0:00:15.297) 0:00:25.298 ******* 2026-03-25 03:08:44.490846 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/infrastructure/files/ceph) 2026-03-25 03:08:44.490887 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-volume) 
2026-03-25 03:08:44.490895 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup) 2026-03-25 03:08:44.490902 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup) 2026-03-25 03:08:44.490932 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova) 2026-03-25 03:08:44.490958 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova) 2026-03-25 03:08:44.490970 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/glance) 2026-03-25 03:08:44.490978 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/gnocchi) 2026-03-25 03:08:44.490985 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/manila) 2026-03-25 03:08:44.490991 | orchestrator | 2026-03-25 03:08:44.490998 | orchestrator | TASK [Write ceph keys to the configuration directory] ************************** 2026-03-25 03:08:44.491005 | orchestrator | Wednesday 25 March 2026 03:08:36 +0000 (0:00:03.446) 0:00:28.745 ******* 2026-03-25 03:08:44.491012 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring) 2026-03-25 03:08:44.491019 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-03-25 03:08:44.491029 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-03-25 03:08:44.491039 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring) 2026-03-25 03:08:44.491045 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-03-25 03:08:44.491052 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring) 2026-03-25 03:08:44.491059 | orchestrator | changed: [testbed-manager] => 
(item=ceph.client.glance.keyring) 2026-03-25 03:08:44.491065 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring) 2026-03-25 03:08:44.491072 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring) 2026-03-25 03:08:44.491078 | orchestrator | 2026-03-25 03:08:44.491085 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-25 03:08:44.491098 | orchestrator | testbed-manager : ok=6  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-25 03:08:44.491106 | orchestrator | 2026-03-25 03:08:44.491113 | orchestrator | 2026-03-25 03:08:44.491120 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-25 03:08:44.491126 | orchestrator | Wednesday 25 March 2026 03:08:44 +0000 (0:00:07.345) 0:00:36.090 ******* 2026-03-25 03:08:44.491133 | orchestrator | =============================================================================== 2026-03-25 03:08:44.491139 | orchestrator | Write ceph keys to the share directory --------------------------------- 15.30s 2026-03-25 03:08:44.491146 | orchestrator | Write ceph keys to the configuration directory -------------------------- 7.35s 2026-03-25 03:08:44.491152 | orchestrator | Check if ceph keys exist ------------------------------------------------ 4.56s 2026-03-25 03:08:44.491162 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.17s 2026-03-25 03:08:44.491173 | orchestrator | Check if target directories exist --------------------------------------- 3.45s 2026-03-25 03:08:44.491184 | orchestrator | Create share directory -------------------------------------------------- 1.09s 2026-03-25 03:08:57.388259 | orchestrator | 2026-03-25 03:08:57 | INFO  | Task 0e4fe448-27a7-4e9b-828b-1f08c7ac302f (cephclient) was prepared for execution. 
2026-03-25 03:08:57.388374 | orchestrator | 2026-03-25 03:08:57 | INFO  | It takes a moment until task 0e4fe448-27a7-4e9b-828b-1f08c7ac302f (cephclient) has been started and output is visible here. 2026-03-25 03:09:56.733648 | orchestrator | 2026-03-25 03:09:56.733831 | orchestrator | PLAY [Apply role cephclient] *************************************************** 2026-03-25 03:09:56.733856 | orchestrator | 2026-03-25 03:09:56.733942 | orchestrator | TASK [osism.services.cephclient : Include container tasks] ********************* 2026-03-25 03:09:56.733960 | orchestrator | Wednesday 25 March 2026 03:09:02 +0000 (0:00:00.271) 0:00:00.271 ******* 2026-03-25 03:09:56.733976 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager 2026-03-25 03:09:56.734162 | orchestrator | 2026-03-25 03:09:56.734182 | orchestrator | TASK [osism.services.cephclient : Create required directories] ***************** 2026-03-25 03:09:56.734199 | orchestrator | Wednesday 25 March 2026 03:09:02 +0000 (0:00:00.254) 0:00:00.526 ******* 2026-03-25 03:09:56.734216 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration) 2026-03-25 03:09:56.734232 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data) 2026-03-25 03:09:56.734250 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient) 2026-03-25 03:09:56.734266 | orchestrator | 2026-03-25 03:09:56.734283 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ******************** 2026-03-25 03:09:56.734299 | orchestrator | Wednesday 25 March 2026 03:09:03 +0000 (0:00:01.349) 0:00:01.875 ******* 2026-03-25 03:09:56.734317 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'}) 2026-03-25 03:09:56.734333 | orchestrator | 2026-03-25 03:09:56.734349 | orchestrator | TASK [osism.services.cephclient : Copy keyring 
file] *************************** 2026-03-25 03:09:56.734365 | orchestrator | Wednesday 25 March 2026 03:09:05 +0000 (0:00:01.542) 0:00:03.418 ******* 2026-03-25 03:09:56.734381 | orchestrator | changed: [testbed-manager] 2026-03-25 03:09:56.734397 | orchestrator | 2026-03-25 03:09:56.734413 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] **************** 2026-03-25 03:09:56.734430 | orchestrator | Wednesday 25 March 2026 03:09:06 +0000 (0:00:00.923) 0:00:04.342 ******* 2026-03-25 03:09:56.734447 | orchestrator | changed: [testbed-manager] 2026-03-25 03:09:56.734464 | orchestrator | 2026-03-25 03:09:56.734480 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] ******************* 2026-03-25 03:09:56.734497 | orchestrator | Wednesday 25 March 2026 03:09:07 +0000 (0:00:01.002) 0:00:05.344 ******* 2026-03-25 03:09:56.734513 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left). 2026-03-25 03:09:56.734529 | orchestrator | ok: [testbed-manager] 2026-03-25 03:09:56.734545 | orchestrator | 2026-03-25 03:09:56.734561 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************ 2026-03-25 03:09:56.734577 | orchestrator | Wednesday 25 March 2026 03:09:46 +0000 (0:00:38.607) 0:00:43.951 ******* 2026-03-25 03:09:56.734594 | orchestrator | changed: [testbed-manager] => (item=ceph) 2026-03-25 03:09:56.734610 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool) 2026-03-25 03:09:56.734626 | orchestrator | changed: [testbed-manager] => (item=rados) 2026-03-25 03:09:56.734642 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin) 2026-03-25 03:09:56.734659 | orchestrator | changed: [testbed-manager] => (item=rbd) 2026-03-25 03:09:56.734677 | orchestrator | 2026-03-25 03:09:56.734693 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ****************** 2026-03-25 03:09:56.734710 | 
orchestrator | Wednesday 25 March 2026 03:09:50 +0000 (0:00:04.480) 0:00:48.432 ******* 2026-03-25 03:09:56.734726 | orchestrator | ok: [testbed-manager] => (item=crushtool) 2026-03-25 03:09:56.734742 | orchestrator | 2026-03-25 03:09:56.734759 | orchestrator | TASK [osism.services.cephclient : Include package tasks] *********************** 2026-03-25 03:09:56.734775 | orchestrator | Wednesday 25 March 2026 03:09:50 +0000 (0:00:00.469) 0:00:48.901 ******* 2026-03-25 03:09:56.734791 | orchestrator | skipping: [testbed-manager] 2026-03-25 03:09:56.734808 | orchestrator | 2026-03-25 03:09:56.734824 | orchestrator | TASK [osism.services.cephclient : Include rook task] *************************** 2026-03-25 03:09:56.734840 | orchestrator | Wednesday 25 March 2026 03:09:51 +0000 (0:00:00.151) 0:00:49.053 ******* 2026-03-25 03:09:56.734853 | orchestrator | skipping: [testbed-manager] 2026-03-25 03:09:56.734862 | orchestrator | 2026-03-25 03:09:56.734898 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] ******* 2026-03-25 03:09:56.734909 | orchestrator | Wednesday 25 March 2026 03:09:51 +0000 (0:00:00.562) 0:00:49.615 ******* 2026-03-25 03:09:56.734958 | orchestrator | changed: [testbed-manager] 2026-03-25 03:09:56.734968 | orchestrator | 2026-03-25 03:09:56.734978 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] *** 2026-03-25 03:09:56.735008 | orchestrator | Wednesday 25 March 2026 03:09:53 +0000 (0:00:01.476) 0:00:51.092 ******* 2026-03-25 03:09:56.735018 | orchestrator | changed: [testbed-manager] 2026-03-25 03:09:56.735028 | orchestrator | 2026-03-25 03:09:56.735037 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for a healthy service] ******* 2026-03-25 03:09:56.735047 | orchestrator | Wednesday 25 March 2026 03:09:53 +0000 (0:00:00.804) 0:00:51.897 ******* 2026-03-25 03:09:56.735056 | orchestrator | changed: [testbed-manager] 2026-03-25 03:09:56.735066 | 
orchestrator | 2026-03-25 03:09:56.735075 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] ***** 2026-03-25 03:09:56.735085 | orchestrator | Wednesday 25 March 2026 03:09:54 +0000 (0:00:00.616) 0:00:52.513 ******* 2026-03-25 03:09:56.735095 | orchestrator | ok: [testbed-manager] => (item=ceph) 2026-03-25 03:09:56.735105 | orchestrator | ok: [testbed-manager] => (item=rados) 2026-03-25 03:09:56.735114 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin) 2026-03-25 03:09:56.735124 | orchestrator | ok: [testbed-manager] => (item=rbd) 2026-03-25 03:09:56.735133 | orchestrator | 2026-03-25 03:09:56.735144 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-25 03:09:56.735154 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-25 03:09:56.735165 | orchestrator | 2026-03-25 03:09:56.735175 | orchestrator | 2026-03-25 03:09:56.735210 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-25 03:09:56.735221 | orchestrator | Wednesday 25 March 2026 03:09:56 +0000 (0:00:01.659) 0:00:54.172 ******* 2026-03-25 03:09:56.735230 | orchestrator | =============================================================================== 2026-03-25 03:09:56.735240 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 38.61s 2026-03-25 03:09:56.735250 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 4.48s 2026-03-25 03:09:56.735259 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.66s 2026-03-25 03:09:56.735269 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.54s 2026-03-25 03:09:56.735278 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.48s 2026-03-25 03:09:56.735288 | 
orchestrator | osism.services.cephclient : Create required directories ----------------- 1.35s 2026-03-25 03:09:56.735297 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 1.00s 2026-03-25 03:09:56.735307 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 0.92s 2026-03-25 03:09:56.735316 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.80s 2026-03-25 03:09:56.735326 | orchestrator | osism.services.cephclient : Wait for a healthy service ------------------ 0.62s 2026-03-25 03:09:56.735335 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.56s 2026-03-25 03:09:56.735345 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.47s 2026-03-25 03:09:56.735354 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.25s 2026-03-25 03:09:56.735364 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.15s 2026-03-25 03:09:59.410394 | orchestrator | 2026-03-25 03:09:59 | INFO  | Task 3fc394a8-dafe-46aa-975d-2df4ba957c60 (ceph-bootstrap-dashboard) was prepared for execution. 2026-03-25 03:09:59.410504 | orchestrator | 2026-03-25 03:09:59 | INFO  | It takes a moment until task 3fc394a8-dafe-46aa-975d-2df4ba957c60 (ceph-bootstrap-dashboard) has been started and output is visible here. 
2026-03-25 03:11:17.517221 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-03-25 03:11:17.517342 | orchestrator | 2.16.14 2026-03-25 03:11:17.517352 | orchestrator | 2026-03-25 03:11:17.517358 | orchestrator | PLAY [Bootstrap ceph dashboard] ************************************************ 2026-03-25 03:11:17.517363 | orchestrator | 2026-03-25 03:11:17.517367 | orchestrator | TASK [Disable the ceph dashboard] ********************************************** 2026-03-25 03:11:17.517391 | orchestrator | Wednesday 25 March 2026 03:10:04 +0000 (0:00:00.324) 0:00:00.324 ******* 2026-03-25 03:11:17.517396 | orchestrator | changed: [testbed-manager] 2026-03-25 03:11:17.517401 | orchestrator | 2026-03-25 03:11:17.517405 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ****************************************** 2026-03-25 03:11:17.517409 | orchestrator | Wednesday 25 March 2026 03:10:06 +0000 (0:00:01.845) 0:00:02.169 ******* 2026-03-25 03:11:17.517414 | orchestrator | changed: [testbed-manager] 2026-03-25 03:11:17.517418 | orchestrator | 2026-03-25 03:11:17.517422 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] *********************************** 2026-03-25 03:11:17.517426 | orchestrator | Wednesday 25 March 2026 03:10:07 +0000 (0:00:01.111) 0:00:03.280 ******* 2026-03-25 03:11:17.517430 | orchestrator | changed: [testbed-manager] 2026-03-25 03:11:17.517434 | orchestrator | 2026-03-25 03:11:17.517439 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ******************************** 2026-03-25 03:11:17.517443 | orchestrator | Wednesday 25 March 2026 03:10:08 +0000 (0:00:01.134) 0:00:04.415 ******* 2026-03-25 03:11:17.517447 | orchestrator | changed: [testbed-manager] 2026-03-25 03:11:17.517451 | orchestrator | 2026-03-25 03:11:17.517456 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] **************************** 2026-03-25 03:11:17.517463 | orchestrator | Wednesday 25 March 
2026 03:10:09 +0000 (0:00:01.316) 0:00:05.732 ******* 2026-03-25 03:11:17.517469 | orchestrator | changed: [testbed-manager] 2026-03-25 03:11:17.517476 | orchestrator | 2026-03-25 03:11:17.517482 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] ********************** 2026-03-25 03:11:17.517488 | orchestrator | Wednesday 25 March 2026 03:10:10 +0000 (0:00:01.120) 0:00:06.853 ******* 2026-03-25 03:11:17.517509 | orchestrator | changed: [testbed-manager] 2026-03-25 03:11:17.517515 | orchestrator | 2026-03-25 03:11:17.517521 | orchestrator | TASK [Enable the ceph dashboard] *********************************************** 2026-03-25 03:11:17.517527 | orchestrator | Wednesday 25 March 2026 03:10:11 +0000 (0:00:01.146) 0:00:08.000 ******* 2026-03-25 03:11:17.517533 | orchestrator | changed: [testbed-manager] 2026-03-25 03:11:17.517539 | orchestrator | 2026-03-25 03:11:17.517545 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] ************************* 2026-03-25 03:11:17.517551 | orchestrator | Wednesday 25 March 2026 03:10:14 +0000 (0:00:02.094) 0:00:10.094 ******* 2026-03-25 03:11:17.517557 | orchestrator | changed: [testbed-manager] 2026-03-25 03:11:17.517562 | orchestrator | 2026-03-25 03:11:17.517568 | orchestrator | TASK [Create admin user] ******************************************************* 2026-03-25 03:11:17.517575 | orchestrator | Wednesday 25 March 2026 03:10:15 +0000 (0:00:01.272) 0:00:11.366 ******* 2026-03-25 03:11:17.517582 | orchestrator | changed: [testbed-manager] 2026-03-25 03:11:17.517587 | orchestrator | 2026-03-25 03:11:17.517594 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] *********************** 2026-03-25 03:11:17.517600 | orchestrator | Wednesday 25 March 2026 03:10:52 +0000 (0:00:37.079) 0:00:48.446 ******* 2026-03-25 03:11:17.517606 | orchestrator | skipping: [testbed-manager] 2026-03-25 03:11:17.517612 | orchestrator | 2026-03-25 03:11:17.517619 | orchestrator | 
PLAY [Restart ceph manager services] ******************************************* 2026-03-25 03:11:17.517626 | orchestrator | 2026-03-25 03:11:17.517632 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-03-25 03:11:17.517639 | orchestrator | Wednesday 25 March 2026 03:10:52 +0000 (0:00:00.185) 0:00:48.632 ******* 2026-03-25 03:11:17.517645 | orchestrator | changed: [testbed-node-0] 2026-03-25 03:11:17.517651 | orchestrator | 2026-03-25 03:11:17.517657 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-03-25 03:11:17.517665 | orchestrator | 2026-03-25 03:11:17.517672 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-03-25 03:11:17.517679 | orchestrator | Wednesday 25 March 2026 03:10:54 +0000 (0:00:01.737) 0:00:50.369 ******* 2026-03-25 03:11:17.517685 | orchestrator | changed: [testbed-node-1] 2026-03-25 03:11:17.517692 | orchestrator | 2026-03-25 03:11:17.517699 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-03-25 03:11:17.517715 | orchestrator | 2026-03-25 03:11:17.517723 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-03-25 03:11:17.517730 | orchestrator | Wednesday 25 March 2026 03:11:05 +0000 (0:00:11.303) 0:01:01.673 ******* 2026-03-25 03:11:17.517737 | orchestrator | changed: [testbed-node-2] 2026-03-25 03:11:17.517744 | orchestrator | 2026-03-25 03:11:17.517751 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-25 03:11:17.517760 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-25 03:11:17.517768 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-25 03:11:17.517776 | orchestrator | testbed-node-1 : ok=1  changed=1  
unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-25 03:11:17.517782 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-25 03:11:17.517788 | orchestrator | 2026-03-25 03:11:17.517796 | orchestrator | 2026-03-25 03:11:17.517800 | orchestrator | 2026-03-25 03:11:17.517805 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-25 03:11:17.517809 | orchestrator | Wednesday 25 March 2026 03:11:17 +0000 (0:00:11.355) 0:01:13.029 ******* 2026-03-25 03:11:17.517813 | orchestrator | =============================================================================== 2026-03-25 03:11:17.517817 | orchestrator | Create admin user ------------------------------------------------------ 37.08s 2026-03-25 03:11:17.517836 | orchestrator | Restart ceph manager service ------------------------------------------- 24.40s 2026-03-25 03:11:17.517841 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 2.09s 2026-03-25 03:11:17.517845 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 1.85s 2026-03-25 03:11:17.517849 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.32s 2026-03-25 03:11:17.517853 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.27s 2026-03-25 03:11:17.517857 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 1.15s 2026-03-25 03:11:17.517861 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 1.13s 2026-03-25 03:11:17.517865 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 1.12s 2026-03-25 03:11:17.517871 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 1.11s 2026-03-25 03:11:17.517878 | orchestrator | Remove temporary file for 
ceph_dashboard_password ----------------------- 0.19s 2026-03-25 03:11:17.925803 | orchestrator | + sh -c /opt/configuration/scripts/deploy/300-openstack.sh 2026-03-25 03:11:20.375489 | orchestrator | 2026-03-25 03:11:20 | INFO  | Task ec51b163-2a74-4742-b5d0-2dd2ffa0f839 (keystone) was prepared for execution. 2026-03-25 03:11:20.375599 | orchestrator | 2026-03-25 03:11:20 | INFO  | It takes a moment until task ec51b163-2a74-4742-b5d0-2dd2ffa0f839 (keystone) has been started and output is visible here. 2026-03-25 03:11:28.470289 | orchestrator | 2026-03-25 03:11:28.470395 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-25 03:11:28.470406 | orchestrator | 2026-03-25 03:11:28.470413 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-25 03:11:28.470433 | orchestrator | Wednesday 25 March 2026 03:11:25 +0000 (0:00:00.324) 0:00:00.324 ******* 2026-03-25 03:11:28.470465 | orchestrator | ok: [testbed-node-0] 2026-03-25 03:11:28.470473 | orchestrator | ok: [testbed-node-1] 2026-03-25 03:11:28.470480 | orchestrator | ok: [testbed-node-2] 2026-03-25 03:11:28.470486 | orchestrator | 2026-03-25 03:11:28.470492 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-25 03:11:28.470498 | orchestrator | Wednesday 25 March 2026 03:11:25 +0000 (0:00:00.345) 0:00:00.670 ******* 2026-03-25 03:11:28.470522 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2026-03-25 03:11:28.470529 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2026-03-25 03:11:28.470534 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2026-03-25 03:11:28.470540 | orchestrator | 2026-03-25 03:11:28.470546 | orchestrator | PLAY [Apply role keystone] ***************************************************** 2026-03-25 03:11:28.470552 | orchestrator | 2026-03-25 03:11:28.470558 | orchestrator | TASK 
[keystone : include_tasks] ************************************************ 2026-03-25 03:11:28.470563 | orchestrator | Wednesday 25 March 2026 03:11:26 +0000 (0:00:00.494) 0:00:01.165 ******* 2026-03-25 03:11:28.470570 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-25 03:11:28.470576 | orchestrator | 2026-03-25 03:11:28.470582 | orchestrator | TASK [keystone : Ensuring config directories exist] **************************** 2026-03-25 03:11:28.470588 | orchestrator | Wednesday 25 March 2026 03:11:26 +0000 (0:00:00.653) 0:00:01.818 ******* 2026-03-25 03:11:28.470598 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-25 03:11:28.470608 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-25 03:11:28.470633 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-25 03:11:28.470645 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': 
{'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-25 03:11:28.470654 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-25 03:11:28.470660 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-25 03:11:28.470666 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 
'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-25 03:11:28.470673 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-25 03:11:28.470679 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-25 03:11:28.470696 | orchestrator | 2026-03-25 03:11:28.470706 | orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 
2026-03-25 03:11:28.470722 | orchestrator | Wednesday 25 March 2026 03:11:28 +0000 (0:00:01.709) 0:00:03.528 ******* 2026-03-25 03:11:34.392827 | orchestrator | skipping: [testbed-node-0] 2026-03-25 03:11:34.392969 | orchestrator | 2026-03-25 03:11:34.392986 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2026-03-25 03:11:34.393013 | orchestrator | Wednesday 25 March 2026 03:11:28 +0000 (0:00:00.347) 0:00:03.875 ******* 2026-03-25 03:11:34.393023 | orchestrator | skipping: [testbed-node-0] 2026-03-25 03:11:34.393033 | orchestrator | skipping: [testbed-node-1] 2026-03-25 03:11:34.393043 | orchestrator | skipping: [testbed-node-2] 2026-03-25 03:11:34.393052 | orchestrator | 2026-03-25 03:11:34.393062 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] ********* 2026-03-25 03:11:34.393072 | orchestrator | Wednesday 25 March 2026 03:11:29 +0000 (0:00:00.365) 0:00:04.241 ******* 2026-03-25 03:11:34.393081 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-25 03:11:34.393091 | orchestrator | 2026-03-25 03:11:34.393100 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-03-25 03:11:34.393109 | orchestrator | Wednesday 25 March 2026 03:11:30 +0000 (0:00:00.901) 0:00:05.142 ******* 2026-03-25 03:11:34.393120 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-25 03:11:34.393129 | orchestrator | 2026-03-25 03:11:34.393139 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2026-03-25 03:11:34.393148 | orchestrator | Wednesday 25 March 2026 03:11:30 +0000 (0:00:00.652) 0:00:05.795 ******* 2026-03-25 03:11:34.393163 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-25 03:11:34.393179 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-25 03:11:34.393190 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-25 03:11:34.393250 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-25 03:11:34.393273 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-25 03:11:34.393292 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-25 03:11:34.393309 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-25 03:11:34.393328 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-25 03:11:34.393356 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-25 03:11:34.393373 | orchestrator | 2026-03-25 03:11:34.393389 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2026-03-25 03:11:34.393407 | orchestrator | Wednesday 25 March 2026 03:11:33 +0000 (0:00:03.025) 0:00:08.821 ******* 2026-03-25 03:11:34.393439 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 
'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-25 03:11:35.343915 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-25 03:11:35.344003 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-25 03:11:35.344014 | orchestrator | skipping: [testbed-node-0] 2026-03-25 03:11:35.344023 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-25 03:11:35.344049 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-25 03:11:35.344061 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-25 03:11:35.344068 | orchestrator | skipping: [testbed-node-1] 2026-03-25 03:11:35.344091 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-25 03:11:35.344097 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 
'timeout': '30'}}})  2026-03-25 03:11:35.344101 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-25 03:11:35.344109 | orchestrator | skipping: [testbed-node-2] 2026-03-25 03:11:35.344116 | orchestrator | 2026-03-25 03:11:35.344123 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2026-03-25 03:11:35.344131 | orchestrator | Wednesday 25 March 2026 03:11:34 +0000 (0:00:00.640) 0:00:09.461 ******* 2026-03-25 03:11:35.344137 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-25 03:11:35.344147 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-25 03:11:35.344161 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-25 03:11:38.661531 | orchestrator | skipping: [testbed-node-0] 2026-03-25 03:11:38.661656 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-25 03:11:38.661696 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-25 03:11:38.661746 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-25 03:11:38.661766 | 
orchestrator | skipping: [testbed-node-1] 2026-03-25 03:11:38.661804 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-25 03:11:38.661824 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-25 03:11:38.661855 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-25 03:11:38.661866 | orchestrator | skipping: [testbed-node-2] 2026-03-25 03:11:38.661881 | orchestrator | 2026-03-25 03:11:38.661958 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2026-03-25 03:11:38.661976 | orchestrator | Wednesday 25 March 2026 03:11:35 +0000 (0:00:00.955) 0:00:10.416 ******* 2026-03-25 03:11:38.661995 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-25 03:11:38.662150 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': 
{'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-25 03:11:38.662180 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 
'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-25 03:11:38.662216 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-25 03:11:43.911094 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-25 03:11:43.911210 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 
2026-03-25 03:11:43.911222 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-25 03:11:43.911232 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-25 03:11:43.911255 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-25 
03:11:43.911269 | orchestrator | 2026-03-25 03:11:43.911279 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2026-03-25 03:11:43.911287 | orchestrator | Wednesday 25 March 2026 03:11:38 +0000 (0:00:03.311) 0:00:13.728 ******* 2026-03-25 03:11:43.911312 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-25 03:11:43.911322 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  
2026-03-25 03:11:43.911339 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-25 03:11:43.911347 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-25 03:11:43.911359 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-25 03:11:43.911374 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-25 03:11:47.847435 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-25 03:11:47.847587 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-25 03:11:47.847608 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-25 03:11:47.847618 | orchestrator | 2026-03-25 03:11:47.847629 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2026-03-25 03:11:47.847639 | orchestrator | Wednesday 25 March 2026 03:11:43 +0000 (0:00:05.252) 0:00:18.980 ******* 2026-03-25 03:11:47.847647 | orchestrator | changed: [testbed-node-0] 2026-03-25 03:11:47.847657 | orchestrator | changed: [testbed-node-1] 2026-03-25 03:11:47.847665 | orchestrator | changed: [testbed-node-2] 2026-03-25 03:11:47.847672 | orchestrator | 
2026-03-25 03:11:47.847681 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] ************* 2026-03-25 03:11:47.847689 | orchestrator | Wednesday 25 March 2026 03:11:45 +0000 (0:00:01.467) 0:00:20.447 ******* 2026-03-25 03:11:47.847697 | orchestrator | skipping: [testbed-node-0] 2026-03-25 03:11:47.847705 | orchestrator | skipping: [testbed-node-1] 2026-03-25 03:11:47.847712 | orchestrator | skipping: [testbed-node-2] 2026-03-25 03:11:47.847720 | orchestrator | 2026-03-25 03:11:47.847728 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2026-03-25 03:11:47.847736 | orchestrator | Wednesday 25 March 2026 03:11:46 +0000 (0:00:00.873) 0:00:21.321 ******* 2026-03-25 03:11:47.847744 | orchestrator | skipping: [testbed-node-0] 2026-03-25 03:11:47.847752 | orchestrator | skipping: [testbed-node-1] 2026-03-25 03:11:47.847759 | orchestrator | skipping: [testbed-node-2] 2026-03-25 03:11:47.847767 | orchestrator | 2026-03-25 03:11:47.847789 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2026-03-25 03:11:47.847797 | orchestrator | Wednesday 25 March 2026 03:11:46 +0000 (0:00:00.584) 0:00:21.906 ******* 2026-03-25 03:11:47.847805 | orchestrator | skipping: [testbed-node-0] 2026-03-25 03:11:47.847813 | orchestrator | skipping: [testbed-node-1] 2026-03-25 03:11:47.847820 | orchestrator | skipping: [testbed-node-2] 2026-03-25 03:11:47.847828 | orchestrator | 2026-03-25 03:11:47.847837 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2026-03-25 03:11:47.847845 | orchestrator | Wednesday 25 March 2026 03:11:47 +0000 (0:00:00.371) 0:00:22.277 ******* 2026-03-25 03:11:47.847874 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-25 03:11:47.847936 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-25 03:11:47.847949 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-25 03:11:47.847959 | orchestrator | skipping: [testbed-node-0] 2026-03-25 03:11:47.847969 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-25 03:11:47.847984 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-25 03:11:47.847994 | orchestrator | skipping: [testbed-node-1] 
=> (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-25 03:11:47.848013 | orchestrator | skipping: [testbed-node-1] 2026-03-25 03:11:47.848033 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-25 03:12:07.386194 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 
'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-25 03:12:07.386296 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-25 03:12:07.386308 | orchestrator | skipping: [testbed-node-2] 2026-03-25 03:12:07.386317 | orchestrator | 2026-03-25 03:12:07.386324 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-03-25 03:12:07.386332 | orchestrator | Wednesday 25 March 2026 03:11:47 +0000 (0:00:00.637) 0:00:22.914 ******* 2026-03-25 03:12:07.386339 | orchestrator | skipping: [testbed-node-0] 2026-03-25 03:12:07.386345 | orchestrator | skipping: [testbed-node-1] 2026-03-25 03:12:07.386351 | orchestrator | skipping: [testbed-node-2] 2026-03-25 03:12:07.386358 | orchestrator | 2026-03-25 03:12:07.386364 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2026-03-25 03:12:07.386370 | orchestrator | Wednesday 25 March 2026 03:11:48 +0000 (0:00:00.345) 0:00:23.260 ******* 2026-03-25 03:12:07.386377 | orchestrator | changed: [testbed-node-0] => 
(item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-03-25 03:12:07.386385 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-03-25 03:12:07.386409 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-03-25 03:12:07.386416 | orchestrator | 2026-03-25 03:12:07.386435 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2026-03-25 03:12:07.386441 | orchestrator | Wednesday 25 March 2026 03:11:50 +0000 (0:00:01.922) 0:00:25.182 ******* 2026-03-25 03:12:07.386447 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-25 03:12:07.386454 | orchestrator | 2026-03-25 03:12:07.386460 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ****************************** 2026-03-25 03:12:07.386466 | orchestrator | Wednesday 25 March 2026 03:11:51 +0000 (0:00:01.101) 0:00:26.284 ******* 2026-03-25 03:12:07.386472 | orchestrator | skipping: [testbed-node-0] 2026-03-25 03:12:07.386478 | orchestrator | skipping: [testbed-node-1] 2026-03-25 03:12:07.386483 | orchestrator | skipping: [testbed-node-2] 2026-03-25 03:12:07.386489 | orchestrator | 2026-03-25 03:12:07.386495 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] ***************** 2026-03-25 03:12:07.386501 | orchestrator | Wednesday 25 March 2026 03:11:51 +0000 (0:00:00.604) 0:00:26.889 ******* 2026-03-25 03:12:07.386507 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-25 03:12:07.386513 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-03-25 03:12:07.386520 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-03-25 03:12:07.386526 | orchestrator | 2026-03-25 03:12:07.386533 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] *** 2026-03-25 03:12:07.386540 | orchestrator | Wednesday 25 March 2026 03:11:52 +0000 (0:00:01.146) 
0:00:28.035 ******* 2026-03-25 03:12:07.386546 | orchestrator | ok: [testbed-node-0] 2026-03-25 03:12:07.386554 | orchestrator | ok: [testbed-node-1] 2026-03-25 03:12:07.386560 | orchestrator | ok: [testbed-node-2] 2026-03-25 03:12:07.386566 | orchestrator | 2026-03-25 03:12:07.386573 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 2026-03-25 03:12:07.386579 | orchestrator | Wednesday 25 March 2026 03:11:53 +0000 (0:00:00.617) 0:00:28.653 ******* 2026-03-25 03:12:07.386585 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-03-25 03:12:07.386592 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-03-25 03:12:07.386598 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-03-25 03:12:07.386605 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-03-25 03:12:07.386611 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-03-25 03:12:07.386617 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-03-25 03:12:07.386624 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-03-25 03:12:07.386630 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-03-25 03:12:07.386650 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-03-25 03:12:07.386657 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-03-25 03:12:07.386663 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-03-25 
03:12:07.386669 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-03-25 03:12:07.386675 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-03-25 03:12:07.386681 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-03-25 03:12:07.386687 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-03-25 03:12:07.386694 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-25 03:12:07.386707 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-25 03:12:07.386713 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-25 03:12:07.386720 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-25 03:12:07.386726 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-25 03:12:07.386732 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-25 03:12:07.386739 | orchestrator | 2026-03-25 03:12:07.386745 | orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2026-03-25 03:12:07.386751 | orchestrator | Wednesday 25 March 2026 03:12:02 +0000 (0:00:08.834) 0:00:37.487 ******* 2026-03-25 03:12:07.386757 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-25 03:12:07.386764 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-25 03:12:07.386770 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-25 03:12:07.386777 
| orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-25 03:12:07.386783 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-25 03:12:07.386790 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-25 03:12:07.386796 | orchestrator | 2026-03-25 03:12:07.386803 | orchestrator | TASK [keystone : Check keystone containers] ************************************ 2026-03-25 03:12:07.386813 | orchestrator | Wednesday 25 March 2026 03:12:05 +0000 (0:00:02.655) 0:00:40.143 ******* 2026-03-25 03:12:07.386821 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-25 03:12:07.386835 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-25 03:13:35.250113 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-25 03:13:35.250226 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': 
{'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-25 03:13:35.250247 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-25 03:13:35.250254 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-25 03:13:35.250260 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 
'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-25 03:13:35.250279 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-25 03:13:35.250291 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-25 03:13:35.250297 | orchestrator | 2026-03-25 03:13:35.250304 | orchestrator | TASK [keystone : include_tasks] ************************************************ 
2026-03-25 03:13:35.250310 | orchestrator | Wednesday 25 March 2026 03:12:07 +0000 (0:00:02.312) 0:00:42.455 ******* 2026-03-25 03:13:35.250316 | orchestrator | skipping: [testbed-node-0] 2026-03-25 03:13:35.250322 | orchestrator | skipping: [testbed-node-1] 2026-03-25 03:13:35.250327 | orchestrator | skipping: [testbed-node-2] 2026-03-25 03:13:35.250332 | orchestrator | 2026-03-25 03:13:35.250337 | orchestrator | TASK [keystone : Creating keystone database] *********************************** 2026-03-25 03:13:35.250343 | orchestrator | Wednesday 25 March 2026 03:12:07 +0000 (0:00:00.601) 0:00:43.057 ******* 2026-03-25 03:13:35.250348 | orchestrator | changed: [testbed-node-0] 2026-03-25 03:13:35.250353 | orchestrator | 2026-03-25 03:13:35.250358 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ****** 2026-03-25 03:13:35.250363 | orchestrator | Wednesday 25 March 2026 03:12:10 +0000 (0:00:02.123) 0:00:45.180 ******* 2026-03-25 03:13:35.250368 | orchestrator | changed: [testbed-node-0] 2026-03-25 03:13:35.250373 | orchestrator | 2026-03-25 03:13:35.250378 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] ********** 2026-03-25 03:13:35.250387 | orchestrator | Wednesday 25 March 2026 03:12:12 +0000 (0:00:02.100) 0:00:47.281 ******* 2026-03-25 03:13:35.250395 | orchestrator | ok: [testbed-node-1] 2026-03-25 03:13:35.250404 | orchestrator | ok: [testbed-node-0] 2026-03-25 03:13:35.250412 | orchestrator | ok: [testbed-node-2] 2026-03-25 03:13:35.250421 | orchestrator | 2026-03-25 03:13:35.250434 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] ***************** 2026-03-25 03:13:35.250443 | orchestrator | Wednesday 25 March 2026 03:12:13 +0000 (0:00:00.854) 0:00:48.136 ******* 2026-03-25 03:13:35.250452 | orchestrator | ok: [testbed-node-0] 2026-03-25 03:13:35.250460 | orchestrator | ok: [testbed-node-1] 2026-03-25 03:13:35.250469 | orchestrator | ok: 
[testbed-node-2] 2026-03-25 03:13:35.250477 | orchestrator | 2026-03-25 03:13:35.250486 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] *** 2026-03-25 03:13:35.250501 | orchestrator | Wednesday 25 March 2026 03:12:13 +0000 (0:00:00.411) 0:00:48.547 ******* 2026-03-25 03:13:35.250509 | orchestrator | skipping: [testbed-node-0] 2026-03-25 03:13:35.250519 | orchestrator | skipping: [testbed-node-1] 2026-03-25 03:13:35.250527 | orchestrator | skipping: [testbed-node-2] 2026-03-25 03:13:35.250535 | orchestrator | 2026-03-25 03:13:35.250543 | orchestrator | TASK [keystone : Running Keystone bootstrap container] ************************* 2026-03-25 03:13:35.250552 | orchestrator | Wednesday 25 March 2026 03:12:14 +0000 (0:00:00.624) 0:00:49.172 ******* 2026-03-25 03:13:35.250561 | orchestrator | changed: [testbed-node-0] 2026-03-25 03:13:35.250570 | orchestrator | 2026-03-25 03:13:35.250578 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ****************** 2026-03-25 03:13:35.250587 | orchestrator | Wednesday 25 March 2026 03:12:27 +0000 (0:00:13.433) 0:01:02.605 ******* 2026-03-25 03:13:35.250595 | orchestrator | changed: [testbed-node-0] 2026-03-25 03:13:35.250600 | orchestrator | 2026-03-25 03:13:35.250605 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-03-25 03:13:35.250610 | orchestrator | Wednesday 25 March 2026 03:12:37 +0000 (0:00:09.792) 0:01:12.398 ******* 2026-03-25 03:13:35.250622 | orchestrator | 2026-03-25 03:13:35.250627 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-03-25 03:13:35.250634 | orchestrator | Wednesday 25 March 2026 03:12:37 +0000 (0:00:00.071) 0:01:12.470 ******* 2026-03-25 03:13:35.250642 | orchestrator | 2026-03-25 03:13:35.250649 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-03-25 
03:13:35.250657 | orchestrator | Wednesday 25 March 2026 03:12:37 +0000 (0:00:00.080) 0:01:12.550 ******* 2026-03-25 03:13:35.250664 | orchestrator | 2026-03-25 03:13:35.250671 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ******************** 2026-03-25 03:13:35.250679 | orchestrator | Wednesday 25 March 2026 03:12:37 +0000 (0:00:00.081) 0:01:12.632 ******* 2026-03-25 03:13:35.250687 | orchestrator | changed: [testbed-node-0] 2026-03-25 03:13:35.250695 | orchestrator | changed: [testbed-node-2] 2026-03-25 03:13:35.250703 | orchestrator | changed: [testbed-node-1] 2026-03-25 03:13:35.250711 | orchestrator | 2026-03-25 03:13:35.250720 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] ***************** 2026-03-25 03:13:35.250728 | orchestrator | Wednesday 25 March 2026 03:13:22 +0000 (0:00:44.811) 0:01:57.443 ******* 2026-03-25 03:13:35.250736 | orchestrator | changed: [testbed-node-0] 2026-03-25 03:13:35.250744 | orchestrator | changed: [testbed-node-2] 2026-03-25 03:13:35.250752 | orchestrator | changed: [testbed-node-1] 2026-03-25 03:13:35.250762 | orchestrator | 2026-03-25 03:13:35.250770 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************ 2026-03-25 03:13:35.250777 | orchestrator | Wednesday 25 March 2026 03:13:27 +0000 (0:00:05.222) 0:02:02.666 ******* 2026-03-25 03:13:35.250786 | orchestrator | changed: [testbed-node-0] 2026-03-25 03:13:35.250793 | orchestrator | changed: [testbed-node-2] 2026-03-25 03:13:35.250798 | orchestrator | changed: [testbed-node-1] 2026-03-25 03:13:35.250803 | orchestrator | 2026-03-25 03:13:35.250808 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-03-25 03:13:35.250813 | orchestrator | Wednesday 25 March 2026 03:13:34 +0000 (0:00:06.985) 0:02:09.651 ******* 2026-03-25 03:13:35.250825 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for 
testbed-node-0, testbed-node-1, testbed-node-2 2026-03-25 03:14:22.521818 | orchestrator | 2026-03-25 03:14:22.522129 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] *********************** 2026-03-25 03:14:22.522169 | orchestrator | Wednesday 25 March 2026 03:13:35 +0000 (0:00:00.667) 0:02:10.319 ******* 2026-03-25 03:14:22.522189 | orchestrator | ok: [testbed-node-1] 2026-03-25 03:14:22.522210 | orchestrator | ok: [testbed-node-0] 2026-03-25 03:14:22.522229 | orchestrator | ok: [testbed-node-2] 2026-03-25 03:14:22.522247 | orchestrator | 2026-03-25 03:14:22.522266 | orchestrator | TASK [keystone : Run key distribution] ***************************************** 2026-03-25 03:14:22.522284 | orchestrator | Wednesday 25 March 2026 03:13:36 +0000 (0:00:01.262) 0:02:11.582 ******* 2026-03-25 03:14:22.522304 | orchestrator | changed: [testbed-node-0] 2026-03-25 03:14:22.522327 | orchestrator | 2026-03-25 03:14:22.522346 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] **** 2026-03-25 03:14:22.522367 | orchestrator | Wednesday 25 March 2026 03:13:38 +0000 (0:00:01.825) 0:02:13.407 ******* 2026-03-25 03:14:22.522385 | orchestrator | changed: [testbed-node-0] => (item=RegionOne) 2026-03-25 03:14:22.522408 | orchestrator | 2026-03-25 03:14:22.522429 | orchestrator | TASK [service-ks-register : keystone | Creating services] ********************** 2026-03-25 03:14:22.522449 | orchestrator | Wednesday 25 March 2026 03:13:48 +0000 (0:00:09.989) 0:02:23.397 ******* 2026-03-25 03:14:22.522468 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity)) 2026-03-25 03:14:22.522488 | orchestrator | 2026-03-25 03:14:22.522509 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] ********************* 2026-03-25 03:14:22.522528 | orchestrator | Wednesday 25 March 2026 03:14:11 +0000 (0:00:23.228) 0:02:46.626 ******* 2026-03-25 03:14:22.522547 | orchestrator | ok: [testbed-node-0] => 
(item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal) 2026-03-25 03:14:22.522610 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public) 2026-03-25 03:14:22.522632 | orchestrator | 2026-03-25 03:14:22.522652 | orchestrator | TASK [service-ks-register : keystone | Creating projects] ********************** 2026-03-25 03:14:22.522671 | orchestrator | Wednesday 25 March 2026 03:14:17 +0000 (0:00:05.646) 0:02:52.273 ******* 2026-03-25 03:14:22.522688 | orchestrator | skipping: [testbed-node-0] 2026-03-25 03:14:22.522707 | orchestrator | 2026-03-25 03:14:22.522724 | orchestrator | TASK [service-ks-register : keystone | Creating users] ************************* 2026-03-25 03:14:22.522743 | orchestrator | Wednesday 25 March 2026 03:14:17 +0000 (0:00:00.167) 0:02:52.440 ******* 2026-03-25 03:14:22.522760 | orchestrator | skipping: [testbed-node-0] 2026-03-25 03:14:22.522778 | orchestrator | 2026-03-25 03:14:22.522795 | orchestrator | TASK [service-ks-register : keystone | Creating roles] ************************* 2026-03-25 03:14:22.522813 | orchestrator | Wednesday 25 March 2026 03:14:17 +0000 (0:00:00.122) 0:02:52.562 ******* 2026-03-25 03:14:22.522830 | orchestrator | skipping: [testbed-node-0] 2026-03-25 03:14:22.522849 | orchestrator | 2026-03-25 03:14:22.522888 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ******************** 2026-03-25 03:14:22.522907 | orchestrator | Wednesday 25 March 2026 03:14:17 +0000 (0:00:00.167) 0:02:52.729 ******* 2026-03-25 03:14:22.522925 | orchestrator | skipping: [testbed-node-0] 2026-03-25 03:14:22.523050 | orchestrator | 2026-03-25 03:14:22.523070 | orchestrator | TASK [keystone : Creating default user role] *********************************** 2026-03-25 03:14:22.523089 | orchestrator | Wednesday 25 March 2026 03:14:18 +0000 (0:00:00.591) 0:02:53.320 ******* 2026-03-25 03:14:22.523107 | orchestrator | ok: [testbed-node-0] 2026-03-25 
03:14:22.523124 | orchestrator | 2026-03-25 03:14:22.523135 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-03-25 03:14:22.523146 | orchestrator | Wednesday 25 March 2026 03:14:21 +0000 (0:00:03.269) 0:02:56.590 ******* 2026-03-25 03:14:22.523157 | orchestrator | skipping: [testbed-node-0] 2026-03-25 03:14:22.523167 | orchestrator | skipping: [testbed-node-1] 2026-03-25 03:14:22.523178 | orchestrator | skipping: [testbed-node-2] 2026-03-25 03:14:22.523189 | orchestrator | 2026-03-25 03:14:22.523200 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-25 03:14:22.523212 | orchestrator | testbed-node-0 : ok=33  changed=19  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-03-25 03:14:22.523225 | orchestrator | testbed-node-1 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-03-25 03:14:22.523236 | orchestrator | testbed-node-2 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-03-25 03:14:22.523247 | orchestrator | 2026-03-25 03:14:22.523258 | orchestrator | 2026-03-25 03:14:22.523269 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-25 03:14:22.523280 | orchestrator | Wednesday 25 March 2026 03:14:22 +0000 (0:00:00.506) 0:02:57.096 ******* 2026-03-25 03:14:22.523291 | orchestrator | =============================================================================== 2026-03-25 03:14:22.523301 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 44.81s 2026-03-25 03:14:22.523312 | orchestrator | service-ks-register : keystone | Creating services --------------------- 23.23s 2026-03-25 03:14:22.523323 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 13.43s 2026-03-25 03:14:22.523334 | orchestrator | keystone : Creating admin project, user, role, service, and 
endpoint ---- 9.99s 2026-03-25 03:14:22.523344 | orchestrator | keystone : Running Keystone fernet bootstrap container ------------------ 9.79s 2026-03-25 03:14:22.523355 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 8.83s 2026-03-25 03:14:22.523366 | orchestrator | keystone : Restart keystone container ----------------------------------- 6.99s 2026-03-25 03:14:22.523377 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 5.65s 2026-03-25 03:14:22.523401 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 5.25s 2026-03-25 03:14:22.523437 | orchestrator | keystone : Restart keystone-fernet container ---------------------------- 5.22s 2026-03-25 03:14:22.523448 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.31s 2026-03-25 03:14:22.523457 | orchestrator | keystone : Creating default user role ----------------------------------- 3.27s 2026-03-25 03:14:22.523467 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.03s 2026-03-25 03:14:22.523476 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 2.66s 2026-03-25 03:14:22.523486 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.31s 2026-03-25 03:14:22.523495 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.12s 2026-03-25 03:14:22.523505 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.10s 2026-03-25 03:14:22.523514 | orchestrator | keystone : Copying over wsgi-keystone.conf ------------------------------ 1.92s 2026-03-25 03:14:22.523524 | orchestrator | keystone : Run key distribution ----------------------------------------- 1.83s 2026-03-25 03:14:22.523533 | orchestrator | keystone : Ensuring config directories exist ---------------------------- 
1.71s 2026-03-25 03:14:25.197607 | orchestrator | 2026-03-25 03:14:25 | INFO  | Task 1058a961-8b1f-4e5f-bee8-9c338881c141 (placement) was prepared for execution. 2026-03-25 03:14:25.197761 | orchestrator | 2026-03-25 03:14:25 | INFO  | It takes a moment until task 1058a961-8b1f-4e5f-bee8-9c338881c141 (placement) has been started and output is visible here. 2026-03-25 03:15:00.420087 | orchestrator | 2026-03-25 03:15:00.420207 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-25 03:15:00.420221 | orchestrator | 2026-03-25 03:15:00.420230 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-25 03:15:00.420237 | orchestrator | Wednesday 25 March 2026 03:14:29 +0000 (0:00:00.294) 0:00:00.294 ******* 2026-03-25 03:15:00.420245 | orchestrator | ok: [testbed-node-0] 2026-03-25 03:15:00.420254 | orchestrator | ok: [testbed-node-1] 2026-03-25 03:15:00.420263 | orchestrator | ok: [testbed-node-2] 2026-03-25 03:15:00.420271 | orchestrator | 2026-03-25 03:15:00.420278 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-25 03:15:00.420286 | orchestrator | Wednesday 25 March 2026 03:14:30 +0000 (0:00:00.338) 0:00:00.632 ******* 2026-03-25 03:15:00.420294 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True) 2026-03-25 03:15:00.420303 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True) 2026-03-25 03:15:00.420310 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True) 2026-03-25 03:15:00.420318 | orchestrator | 2026-03-25 03:15:00.420340 | orchestrator | PLAY [Apply role placement] **************************************************** 2026-03-25 03:15:00.420349 | orchestrator | 2026-03-25 03:15:00.420357 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-03-25 03:15:00.420364 | orchestrator | Wednesday 25 March 2026 
03:14:30 +0000 (0:00:00.502) 0:00:01.135 ******* 2026-03-25 03:15:00.420372 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-25 03:15:00.420380 | orchestrator | 2026-03-25 03:15:00.420388 | orchestrator | TASK [service-ks-register : placement | Creating services] ********************* 2026-03-25 03:15:00.420395 | orchestrator | Wednesday 25 March 2026 03:14:31 +0000 (0:00:00.613) 0:00:01.748 ******* 2026-03-25 03:15:00.420402 | orchestrator | changed: [testbed-node-0] => (item=placement (placement)) 2026-03-25 03:15:00.420409 | orchestrator | 2026-03-25 03:15:00.420417 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ******************** 2026-03-25 03:15:00.420424 | orchestrator | Wednesday 25 March 2026 03:14:35 +0000 (0:00:03.632) 0:00:05.381 ******* 2026-03-25 03:15:00.420431 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal) 2026-03-25 03:15:00.420463 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public) 2026-03-25 03:15:00.420471 | orchestrator | 2026-03-25 03:15:00.420478 | orchestrator | TASK [service-ks-register : placement | Creating projects] ********************* 2026-03-25 03:15:00.420485 | orchestrator | Wednesday 25 March 2026 03:14:41 +0000 (0:00:06.431) 0:00:11.813 ******* 2026-03-25 03:15:00.420493 | orchestrator | changed: [testbed-node-0] => (item=service) 2026-03-25 03:15:00.420500 | orchestrator | 2026-03-25 03:15:00.420508 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************ 2026-03-25 03:15:00.420515 | orchestrator | Wednesday 25 March 2026 03:14:45 +0000 (0:00:03.560) 0:00:15.373 ******* 2026-03-25 03:15:00.420523 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-25 03:15:00.420530 | orchestrator | changed: [testbed-node-0] => 
(item=placement -> service) 2026-03-25 03:15:00.420538 | orchestrator | 2026-03-25 03:15:00.420545 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************ 2026-03-25 03:15:00.420553 | orchestrator | Wednesday 25 March 2026 03:14:48 +0000 (0:00:03.895) 0:00:19.268 ******* 2026-03-25 03:15:00.420560 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-25 03:15:00.420568 | orchestrator | 2026-03-25 03:15:00.420575 | orchestrator | TASK [service-ks-register : placement | Granting user roles] ******************* 2026-03-25 03:15:00.420582 | orchestrator | Wednesday 25 March 2026 03:14:51 +0000 (0:00:03.084) 0:00:22.353 ******* 2026-03-25 03:15:00.420589 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin) 2026-03-25 03:15:00.420597 | orchestrator | 2026-03-25 03:15:00.420604 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-03-25 03:15:00.420612 | orchestrator | Wednesday 25 March 2026 03:14:55 +0000 (0:00:04.000) 0:00:26.353 ******* 2026-03-25 03:15:00.420619 | orchestrator | skipping: [testbed-node-0] 2026-03-25 03:15:00.420627 | orchestrator | skipping: [testbed-node-1] 2026-03-25 03:15:00.420634 | orchestrator | skipping: [testbed-node-2] 2026-03-25 03:15:00.420641 | orchestrator | 2026-03-25 03:15:00.420649 | orchestrator | TASK [placement : Ensuring config directories exist] *************************** 2026-03-25 03:15:00.420656 | orchestrator | Wednesday 25 March 2026 03:14:56 +0000 (0:00:00.333) 0:00:26.687 ******* 2026-03-25 03:15:00.420668 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-25 03:15:00.420701 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-25 03:15:00.420717 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-25 03:15:00.420725 | orchestrator | 2026-03-25 03:15:00.420733 | orchestrator | TASK [placement : Check if policies shall be overwritten] ********************** 2026-03-25 03:15:00.420741 | orchestrator | Wednesday 25 March 2026 03:14:57 +0000 (0:00:01.096) 0:00:27.784 ******* 2026-03-25 03:15:00.420748 | orchestrator | skipping: [testbed-node-0] 2026-03-25 03:15:00.420756 | orchestrator | 2026-03-25 03:15:00.420763 | orchestrator | TASK [placement : Set placement policy file] *********************************** 2026-03-25 03:15:00.420770 | orchestrator | Wednesday 25 March 2026 03:14:57 +0000 (0:00:00.355) 0:00:28.139 ******* 2026-03-25 03:15:00.420777 | orchestrator | skipping: [testbed-node-0] 2026-03-25 03:15:00.420785 | orchestrator | skipping: [testbed-node-1] 2026-03-25 03:15:00.420792 | orchestrator | skipping: [testbed-node-2] 2026-03-25 03:15:00.420800 | orchestrator | 2026-03-25 03:15:00.420807 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-03-25 03:15:00.420814 | orchestrator | Wednesday 25 March 2026 03:14:58 +0000 (0:00:00.346) 0:00:28.486 ******* 2026-03-25 03:15:00.420821 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-25 03:15:00.420829 | orchestrator | 2026-03-25 03:15:00.420836 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ****** 2026-03-25 03:15:00.420843 | orchestrator | Wednesday 25 March 2026 03:14:58 +0000 (0:00:00.642) 
0:00:29.128 ******* 2026-03-25 03:15:00.420850 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-25 03:15:00.420864 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-25 03:15:03.430152 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-25 03:15:03.430263 | orchestrator | 2026-03-25 03:15:03.430282 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2026-03-25 03:15:03.430294 | orchestrator | Wednesday 25 March 2026 03:15:00 +0000 (0:00:01.642) 0:00:30.771 ******* 2026-03-25 03:15:03.430306 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-25 03:15:03.430318 | orchestrator | skipping: [testbed-node-0] 2026-03-25 03:15:03.430331 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-25 03:15:03.430341 | orchestrator | skipping: [testbed-node-1] 2026-03-25 03:15:03.430353 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 
'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-25 03:15:03.430388 | orchestrator | skipping: [testbed-node-2] 2026-03-25 03:15:03.430399 | orchestrator | 2026-03-25 03:15:03.430410 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2026-03-25 03:15:03.430442 | orchestrator | Wednesday 25 March 2026 03:15:00 +0000 (0:00:00.566) 0:00:31.337 ******* 2026-03-25 03:15:03.430461 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-25 03:15:03.430468 | orchestrator | skipping: [testbed-node-0] 2026-03-25 03:15:03.430475 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-25 03:15:03.430482 | orchestrator | skipping: [testbed-node-1] 2026-03-25 03:15:03.430488 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-25 03:15:03.430495 | orchestrator | skipping: [testbed-node-2] 2026-03-25 03:15:03.430501 | orchestrator | 2026-03-25 03:15:03.430507 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2026-03-25 03:15:03.430513 | orchestrator | Wednesday 25 March 2026 03:15:01 +0000 (0:00:00.775) 0:00:32.112 ******* 2026-03-25 03:15:03.430520 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-25 03:15:03.430542 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-25 03:15:10.530875 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 
'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-25 03:15:10.531094 | orchestrator | 2026-03-25 03:15:10.531113 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2026-03-25 03:15:10.531120 | orchestrator | Wednesday 25 March 2026 03:15:03 +0000 (0:00:01.671) 0:00:33.784 ******* 2026-03-25 03:15:10.531127 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 
'no'}}}}) 2026-03-25 03:15:10.531134 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-25 03:15:10.531176 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-25 03:15:10.531190 | orchestrator | 2026-03-25 03:15:10.531202 | orchestrator | 
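The "Copying over placement.conf" task above renders the service configuration from the role's Jinja2 templates onto each node. As a rough sketch only — values below are illustrative and not taken from this log, except the internal auth URL, service project, and placement user visible in the service-ks-register tasks — a rendered placement.conf typically looks like:

```ini
# Hypothetical sketch of a kolla-rendered placement.conf.
# Passwords and the database host are placeholders; real values come from
# the deployment's passwords.yml and globals.yml, not from this log.
[DEFAULT]
debug = False
log_dir = /var/log/kolla/placement

[api]
auth_strategy = keystone

[placement_database]
# Database host/port are assumptions for illustration.
connection = mysql+pymysql://placement:PLACEMENT_DATABASE_PASSWORD@DB_HOST:3306/placement

[keystone_authtoken]
# Internal endpoint, service project, and placement user match the
# service-ks-register output earlier in this log.
auth_url = https://api-int.testbed.osism.xyz:5000
auth_type = password
project_domain_id = default
user_domain_id = default
project_name = service
username = placement
password = PLACEMENT_KEYSTONE_PASSWORD
```

The subsequent "Flush handlers" / "Restart placement-api container" steps pick up this file because kolla containers mount /etc/kolla/placement-api/ read-only, as shown in the volumes list of each task item.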
TASK [placement : Copying over placement-api wsgi configuration] *************** 2026-03-25 03:15:10.531211 | orchestrator | Wednesday 25 March 2026 03:15:05 +0000 (0:00:02.398) 0:00:36.182 ******* 2026-03-25 03:15:10.531239 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-03-25 03:15:10.531250 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-03-25 03:15:10.531259 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-03-25 03:15:10.531268 | orchestrator | 2026-03-25 03:15:10.531277 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] ***************** 2026-03-25 03:15:10.531286 | orchestrator | Wednesday 25 March 2026 03:15:07 +0000 (0:00:01.515) 0:00:37.697 ******* 2026-03-25 03:15:10.531296 | orchestrator | changed: [testbed-node-0] 2026-03-25 03:15:10.531307 | orchestrator | changed: [testbed-node-1] 2026-03-25 03:15:10.531317 | orchestrator | changed: [testbed-node-2] 2026-03-25 03:15:10.531326 | orchestrator | 2026-03-25 03:15:10.531335 | orchestrator | TASK [placement : Copying over existing policy file] *************************** 2026-03-25 03:15:10.531344 | orchestrator | Wednesday 25 March 2026 03:15:08 +0000 (0:00:01.351) 0:00:39.049 ******* 2026-03-25 03:15:10.531352 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-25 03:15:10.531362 | orchestrator | skipping: [testbed-node-0] 2026-03-25 03:15:10.531371 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-25 03:15:10.531389 | orchestrator | skipping: [testbed-node-1] 2026-03-25 03:15:10.531399 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-25 03:15:10.531409 | orchestrator | skipping: [testbed-node-2] 2026-03-25 03:15:10.531418 | orchestrator | 2026-03-25 03:15:10.531428 | orchestrator | TASK [placement : Check placement containers] ********************************** 2026-03-25 03:15:10.531444 | orchestrator | Wednesday 25 March 2026 03:15:09 +0000 (0:00:00.847) 0:00:39.897 ******* 2026-03-25 03:15:10.531466 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-25 03:15:33.025543 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 
'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-25 03:15:33.025702 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-25 03:15:33.025722 | orchestrator | 2026-03-25 03:15:33.025734 | orchestrator | TASK [placement : Creating placement databases] ******************************** 2026-03-25 03:15:33.025746 | orchestrator | Wednesday 25 March 2026 03:15:10 +0000 (0:00:00.992) 0:00:40.889 ******* 2026-03-25 03:15:33.025755 | orchestrator | changed: [testbed-node-0] 2026-03-25 
03:15:33.025766 | orchestrator | 2026-03-25 03:15:33.025776 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] *** 2026-03-25 03:15:33.025785 | orchestrator | Wednesday 25 March 2026 03:15:12 +0000 (0:00:01.776) 0:00:42.666 ******* 2026-03-25 03:15:33.025795 | orchestrator | changed: [testbed-node-0] 2026-03-25 03:15:33.025804 | orchestrator | 2026-03-25 03:15:33.025813 | orchestrator | TASK [placement : Running placement bootstrap container] *********************** 2026-03-25 03:15:33.025823 | orchestrator | Wednesday 25 March 2026 03:15:14 +0000 (0:00:01.866) 0:00:44.533 ******* 2026-03-25 03:15:33.025833 | orchestrator | changed: [testbed-node-0] 2026-03-25 03:15:33.025842 | orchestrator | 2026-03-25 03:15:33.025852 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-03-25 03:15:33.025861 | orchestrator | Wednesday 25 March 2026 03:15:27 +0000 (0:00:13.041) 0:00:57.574 ******* 2026-03-25 03:15:33.025872 | orchestrator | 2026-03-25 03:15:33.025881 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-03-25 03:15:33.025891 | orchestrator | Wednesday 25 March 2026 03:15:27 +0000 (0:00:00.079) 0:00:57.654 ******* 2026-03-25 03:15:33.025900 | orchestrator | 2026-03-25 03:15:33.025909 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-03-25 03:15:33.025919 | orchestrator | Wednesday 25 March 2026 03:15:27 +0000 (0:00:00.080) 0:00:57.734 ******* 2026-03-25 03:15:33.025928 | orchestrator | 2026-03-25 03:15:33.025937 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ****************** 2026-03-25 03:15:33.026137 | orchestrator | Wednesday 25 March 2026 03:15:27 +0000 (0:00:00.077) 0:00:57.812 ******* 2026-03-25 03:15:33.026154 | orchestrator | changed: [testbed-node-0] 2026-03-25 03:15:33.026183 | orchestrator | changed: [testbed-node-2] 2026-03-25 
03:15:33.026194 | orchestrator | changed: [testbed-node-1] 2026-03-25 03:15:33.026205 | orchestrator | 2026-03-25 03:15:33.026215 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-25 03:15:33.026227 | orchestrator | testbed-node-0 : ok=21  changed=16  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-25 03:15:33.026238 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-25 03:15:33.026249 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-25 03:15:33.026259 | orchestrator | 2026-03-25 03:15:33.026269 | orchestrator | 2026-03-25 03:15:33.026278 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-25 03:15:33.026290 | orchestrator | Wednesday 25 March 2026 03:15:32 +0000 (0:00:05.175) 0:01:02.987 ******* 2026-03-25 03:15:33.026312 | orchestrator | =============================================================================== 2026-03-25 03:15:33.026323 | orchestrator | placement : Running placement bootstrap container ---------------------- 13.04s 2026-03-25 03:15:33.026354 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 6.43s 2026-03-25 03:15:33.026365 | orchestrator | placement : Restart placement-api container ----------------------------- 5.18s 2026-03-25 03:15:33.026377 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 4.00s 2026-03-25 03:15:33.026387 | orchestrator | service-ks-register : placement | Creating users ------------------------ 3.90s 2026-03-25 03:15:33.026398 | orchestrator | service-ks-register : placement | Creating services --------------------- 3.63s 2026-03-25 03:15:33.026408 | orchestrator | service-ks-register : placement | Creating projects --------------------- 3.56s 2026-03-25 03:15:33.026418 | orchestrator | 
service-ks-register : placement | Creating roles ------------------------ 3.08s 2026-03-25 03:15:33.026430 | orchestrator | placement : Copying over placement.conf --------------------------------- 2.40s 2026-03-25 03:15:33.026440 | orchestrator | placement : Creating placement databases user and setting permissions --- 1.87s 2026-03-25 03:15:33.026450 | orchestrator | placement : Creating placement databases -------------------------------- 1.78s 2026-03-25 03:15:33.026461 | orchestrator | placement : Copying over config.json files for services ----------------- 1.67s 2026-03-25 03:15:33.026471 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 1.64s 2026-03-25 03:15:33.026482 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 1.52s 2026-03-25 03:15:33.026492 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.35s 2026-03-25 03:15:33.026502 | orchestrator | placement : Ensuring config directories exist --------------------------- 1.10s 2026-03-25 03:15:33.026512 | orchestrator | placement : Check placement containers ---------------------------------- 0.99s 2026-03-25 03:15:33.026522 | orchestrator | placement : Copying over existing policy file --------------------------- 0.85s 2026-03-25 03:15:33.026533 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS key --- 0.78s 2026-03-25 03:15:33.026543 | orchestrator | placement : include_tasks ----------------------------------------------- 0.64s 2026-03-25 03:15:35.627570 | orchestrator | 2026-03-25 03:15:35 | INFO  | Task 8c4c530d-8c48-4866-9cd4-83634e7421c9 (neutron) was prepared for execution. 2026-03-25 03:15:35.627673 | orchestrator | 2026-03-25 03:15:35 | INFO  | It takes a moment until task 8c4c530d-8c48-4866-9cd4-83634e7421c9 (neutron) has been started and output is visible here. 
2026-03-25 03:16:24.864120 | orchestrator | 2026-03-25 03:16:24.864242 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-25 03:16:24.864251 | orchestrator | 2026-03-25 03:16:24.864257 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-25 03:16:24.864263 | orchestrator | Wednesday 25 March 2026 03:15:40 +0000 (0:00:00.305) 0:00:00.305 ******* 2026-03-25 03:16:24.864268 | orchestrator | ok: [testbed-node-0] 2026-03-25 03:16:24.864275 | orchestrator | ok: [testbed-node-1] 2026-03-25 03:16:24.864280 | orchestrator | ok: [testbed-node-2] 2026-03-25 03:16:24.864285 | orchestrator | ok: [testbed-node-3] 2026-03-25 03:16:24.864289 | orchestrator | ok: [testbed-node-4] 2026-03-25 03:16:24.864294 | orchestrator | ok: [testbed-node-5] 2026-03-25 03:16:24.864299 | orchestrator | 2026-03-25 03:16:24.864304 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-25 03:16:24.864309 | orchestrator | Wednesday 25 March 2026 03:15:41 +0000 (0:00:00.843) 0:00:01.149 ******* 2026-03-25 03:16:24.864314 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True) 2026-03-25 03:16:24.864320 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True) 2026-03-25 03:16:24.864325 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True) 2026-03-25 03:16:24.864329 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True) 2026-03-25 03:16:24.864334 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True) 2026-03-25 03:16:24.864359 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True) 2026-03-25 03:16:24.864364 | orchestrator | 2026-03-25 03:16:24.864369 | orchestrator | PLAY [Apply role neutron] ****************************************************** 2026-03-25 03:16:24.864374 | orchestrator | 2026-03-25 03:16:24.864379 | orchestrator | TASK [neutron : include_tasks] 
************************************************* 2026-03-25 03:16:24.864384 | orchestrator | Wednesday 25 March 2026 03:15:42 +0000 (0:00:00.670) 0:00:01.819 ******* 2026-03-25 03:16:24.864405 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-25 03:16:24.864411 | orchestrator | 2026-03-25 03:16:24.864416 | orchestrator | TASK [neutron : Get container facts] ******************************************* 2026-03-25 03:16:24.864421 | orchestrator | Wednesday 25 March 2026 03:15:43 +0000 (0:00:01.400) 0:00:03.220 ******* 2026-03-25 03:16:24.864426 | orchestrator | ok: [testbed-node-0] 2026-03-25 03:16:24.864431 | orchestrator | ok: [testbed-node-1] 2026-03-25 03:16:24.864436 | orchestrator | ok: [testbed-node-2] 2026-03-25 03:16:24.864440 | orchestrator | ok: [testbed-node-3] 2026-03-25 03:16:24.864446 | orchestrator | ok: [testbed-node-4] 2026-03-25 03:16:24.864451 | orchestrator | ok: [testbed-node-5] 2026-03-25 03:16:24.864455 | orchestrator | 2026-03-25 03:16:24.864460 | orchestrator | TASK [neutron : Get container volume facts] ************************************ 2026-03-25 03:16:24.864465 | orchestrator | Wednesday 25 March 2026 03:15:45 +0000 (0:00:01.573) 0:00:04.794 ******* 2026-03-25 03:16:24.864470 | orchestrator | ok: [testbed-node-1] 2026-03-25 03:16:24.864475 | orchestrator | ok: [testbed-node-0] 2026-03-25 03:16:24.864479 | orchestrator | ok: [testbed-node-2] 2026-03-25 03:16:24.864484 | orchestrator | ok: [testbed-node-3] 2026-03-25 03:16:24.864489 | orchestrator | ok: [testbed-node-4] 2026-03-25 03:16:24.864493 | orchestrator | ok: [testbed-node-5] 2026-03-25 03:16:24.864498 | orchestrator | 2026-03-25 03:16:24.864503 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************ 2026-03-25 03:16:24.864508 | orchestrator | Wednesday 25 March 2026 03:15:46 +0000 (0:00:01.132) 0:00:05.926 ******* 
2026-03-25 03:16:24.864513 | orchestrator | ok: [testbed-node-0] => { 2026-03-25 03:16:24.864519 | orchestrator |  "changed": false, 2026-03-25 03:16:24.864524 | orchestrator |  "msg": "All assertions passed" 2026-03-25 03:16:24.864529 | orchestrator | } 2026-03-25 03:16:24.864534 | orchestrator | ok: [testbed-node-1] => { 2026-03-25 03:16:24.864539 | orchestrator |  "changed": false, 2026-03-25 03:16:24.864544 | orchestrator |  "msg": "All assertions passed" 2026-03-25 03:16:24.864549 | orchestrator | } 2026-03-25 03:16:24.864554 | orchestrator | ok: [testbed-node-2] => { 2026-03-25 03:16:24.864558 | orchestrator |  "changed": false, 2026-03-25 03:16:24.864563 | orchestrator |  "msg": "All assertions passed" 2026-03-25 03:16:24.864568 | orchestrator | } 2026-03-25 03:16:24.864573 | orchestrator | ok: [testbed-node-3] => { 2026-03-25 03:16:24.864579 | orchestrator |  "changed": false, 2026-03-25 03:16:24.864584 | orchestrator |  "msg": "All assertions passed" 2026-03-25 03:16:24.864590 | orchestrator | } 2026-03-25 03:16:24.864595 | orchestrator | ok: [testbed-node-4] => { 2026-03-25 03:16:24.864601 | orchestrator |  "changed": false, 2026-03-25 03:16:24.864607 | orchestrator |  "msg": "All assertions passed" 2026-03-25 03:16:24.864612 | orchestrator | } 2026-03-25 03:16:24.864618 | orchestrator | ok: [testbed-node-5] => { 2026-03-25 03:16:24.864623 | orchestrator |  "changed": false, 2026-03-25 03:16:24.864629 | orchestrator |  "msg": "All assertions passed" 2026-03-25 03:16:24.864634 | orchestrator | } 2026-03-25 03:16:24.864639 | orchestrator | 2026-03-25 03:16:24.864645 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************ 2026-03-25 03:16:24.864650 | orchestrator | Wednesday 25 March 2026 03:15:47 +0000 (0:00:00.947) 0:00:06.873 ******* 2026-03-25 03:16:24.864655 | orchestrator | skipping: [testbed-node-0] 2026-03-25 03:16:24.864661 | orchestrator | skipping: [testbed-node-1] 2026-03-25 03:16:24.864666 | orchestrator 
| skipping: [testbed-node-2] 2026-03-25 03:16:24.864679 | orchestrator | skipping: [testbed-node-3] 2026-03-25 03:16:24.864687 | orchestrator | skipping: [testbed-node-4] 2026-03-25 03:16:24.864696 | orchestrator | skipping: [testbed-node-5] 2026-03-25 03:16:24.864704 | orchestrator | 2026-03-25 03:16:24.864717 | orchestrator | TASK [service-ks-register : neutron | Creating services] *********************** 2026-03-25 03:16:24.864727 | orchestrator | Wednesday 25 March 2026 03:15:48 +0000 (0:00:00.728) 0:00:07.602 ******* 2026-03-25 03:16:24.864735 | orchestrator | changed: [testbed-node-0] => (item=neutron (network)) 2026-03-25 03:16:24.864743 | orchestrator | 2026-03-25 03:16:24.864751 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] ********************** 2026-03-25 03:16:24.864759 | orchestrator | Wednesday 25 March 2026 03:15:51 +0000 (0:00:03.310) 0:00:10.913 ******* 2026-03-25 03:16:24.864768 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal) 2026-03-25 03:16:24.864778 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public) 2026-03-25 03:16:24.864787 | orchestrator | 2026-03-25 03:16:24.864815 | orchestrator | TASK [service-ks-register : neutron | Creating projects] *********************** 2026-03-25 03:16:24.864825 | orchestrator | Wednesday 25 March 2026 03:15:57 +0000 (0:00:06.337) 0:00:17.251 ******* 2026-03-25 03:16:24.864833 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-25 03:16:24.864842 | orchestrator | 2026-03-25 03:16:24.864849 | orchestrator | TASK [service-ks-register : neutron | Creating users] ************************** 2026-03-25 03:16:24.864855 | orchestrator | Wednesday 25 March 2026 03:16:00 +0000 (0:00:02.995) 0:00:20.246 ******* 2026-03-25 03:16:24.864861 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-25 03:16:24.864867 | orchestrator | changed: 
[testbed-node-0] => (item=neutron -> service) 2026-03-25 03:16:24.864872 | orchestrator | 2026-03-25 03:16:24.864878 | orchestrator | TASK [service-ks-register : neutron | Creating roles] ************************** 2026-03-25 03:16:24.864883 | orchestrator | Wednesday 25 March 2026 03:16:04 +0000 (0:00:03.600) 0:00:23.846 ******* 2026-03-25 03:16:24.864888 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-25 03:16:24.864894 | orchestrator | 2026-03-25 03:16:24.864900 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] ********************* 2026-03-25 03:16:24.864905 | orchestrator | Wednesday 25 March 2026 03:16:07 +0000 (0:00:02.995) 0:00:26.842 ******* 2026-03-25 03:16:24.864910 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin) 2026-03-25 03:16:24.864916 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service) 2026-03-25 03:16:24.864922 | orchestrator | 2026-03-25 03:16:24.864927 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-03-25 03:16:24.864932 | orchestrator | Wednesday 25 March 2026 03:16:14 +0000 (0:00:07.458) 0:00:34.301 ******* 2026-03-25 03:16:24.864938 | orchestrator | skipping: [testbed-node-0] 2026-03-25 03:16:24.864944 | orchestrator | skipping: [testbed-node-1] 2026-03-25 03:16:24.864949 | orchestrator | skipping: [testbed-node-2] 2026-03-25 03:16:24.864982 | orchestrator | skipping: [testbed-node-3] 2026-03-25 03:16:24.864990 | orchestrator | skipping: [testbed-node-4] 2026-03-25 03:16:24.865004 | orchestrator | skipping: [testbed-node-5] 2026-03-25 03:16:24.865012 | orchestrator | 2026-03-25 03:16:24.865020 | orchestrator | TASK [Load and persist kernel modules] ***************************************** 2026-03-25 03:16:24.865028 | orchestrator | Wednesday 25 March 2026 03:16:15 +0000 (0:00:00.905) 0:00:35.206 ******* 2026-03-25 03:16:24.865036 | orchestrator | skipping: [testbed-node-1] 2026-03-25 
03:16:24.865044 | orchestrator | skipping: [testbed-node-0] 2026-03-25 03:16:24.865049 | orchestrator | skipping: [testbed-node-2] 2026-03-25 03:16:24.865054 | orchestrator | skipping: [testbed-node-4] 2026-03-25 03:16:24.865058 | orchestrator | skipping: [testbed-node-3] 2026-03-25 03:16:24.865063 | orchestrator | skipping: [testbed-node-5] 2026-03-25 03:16:24.865067 | orchestrator | 2026-03-25 03:16:24.865072 | orchestrator | TASK [neutron : Check IPv6 support] ******************************************** 2026-03-25 03:16:24.865077 | orchestrator | Wednesday 25 March 2026 03:16:18 +0000 (0:00:02.548) 0:00:37.754 ******* 2026-03-25 03:16:24.865087 | orchestrator | ok: [testbed-node-0] 2026-03-25 03:16:24.865092 | orchestrator | ok: [testbed-node-1] 2026-03-25 03:16:24.865097 | orchestrator | ok: [testbed-node-2] 2026-03-25 03:16:24.865102 | orchestrator | ok: [testbed-node-3] 2026-03-25 03:16:24.865106 | orchestrator | ok: [testbed-node-4] 2026-03-25 03:16:24.865111 | orchestrator | ok: [testbed-node-5] 2026-03-25 03:16:24.865116 | orchestrator | 2026-03-25 03:16:24.865120 | orchestrator | TASK [Setting sysctl values] *************************************************** 2026-03-25 03:16:24.865125 | orchestrator | Wednesday 25 March 2026 03:16:19 +0000 (0:00:01.291) 0:00:39.045 ******* 2026-03-25 03:16:24.865130 | orchestrator | skipping: [testbed-node-0] 2026-03-25 03:16:24.865134 | orchestrator | skipping: [testbed-node-1] 2026-03-25 03:16:24.865139 | orchestrator | skipping: [testbed-node-2] 2026-03-25 03:16:24.865144 | orchestrator | skipping: [testbed-node-3] 2026-03-25 03:16:24.865148 | orchestrator | skipping: [testbed-node-4] 2026-03-25 03:16:24.865153 | orchestrator | skipping: [testbed-node-5] 2026-03-25 03:16:24.865158 | orchestrator | 2026-03-25 03:16:24.865162 | orchestrator | TASK [neutron : Ensuring config directories exist] ***************************** 2026-03-25 03:16:24.865167 | orchestrator | Wednesday 25 March 2026 03:16:22 +0000 (0:00:02.646) 
0:00:41.692 ******* 2026-03-25 03:16:24.865176 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-25 03:16:24.865195 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-25 03:16:30.732097 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-25 03:16:30.732218 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-25 03:16:30.732227 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-25 03:16:30.732231 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-25 03:16:30.732235 | orchestrator | 2026-03-25 03:16:30.732240 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] ***************************** 2026-03-25 03:16:30.732245 | orchestrator | Wednesday 25 March 2026 03:16:24 +0000 (0:00:02.701) 0:00:44.393 ******* 2026-03-25 03:16:30.732249 | orchestrator | [WARNING]: Skipped 2026-03-25 03:16:30.732254 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path 2026-03-25 03:16:30.732259 | orchestrator | due to this access issue: 2026-03-25 03:16:30.732264 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not 2026-03-25 03:16:30.732268 | orchestrator | a directory 2026-03-25 03:16:30.732272 
| orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-25 03:16:30.732276 | orchestrator | 2026-03-25 03:16:30.732280 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-03-25 03:16:30.732284 | orchestrator | Wednesday 25 March 2026 03:16:25 +0000 (0:00:00.893) 0:00:45.287 ******* 2026-03-25 03:16:30.732289 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-25 03:16:30.732294 | orchestrator | 2026-03-25 03:16:30.732298 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ******** 2026-03-25 03:16:30.732314 | orchestrator | Wednesday 25 March 2026 03:16:27 +0000 (0:00:01.461) 0:00:46.748 ******* 2026-03-25 03:16:30.732320 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-25 03:16:30.732329 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 
'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-25 03:16:30.732333 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-25 03:16:30.732337 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-25 03:16:30.732345 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-25 03:16:36.434142 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-25 03:16:36.434252 | orchestrator | 2026-03-25 03:16:36.434271 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] *** 2026-03-25 03:16:36.434287 | orchestrator | Wednesday 25 March 2026 03:16:30 +0000 (0:00:03.513) 0:00:50.262 ******* 2026-03-25 03:16:36.434296 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-25 03:16:36.434305 | orchestrator | skipping: [testbed-node-2] 2026-03-25 03:16:36.434313 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 
'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-25 03:16:36.435116 | orchestrator | skipping: [testbed-node-0] 2026-03-25 03:16:36.435148 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-25 03:16:36.435155 | orchestrator | skipping: [testbed-node-1] 2026-03-25 03:16:36.435205 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-25 03:16:36.435213 | orchestrator | skipping: [testbed-node-3] 2026-03-25 03:16:36.435228 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-25 03:16:36.435234 | orchestrator | skipping: [testbed-node-4] 2026-03-25 03:16:36.435241 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-25 03:16:36.435247 | orchestrator | skipping: [testbed-node-5] 
2026-03-25 03:16:36.435254 | orchestrator | 2026-03-25 03:16:36.435260 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2026-03-25 03:16:36.435267 | orchestrator | Wednesday 25 March 2026 03:16:33 +0000 (0:00:02.428) 0:00:52.690 ******* 2026-03-25 03:16:36.435273 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-25 03:16:36.435280 | orchestrator | skipping: [testbed-node-2] 2026-03-25 03:16:36.435292 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-25 03:16:42.983097 | orchestrator | skipping: [testbed-node-0] 2026-03-25 03:16:42.983196 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-25 03:16:42.983206 | orchestrator | skipping: [testbed-node-1] 2026-03-25 03:16:42.983212 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-25 03:16:42.983218 | orchestrator | skipping: [testbed-node-5] 2026-03-25 03:16:42.983222 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-25 03:16:42.983227 | orchestrator | skipping: [testbed-node-4] 2026-03-25 03:16:42.983232 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-25 03:16:42.983254 | orchestrator | skipping: [testbed-node-3] 2026-03-25 03:16:42.983258 | orchestrator | 2026-03-25 
03:16:42.983264 | orchestrator | TASK [neutron : Creating TLS backend PEM File] ********************************* 2026-03-25 03:16:42.983270 | orchestrator | Wednesday 25 March 2026 03:16:36 +0000 (0:00:03.272) 0:00:55.963 ******* 2026-03-25 03:16:42.983274 | orchestrator | skipping: [testbed-node-0] 2026-03-25 03:16:42.983278 | orchestrator | skipping: [testbed-node-2] 2026-03-25 03:16:42.983282 | orchestrator | skipping: [testbed-node-1] 2026-03-25 03:16:42.983287 | orchestrator | skipping: [testbed-node-3] 2026-03-25 03:16:42.983291 | orchestrator | skipping: [testbed-node-4] 2026-03-25 03:16:42.983295 | orchestrator | skipping: [testbed-node-5] 2026-03-25 03:16:42.983299 | orchestrator | 2026-03-25 03:16:42.983304 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************ 2026-03-25 03:16:42.983308 | orchestrator | Wednesday 25 March 2026 03:16:39 +0000 (0:00:02.888) 0:00:58.851 ******* 2026-03-25 03:16:42.983312 | orchestrator | skipping: [testbed-node-0] 2026-03-25 03:16:42.983317 | orchestrator | 2026-03-25 03:16:42.983321 | orchestrator | TASK [neutron : Set neutron policy file] *************************************** 2026-03-25 03:16:42.983336 | orchestrator | Wednesday 25 March 2026 03:16:39 +0000 (0:00:00.150) 0:00:59.001 ******* 2026-03-25 03:16:42.983341 | orchestrator | skipping: [testbed-node-0] 2026-03-25 03:16:42.983345 | orchestrator | skipping: [testbed-node-1] 2026-03-25 03:16:42.983350 | orchestrator | skipping: [testbed-node-2] 2026-03-25 03:16:42.983354 | orchestrator | skipping: [testbed-node-3] 2026-03-25 03:16:42.983358 | orchestrator | skipping: [testbed-node-4] 2026-03-25 03:16:42.983362 | orchestrator | skipping: [testbed-node-5] 2026-03-25 03:16:42.983367 | orchestrator | 2026-03-25 03:16:42.983371 | orchestrator | TASK [neutron : Copying over existing policy file] ***************************** 2026-03-25 03:16:42.983375 | orchestrator | Wednesday 25 March 2026 03:16:40 +0000 (0:00:00.691) 
0:00:59.693 ******* 2026-03-25 03:16:42.983383 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-25 03:16:42.983388 | orchestrator | skipping: [testbed-node-0] 2026-03-25 03:16:42.983393 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-25 
03:16:42.983401 | orchestrator | skipping: [testbed-node-1] 2026-03-25 03:16:42.983406 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-25 03:16:42.983410 | orchestrator | skipping: [testbed-node-2] 2026-03-25 03:16:42.983415 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-25 03:16:42.983419 | orchestrator | skipping: [testbed-node-4] 2026-03-25 03:16:42.983430 | orchestrator | skipping: 
[testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-25 03:16:52.521813 | orchestrator | skipping: [testbed-node-3] 2026-03-25 03:16:52.521924 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-25 03:16:52.521941 | orchestrator | skipping: [testbed-node-5] 2026-03-25 03:16:52.521952 | orchestrator | 2026-03-25 03:16:52.522083 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2026-03-25 03:16:52.522098 | orchestrator | Wednesday 25 March 2026 03:16:42 +0000 (0:00:02.810) 0:01:02.503 ******* 2026-03-25 03:16:52.522109 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-25 03:16:52.522152 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-25 03:16:52.522163 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': 
True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-25 03:16:52.522225 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-25 03:16:52.522249 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-25 03:16:52.522278 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-25 03:16:52.522296 | orchestrator | 2026-03-25 03:16:52.522313 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2026-03-25 03:16:52.522329 | orchestrator | Wednesday 25 March 2026 03:16:46 +0000 (0:00:03.272) 0:01:05.776 ******* 2026-03-25 03:16:52.522346 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-25 03:16:52.522364 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-25 03:16:52.522401 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-25 03:16:58.039771 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-25 03:16:58.039923 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 
2026-03-25 03:16:58.039944 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-25 03:16:58.039954 | orchestrator | 2026-03-25 03:16:58.040046 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2026-03-25 03:16:58.040056 | orchestrator | Wednesday 25 March 2026 03:16:52 +0000 (0:00:06.273) 0:01:12.050 ******* 2026-03-25 03:16:58.040063 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 
'listen_port': '9696'}}}})  2026-03-25 03:16:58.040084 | orchestrator | skipping: [testbed-node-1] 2026-03-25 03:16:58.040110 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-25 03:16:58.040125 | orchestrator | skipping: [testbed-node-2] 2026-03-25 03:16:58.040131 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-25 03:16:58.040138 | orchestrator | skipping: [testbed-node-0] 2026-03-25 03:16:58.040144 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-25 03:16:58.040151 | orchestrator | skipping: [testbed-node-3] 2026-03-25 03:16:58.040157 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-25 03:16:58.040164 | orchestrator | skipping: [testbed-node-4] 2026-03-25 03:16:58.040175 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': 
{'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-25 03:16:58.040181 | orchestrator | skipping: [testbed-node-5] 2026-03-25 03:16:58.040188 | orchestrator | 2026-03-25 03:16:58.040194 | orchestrator | TASK [neutron : Copying over ssh key] ****************************************** 2026-03-25 03:16:58.040205 | orchestrator | Wednesday 25 March 2026 03:16:55 +0000 (0:00:02.566) 0:01:14.616 ******* 2026-03-25 03:16:58.040212 | orchestrator | skipping: [testbed-node-3] 2026-03-25 03:16:58.040218 | orchestrator | changed: [testbed-node-0] 2026-03-25 03:16:58.040224 | orchestrator | skipping: [testbed-node-4] 2026-03-25 03:16:58.040230 | orchestrator | changed: [testbed-node-2] 2026-03-25 03:16:58.040236 | orchestrator | skipping: [testbed-node-5] 2026-03-25 03:16:58.040247 | orchestrator | changed: [testbed-node-1] 2026-03-25 03:17:20.662150 | orchestrator | 2026-03-25 03:17:20.662254 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] ************************************* 2026-03-25 03:17:20.662269 | orchestrator | Wednesday 25 March 2026 03:16:58 +0000 (0:00:02.951) 0:01:17.568 ******* 2026-03-25 03:17:20.662280 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 
'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-25 03:17:20.662291 | orchestrator | skipping: [testbed-node-3] 2026-03-25 03:17:20.662299 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-25 03:17:20.662305 | orchestrator | skipping: [testbed-node-4] 2026-03-25 03:17:20.662312 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-25 03:17:20.662319 | orchestrator | skipping: [testbed-node-5] 2026-03-25 03:17:20.662327 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-25 03:17:20.662390 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-25 03:17:20.662399 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-25 03:17:20.662406 | orchestrator | 2026-03-25 03:17:20.662412 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] **************************** 2026-03-25 03:17:20.662419 | orchestrator | Wednesday 25 March 2026 03:17:01 +0000 (0:00:03.878) 0:01:21.447 ******* 2026-03-25 03:17:20.662426 | orchestrator | skipping: [testbed-node-0] 2026-03-25 03:17:20.662432 | orchestrator | skipping: [testbed-node-1] 2026-03-25 03:17:20.662438 | orchestrator | skipping: [testbed-node-2] 2026-03-25 03:17:20.662444 | orchestrator | skipping: [testbed-node-3] 2026-03-25 03:17:20.662450 | orchestrator | skipping: [testbed-node-4] 2026-03-25 03:17:20.662457 | orchestrator | skipping: [testbed-node-5] 2026-03-25 03:17:20.662463 | orchestrator | 2026-03-25 03:17:20.662469 | orchestrator | 
TASK [neutron : Copying over openvswitch_agent.ini] **************************** 2026-03-25 03:17:20.662476 | orchestrator | Wednesday 25 March 2026 03:17:04 +0000 (0:00:02.695) 0:01:24.142 ******* 2026-03-25 03:17:20.662482 | orchestrator | skipping: [testbed-node-2] 2026-03-25 03:17:20.662488 | orchestrator | skipping: [testbed-node-0] 2026-03-25 03:17:20.662495 | orchestrator | skipping: [testbed-node-1] 2026-03-25 03:17:20.662501 | orchestrator | skipping: [testbed-node-3] 2026-03-25 03:17:20.662507 | orchestrator | skipping: [testbed-node-4] 2026-03-25 03:17:20.662513 | orchestrator | skipping: [testbed-node-5] 2026-03-25 03:17:20.662520 | orchestrator | 2026-03-25 03:17:20.662526 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] ********************************** 2026-03-25 03:17:20.662532 | orchestrator | Wednesday 25 March 2026 03:17:07 +0000 (0:00:02.557) 0:01:26.699 ******* 2026-03-25 03:17:20.662538 | orchestrator | skipping: [testbed-node-2] 2026-03-25 03:17:20.662545 | orchestrator | skipping: [testbed-node-0] 2026-03-25 03:17:20.662552 | orchestrator | skipping: [testbed-node-1] 2026-03-25 03:17:20.662558 | orchestrator | skipping: [testbed-node-4] 2026-03-25 03:17:20.662564 | orchestrator | skipping: [testbed-node-5] 2026-03-25 03:17:20.662570 | orchestrator | skipping: [testbed-node-3] 2026-03-25 03:17:20.662577 | orchestrator | 2026-03-25 03:17:20.662583 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] *********************************** 2026-03-25 03:17:20.662598 | orchestrator | Wednesday 25 March 2026 03:17:10 +0000 (0:00:03.059) 0:01:29.758 ******* 2026-03-25 03:17:20.662605 | orchestrator | skipping: [testbed-node-0] 2026-03-25 03:17:20.662611 | orchestrator | skipping: [testbed-node-1] 2026-03-25 03:17:20.662617 | orchestrator | skipping: [testbed-node-2] 2026-03-25 03:17:20.662623 | orchestrator | skipping: [testbed-node-5] 2026-03-25 03:17:20.662629 | orchestrator | skipping: [testbed-node-4] 2026-03-25 
03:17:20.662635 | orchestrator | skipping: [testbed-node-3] 2026-03-25 03:17:20.662640 | orchestrator | 2026-03-25 03:17:20.662646 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************ 2026-03-25 03:17:20.662653 | orchestrator | Wednesday 25 March 2026 03:17:12 +0000 (0:00:02.370) 0:01:32.129 ******* 2026-03-25 03:17:20.662659 | orchestrator | skipping: [testbed-node-0] 2026-03-25 03:17:20.662665 | orchestrator | skipping: [testbed-node-1] 2026-03-25 03:17:20.662671 | orchestrator | skipping: [testbed-node-2] 2026-03-25 03:17:20.662677 | orchestrator | skipping: [testbed-node-3] 2026-03-25 03:17:20.662683 | orchestrator | skipping: [testbed-node-4] 2026-03-25 03:17:20.662689 | orchestrator | skipping: [testbed-node-5] 2026-03-25 03:17:20.662695 | orchestrator | 2026-03-25 03:17:20.662701 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] *********************************** 2026-03-25 03:17:20.662707 | orchestrator | Wednesday 25 March 2026 03:17:15 +0000 (0:00:02.494) 0:01:34.623 ******* 2026-03-25 03:17:20.662713 | orchestrator | skipping: [testbed-node-1] 2026-03-25 03:17:20.662719 | orchestrator | skipping: [testbed-node-2] 2026-03-25 03:17:20.662725 | orchestrator | skipping: [testbed-node-0] 2026-03-25 03:17:20.662731 | orchestrator | skipping: [testbed-node-3] 2026-03-25 03:17:20.662743 | orchestrator | skipping: [testbed-node-4] 2026-03-25 03:17:20.662749 | orchestrator | skipping: [testbed-node-5] 2026-03-25 03:17:20.662754 | orchestrator | 2026-03-25 03:17:20.662760 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] ************************************* 2026-03-25 03:17:20.662767 | orchestrator | Wednesday 25 March 2026 03:17:17 +0000 (0:00:02.500) 0:01:37.123 ******* 2026-03-25 03:17:20.662773 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-25 03:17:20.662781 | orchestrator | skipping: [testbed-node-0] 2026-03-25 03:17:20.662787 
| orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-25 03:17:20.662794 | orchestrator | skipping: [testbed-node-1] 2026-03-25 03:17:20.662801 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-25 03:17:20.662814 | orchestrator | skipping: [testbed-node-2] 2026-03-25 03:17:25.948886 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-25 03:17:25.949030 | orchestrator | skipping: [testbed-node-3] 2026-03-25 03:17:25.949048 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-25 03:17:25.949066 | orchestrator | skipping: [testbed-node-4] 2026-03-25 03:17:25.949108 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-25 03:17:25.949122 | orchestrator | skipping: [testbed-node-5] 2026-03-25 03:17:25.949131 | orchestrator | 2026-03-25 03:17:25.949141 | orchestrator | TASK [neutron : Copying over l3_agent.ini] ************************************* 2026-03-25 03:17:25.949151 | orchestrator | Wednesday 25 March 2026 03:17:20 +0000 (0:00:03.063) 0:01:40.187 ******* 2026-03-25 03:17:25.949162 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-25 03:17:25.949191 | orchestrator | skipping: [testbed-node-0] 2026-03-25 03:17:25.949201 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-25 03:17:25.949211 | orchestrator | skipping: [testbed-node-1] 2026-03-25 03:17:25.949220 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-25 03:17:25.949228 | orchestrator | skipping: [testbed-node-2] 2026-03-25 03:17:25.949260 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-25 03:17:25.949270 | orchestrator | skipping: [testbed-node-3] 2026-03-25 03:17:25.949279 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  
2026-03-25 03:17:25.949294 | orchestrator | skipping: [testbed-node-4] 2026-03-25 03:17:25.949304 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-25 03:17:25.949313 | orchestrator | skipping: [testbed-node-5] 2026-03-25 03:17:25.949322 | orchestrator | 2026-03-25 03:17:25.949331 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] ********************************* 2026-03-25 03:17:25.949340 | orchestrator | Wednesday 25 March 2026 03:17:23 +0000 (0:00:02.693) 0:01:42.881 ******* 2026-03-25 03:17:25.949349 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': 
'9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-25 03:17:25.949358 | orchestrator | skipping: [testbed-node-0] 2026-03-25 03:17:25.949371 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-25 03:17:25.949381 | orchestrator | skipping: [testbed-node-2] 2026-03-25 03:17:25.949398 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-25 03:17:57.042204 | orchestrator | skipping: [testbed-node-1] 2026-03-25 03:17:57.042320 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-25 03:17:57.042334 | orchestrator | skipping: [testbed-node-3] 2026-03-25 03:17:57.042341 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-25 03:17:57.042348 | orchestrator | skipping: 
[testbed-node-4] 2026-03-25 03:17:57.042360 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-25 03:17:57.042367 | orchestrator | skipping: [testbed-node-5] 2026-03-25 03:17:57.042374 | orchestrator | 2026-03-25 03:17:57.042380 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] ******************************* 2026-03-25 03:17:57.042389 | orchestrator | Wednesday 25 March 2026 03:17:25 +0000 (0:00:02.599) 0:01:45.481 ******* 2026-03-25 03:17:57.042394 | orchestrator | skipping: [testbed-node-0] 2026-03-25 03:17:57.042400 | orchestrator | skipping: [testbed-node-2] 2026-03-25 03:17:57.042406 | orchestrator | skipping: [testbed-node-1] 2026-03-25 03:17:57.042412 | orchestrator | skipping: [testbed-node-5] 2026-03-25 03:17:57.042420 | orchestrator | skipping: [testbed-node-4] 2026-03-25 03:17:57.042428 | orchestrator | skipping: [testbed-node-3] 2026-03-25 03:17:57.042435 | orchestrator | 2026-03-25 03:17:57.042457 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] ******************* 2026-03-25 03:17:57.042463 | orchestrator | Wednesday 25 March 2026 03:17:28 +0000 (0:00:02.203) 0:01:47.684 ******* 2026-03-25 03:17:57.042470 | orchestrator | skipping: [testbed-node-1] 2026-03-25 03:17:57.042477 | orchestrator | 
skipping: [testbed-node-0] 2026-03-25 03:17:57.042483 | orchestrator | skipping: [testbed-node-2] 2026-03-25 03:17:57.042490 | orchestrator | changed: [testbed-node-3] 2026-03-25 03:17:57.042496 | orchestrator | changed: [testbed-node-4] 2026-03-25 03:17:57.042503 | orchestrator | changed: [testbed-node-5] 2026-03-25 03:17:57.042516 | orchestrator | 2026-03-25 03:17:57.042523 | orchestrator | TASK [neutron : Copying over metering_agent.ini] ******************************* 2026-03-25 03:17:57.042549 | orchestrator | Wednesday 25 March 2026 03:17:32 +0000 (0:00:04.258) 0:01:51.943 ******* 2026-03-25 03:17:57.042556 | orchestrator | skipping: [testbed-node-1] 2026-03-25 03:17:57.042563 | orchestrator | skipping: [testbed-node-0] 2026-03-25 03:17:57.042570 | orchestrator | skipping: [testbed-node-2] 2026-03-25 03:17:57.042577 | orchestrator | skipping: [testbed-node-4] 2026-03-25 03:17:57.042584 | orchestrator | skipping: [testbed-node-3] 2026-03-25 03:17:57.042591 | orchestrator | skipping: [testbed-node-5] 2026-03-25 03:17:57.042597 | orchestrator | 2026-03-25 03:17:57.042603 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] ************************* 2026-03-25 03:17:57.042610 | orchestrator | Wednesday 25 March 2026 03:17:35 +0000 (0:00:03.004) 0:01:54.948 ******* 2026-03-25 03:17:57.042617 | orchestrator | skipping: [testbed-node-0] 2026-03-25 03:17:57.042624 | orchestrator | skipping: [testbed-node-2] 2026-03-25 03:17:57.042630 | orchestrator | skipping: [testbed-node-1] 2026-03-25 03:17:57.042637 | orchestrator | skipping: [testbed-node-4] 2026-03-25 03:17:57.042644 | orchestrator | skipping: [testbed-node-5] 2026-03-25 03:17:57.042650 | orchestrator | skipping: [testbed-node-3] 2026-03-25 03:17:57.042656 | orchestrator | 2026-03-25 03:17:57.042662 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] ********************************** 2026-03-25 03:17:57.042685 | orchestrator | Wednesday 25 March 2026 03:17:38 +0000 (0:00:02.601) 
0:01:57.549 ******* 2026-03-25 03:17:57.042693 | orchestrator | skipping: [testbed-node-0] 2026-03-25 03:17:57.042699 | orchestrator | skipping: [testbed-node-1] 2026-03-25 03:17:57.042706 | orchestrator | skipping: [testbed-node-3] 2026-03-25 03:17:57.042712 | orchestrator | skipping: [testbed-node-2] 2026-03-25 03:17:57.042717 | orchestrator | skipping: [testbed-node-4] 2026-03-25 03:17:57.042724 | orchestrator | skipping: [testbed-node-5] 2026-03-25 03:17:57.042731 | orchestrator | 2026-03-25 03:17:57.042737 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************ 2026-03-25 03:17:57.042745 | orchestrator | Wednesday 25 March 2026 03:17:40 +0000 (0:00:02.762) 0:02:00.312 ******* 2026-03-25 03:17:57.042751 | orchestrator | skipping: [testbed-node-0] 2026-03-25 03:17:57.042758 | orchestrator | skipping: [testbed-node-1] 2026-03-25 03:17:57.042765 | orchestrator | skipping: [testbed-node-2] 2026-03-25 03:17:57.042772 | orchestrator | skipping: [testbed-node-3] 2026-03-25 03:17:57.042778 | orchestrator | skipping: [testbed-node-5] 2026-03-25 03:17:57.042785 | orchestrator | skipping: [testbed-node-4] 2026-03-25 03:17:57.042791 | orchestrator | 2026-03-25 03:17:57.042797 | orchestrator | TASK [neutron : Copying over nsx.ini] ****************************************** 2026-03-25 03:17:57.042803 | orchestrator | Wednesday 25 March 2026 03:17:43 +0000 (0:00:02.646) 0:02:02.958 ******* 2026-03-25 03:17:57.042810 | orchestrator | skipping: [testbed-node-1] 2026-03-25 03:17:57.042816 | orchestrator | skipping: [testbed-node-2] 2026-03-25 03:17:57.042824 | orchestrator | skipping: [testbed-node-0] 2026-03-25 03:17:57.042830 | orchestrator | skipping: [testbed-node-3] 2026-03-25 03:17:57.042837 | orchestrator | skipping: [testbed-node-4] 2026-03-25 03:17:57.042844 | orchestrator | skipping: [testbed-node-5] 2026-03-25 03:17:57.042851 | orchestrator | 2026-03-25 03:17:57.042857 | orchestrator | TASK [neutron : Copy 
neutron-l3-agent-wrapper script] ************************** 2026-03-25 03:17:57.042864 | orchestrator | Wednesday 25 March 2026 03:17:46 +0000 (0:00:03.056) 0:02:06.015 ******* 2026-03-25 03:17:57.042870 | orchestrator | skipping: [testbed-node-0] 2026-03-25 03:17:57.042876 | orchestrator | skipping: [testbed-node-2] 2026-03-25 03:17:57.042883 | orchestrator | skipping: [testbed-node-1] 2026-03-25 03:17:57.042891 | orchestrator | skipping: [testbed-node-4] 2026-03-25 03:17:57.042898 | orchestrator | skipping: [testbed-node-3] 2026-03-25 03:17:57.042906 | orchestrator | skipping: [testbed-node-5] 2026-03-25 03:17:57.042912 | orchestrator | 2026-03-25 03:17:57.042920 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ******************************** 2026-03-25 03:17:57.042926 | orchestrator | Wednesday 25 March 2026 03:17:49 +0000 (0:00:02.933) 0:02:08.949 ******* 2026-03-25 03:17:57.042933 | orchestrator | skipping: [testbed-node-0] 2026-03-25 03:17:57.042947 | orchestrator | skipping: [testbed-node-1] 2026-03-25 03:17:57.042955 | orchestrator | skipping: [testbed-node-2] 2026-03-25 03:17:57.042962 | orchestrator | skipping: [testbed-node-3] 2026-03-25 03:17:57.042992 | orchestrator | skipping: [testbed-node-4] 2026-03-25 03:17:57.043000 | orchestrator | skipping: [testbed-node-5] 2026-03-25 03:17:57.043007 | orchestrator | 2026-03-25 03:17:57.043014 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] **************************** 2026-03-25 03:17:57.043020 | orchestrator | Wednesday 25 March 2026 03:17:52 +0000 (0:00:02.805) 0:02:11.754 ******* 2026-03-25 03:17:57.043026 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-25 03:17:57.043034 | orchestrator | skipping: [testbed-node-1] 2026-03-25 03:17:57.043041 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-25 03:17:57.043048 | orchestrator | 
skipping: [testbed-node-2] 2026-03-25 03:17:57.043054 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-25 03:17:57.043061 | orchestrator | skipping: [testbed-node-0] 2026-03-25 03:17:57.043068 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-25 03:17:57.043074 | orchestrator | skipping: [testbed-node-3] 2026-03-25 03:17:57.043081 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-25 03:17:57.043087 | orchestrator | skipping: [testbed-node-5] 2026-03-25 03:17:57.043094 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-25 03:17:57.043107 | orchestrator | skipping: [testbed-node-4] 2026-03-25 03:17:57.043114 | orchestrator | 2026-03-25 03:17:57.043120 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ******************************** 2026-03-25 03:17:57.043127 | orchestrator | Wednesday 25 March 2026 03:17:54 +0000 (0:00:02.081) 0:02:13.836 ******* 2026-03-25 03:17:57.043136 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-25 03:17:57.043144 | orchestrator | skipping: [testbed-node-0] 2026-03-25 03:17:57.043162 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-25 03:18:00.000724 | orchestrator | skipping: [testbed-node-1] 2026-03-25 03:18:00.000875 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-25 03:18:00.000897 | orchestrator | skipping: [testbed-node-2] 2026-03-25 03:18:00.000910 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-25 03:18:00.000923 | orchestrator | skipping: [testbed-node-3] 2026-03-25 03:18:00.000951 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-25 03:18:00.000963 | orchestrator | skipping: [testbed-node-4] 2026-03-25 03:18:00.001042 | orchestrator | skipping: 
[testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-25 03:18:00.001055 | orchestrator | skipping: [testbed-node-5] 2026-03-25 03:18:00.001067 | orchestrator | 2026-03-25 03:18:00.001080 | orchestrator | TASK [neutron : Check neutron containers] ************************************** 2026-03-25 03:18:00.001092 | orchestrator | Wednesday 25 March 2026 03:17:57 +0000 (0:00:02.723) 0:02:16.559 ******* 2026-03-25 03:18:00.001125 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9696', 'listen_port': '9696'}}}}) 2026-03-25 03:18:00.001148 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-25 03:18:00.001179 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-25 03:18:00.001191 | orchestrator | changed: [testbed-node-4] => 
(item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-25 03:18:00.001203 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-25 03:18:00.001229 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-25 03:20:12.589581 | orchestrator |
2026-03-25 03:20:12.589689 | orchestrator | TASK [neutron : include_tasks] *************************************************
2026-03-25 03:20:12.589701 | orchestrator | Wednesday 25 March 2026 03:17:59 +0000 (0:00:02.968) 0:02:19.528 *******
2026-03-25 03:20:12.589708 | orchestrator | skipping: [testbed-node-0]
2026-03-25 03:20:12.589717 | orchestrator | skipping: [testbed-node-1]
2026-03-25 03:20:12.589723 | orchestrator | skipping: [testbed-node-2]
2026-03-25 03:20:12.589731 | orchestrator | skipping: [testbed-node-3]
2026-03-25 03:20:12.589738 | orchestrator | skipping: [testbed-node-4]
2026-03-25 03:20:12.589745 | orchestrator | skipping: [testbed-node-5]
2026-03-25 03:20:12.589752 | orchestrator |
2026-03-25 03:20:12.589759 | orchestrator | TASK [neutron : Creating Neutron database] *************************************
2026-03-25 03:20:12.589766 | orchestrator | Wednesday 25 March 2026 03:18:00 +0000 (0:00:00.874) 0:02:20.403 *******
2026-03-25 03:20:12.589773 | orchestrator | changed: [testbed-node-0]
2026-03-25 03:20:12.589780 | orchestrator |
2026-03-25 03:20:12.589787 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ********
2026-03-25 03:20:12.589794 | orchestrator | Wednesday 25 March 2026 03:18:02 +0000 (0:00:02.047) 0:02:22.450 *******
2026-03-25 03:20:12.589801 | orchestrator | changed: [testbed-node-0]
2026-03-25 03:20:12.589808 | orchestrator |
2026-03-25 03:20:12.589815 | orchestrator | TASK [neutron : Running Neutron bootstrap container] ***************************
2026-03-25 03:20:12.589822 | orchestrator | Wednesday 25 March 2026 03:18:05 +0000 (0:00:02.260) 0:02:24.711 *******
2026-03-25 03:20:12.589828 | orchestrator | changed: [testbed-node-0]
2026-03-25 03:20:12.589834 | orchestrator |
2026-03-25 03:20:12.589841 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-03-25 03:20:12.589848 | orchestrator | Wednesday 25 March 2026 03:18:44 +0000 (0:00:39.777) 0:03:04.488 *******
2026-03-25 03:20:12.589855 | orchestrator |
2026-03-25 03:20:12.589861 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-03-25 03:20:12.589869 | orchestrator | Wednesday 25 March 2026 03:18:45 +0000 (0:00:00.081) 0:03:04.570 *******
2026-03-25 03:20:12.589873 | orchestrator |
2026-03-25 03:20:12.589877 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-03-25 03:20:12.589881 | orchestrator | Wednesday 25 March 2026 03:18:45 +0000 (0:00:00.077) 0:03:04.648 *******
2026-03-25 03:20:12.589884 | orchestrator |
2026-03-25 03:20:12.589889 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-03-25 03:20:12.589895 | orchestrator | Wednesday 25 March 2026 03:18:45 +0000 (0:00:00.078) 0:03:04.727 *******
2026-03-25 03:20:12.589901 | orchestrator |
2026-03-25 03:20:12.589923 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-03-25 03:20:12.589930 | orchestrator | Wednesday 25 March 2026 03:18:45 +0000 (0:00:00.094) 0:03:04.821 *******
2026-03-25 03:20:12.589936 | orchestrator |
2026-03-25 03:20:12.589943 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-03-25 03:20:12.589949 | orchestrator | Wednesday 25 March 2026 03:18:45 +0000 (0:00:00.078) 0:03:04.899 *******
2026-03-25 03:20:12.589956 | orchestrator |
2026-03-25 03:20:12.589962 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container]
*******************
2026-03-25 03:20:12.589969 | orchestrator | Wednesday 25 March 2026 03:18:45 +0000 (0:00:00.087) 0:03:04.987 *******
2026-03-25 03:20:12.590118 | orchestrator | changed: [testbed-node-0]
2026-03-25 03:20:12.590131 | orchestrator | changed: [testbed-node-2]
2026-03-25 03:20:12.590138 | orchestrator | changed: [testbed-node-1]
2026-03-25 03:20:12.590144 | orchestrator |
2026-03-25 03:20:12.590152 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] *******
2026-03-25 03:20:12.590159 | orchestrator | Wednesday 25 March 2026 03:19:09 +0000 (0:00:24.032) 0:03:29.019 *******
2026-03-25 03:20:12.590166 | orchestrator | changed: [testbed-node-3]
2026-03-25 03:20:12.590173 | orchestrator | changed: [testbed-node-5]
2026-03-25 03:20:12.590180 | orchestrator | changed: [testbed-node-4]
2026-03-25 03:20:12.590186 | orchestrator |
2026-03-25 03:20:12.590193 | orchestrator | PLAY RECAP *********************************************************************
2026-03-25 03:20:12.590202 | orchestrator | testbed-node-0 : ok=26  changed=15  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-03-25 03:20:12.590212 | orchestrator | testbed-node-1 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0
2026-03-25 03:20:12.590220 | orchestrator | testbed-node-2 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0
2026-03-25 03:20:12.590226 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-03-25 03:20:12.590233 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-03-25 03:20:12.590240 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-03-25 03:20:12.590247 | orchestrator |
2026-03-25 03:20:12.590254 | orchestrator |
2026-03-25 03:20:12.590260 | orchestrator | TASKS RECAP ********************************************************************
2026-03-25 03:20:12.590267 | orchestrator | Wednesday 25 March 2026 03:20:11 +0000 (0:01:02.495) 0:04:31.515 *******
2026-03-25 03:20:12.590274 | orchestrator | ===============================================================================
2026-03-25 03:20:12.590280 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 62.50s
2026-03-25 03:20:12.590287 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 39.78s
2026-03-25 03:20:12.590293 | orchestrator | neutron : Restart neutron-server container ----------------------------- 24.03s
2026-03-25 03:20:12.590318 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 7.46s
2026-03-25 03:20:12.590324 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 6.34s
2026-03-25 03:20:12.590331 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 6.27s
2026-03-25 03:20:12.590337 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 4.26s
2026-03-25 03:20:12.590343 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 3.88s
2026-03-25 03:20:12.590350 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 3.60s
2026-03-25 03:20:12.590357 | orchestrator | service-cert-copy : neutron | Copying over extra CA certificates -------- 3.51s
2026-03-25 03:20:12.590364 | orchestrator | service-ks-register : neutron | Creating services ----------------------- 3.31s
2026-03-25 03:20:12.590371 | orchestrator | neutron : Copying over config.json files for services ------------------- 3.27s
2026-03-25 03:20:12.590377 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS key ----- 3.27s
2026-03-25 03:20:12.590385 | orchestrator | neutron : Copying over dnsmasq.conf ------------------------------------- 3.06s
2026-03-25 03:20:12.590391 | orchestrator | neutron : Copying over sriov_agent.ini ---------------------------------- 3.06s
2026-03-25 03:20:12.590398 | orchestrator | neutron : Copying over nsx.ini ------------------------------------------ 3.06s
2026-03-25 03:20:12.590413 | orchestrator | neutron : Copying over metering_agent.ini ------------------------------- 3.00s
2026-03-25 03:20:12.590420 | orchestrator | service-ks-register : neutron | Creating roles -------------------------- 3.00s
2026-03-25 03:20:12.590426 | orchestrator | service-ks-register : neutron | Creating projects ----------------------- 3.00s
2026-03-25 03:20:12.590433 | orchestrator | neutron : Check neutron containers -------------------------------------- 2.97s
2026-03-25 03:20:15.289171 | orchestrator | 2026-03-25 03:20:15 | INFO  | Task 15896056-1cbe-40f5-9bb4-d95316e0f9c0 (nova) was prepared for execution.
2026-03-25 03:20:15.289265 | orchestrator | 2026-03-25 03:20:15 | INFO  | It takes a moment until task 15896056-1cbe-40f5-9bb4-d95316e0f9c0 (nova) has been started and output is visible here.
2026-03-25 03:22:08.653779 | orchestrator |
2026-03-25 03:22:08.653925 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-25 03:22:08.653937 | orchestrator |
2026-03-25 03:22:08.653943 | orchestrator | TASK [Group hosts based on OpenStack release] **********************************
2026-03-25 03:22:08.653949 | orchestrator | Wednesday 25 March 2026 03:20:20 +0000 (0:00:00.337) 0:00:00.337 *******
2026-03-25 03:22:08.653954 | orchestrator | changed: [testbed-manager]
2026-03-25 03:22:08.653959 | orchestrator | changed: [testbed-node-0]
2026-03-25 03:22:08.653964 | orchestrator | changed: [testbed-node-1]
2026-03-25 03:22:08.653968 | orchestrator | changed: [testbed-node-2]
2026-03-25 03:22:08.653973 | orchestrator | changed: [testbed-node-3]
2026-03-25 03:22:08.653977 | orchestrator | changed: [testbed-node-4]
2026-03-25 03:22:08.653982 | orchestrator | changed: [testbed-node-5]
2026-03-25 03:22:08.653986 | orchestrator |
2026-03-25 03:22:08.653991 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-25 03:22:08.653995 | orchestrator | Wednesday 25 March 2026 03:20:21 +0000 (0:00:00.949) 0:00:01.287 *******
2026-03-25 03:22:08.654000 | orchestrator | changed: [testbed-manager]
2026-03-25 03:22:08.654004 | orchestrator | changed: [testbed-node-0]
2026-03-25 03:22:08.654009 | orchestrator | changed: [testbed-node-1]
2026-03-25 03:22:08.654070 | orchestrator | changed: [testbed-node-2]
2026-03-25 03:22:08.654076 | orchestrator | changed: [testbed-node-3]
2026-03-25 03:22:08.654081 | orchestrator | changed: [testbed-node-4]
2026-03-25 03:22:08.654086 | orchestrator | changed: [testbed-node-5]
2026-03-25 03:22:08.654090 | orchestrator |
2026-03-25 03:22:08.654095 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-25 03:22:08.654100 | orchestrator | Wednesday 25 March 2026 03:20:22 +0000 (0:00:01.044) 0:00:02.332 *******
2026-03-25 03:22:08.654105 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True)
2026-03-25 03:22:08.654110 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True)
2026-03-25 03:22:08.654115 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True)
2026-03-25 03:22:08.654119 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True)
2026-03-25 03:22:08.654124 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True)
2026-03-25 03:22:08.654128 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True)
2026-03-25 03:22:08.654133 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True)
2026-03-25 03:22:08.654137 | orchestrator |
2026-03-25 03:22:08.654142 | orchestrator | PLAY [Bootstrap nova API databases] ********************************************
2026-03-25 03:22:08.654147 | orchestrator |
2026-03-25 03:22:08.654151 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2026-03-25 03:22:08.654156 | orchestrator | Wednesday 25 March 2026 03:20:23 +0000 (0:00:00.799) 0:00:03.132 *******
2026-03-25 03:22:08.654161 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-25 03:22:08.654165 | orchestrator |
2026-03-25 03:22:08.654170 | orchestrator | TASK [nova : Creating Nova databases] ******************************************
2026-03-25 03:22:08.654174 | orchestrator | Wednesday 25 March 2026 03:20:24 +0000 (0:00:00.881) 0:00:04.013 *******
2026-03-25 03:22:08.654180 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0)
2026-03-25 03:22:08.654201 | orchestrator | changed: [testbed-node-0] => (item=nova_api)
2026-03-25 03:22:08.654206 | orchestrator |
2026-03-25 03:22:08.654211 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] *************
2026-03-25 03:22:08.654215 | orchestrator | Wednesday 25 March 2026 03:20:27 +0000 (0:00:03.807) 0:00:07.821 *******
2026-03-25 03:22:08.654220 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-03-25 03:22:08.654225 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-03-25 03:22:08.654229 | orchestrator | changed: [testbed-node-0]
2026-03-25 03:22:08.654234 | orchestrator |
2026-03-25 03:22:08.654238 | orchestrator | TASK [nova : Ensuring config directories exist] ********************************
2026-03-25 03:22:08.654243 | orchestrator | Wednesday 25 March 2026 03:20:31 +0000 (0:00:03.878) 0:00:11.700 *******
2026-03-25 03:22:08.654248 | orchestrator | changed: [testbed-node-0]
2026-03-25 03:22:08.654252 | orchestrator |
2026-03-25 03:22:08.654257 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************
2026-03-25 03:22:08.654262 | orchestrator | Wednesday 25 March 2026 03:20:32 +0000 (0:00:00.667) 0:00:12.368 *******
2026-03-25 03:22:08.654266 | orchestrator | changed: [testbed-node-0]
2026-03-25 03:22:08.654270 | orchestrator |
2026-03-25 03:22:08.654275 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ********************
2026-03-25 03:22:08.654279 | orchestrator | Wednesday 25 March 2026 03:20:33 +0000 (0:00:01.261) 0:00:13.629 *******
2026-03-25 03:22:08.654284 | orchestrator | changed: [testbed-node-0]
2026-03-25 03:22:08.654288 | orchestrator |
2026-03-25 03:22:08.654293 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-03-25 03:22:08.654297 | orchestrator | Wednesday 25 March 2026 03:20:36 +0000 (0:00:02.746) 0:00:16.376 *******
2026-03-25 03:22:08.654302 | orchestrator | skipping: [testbed-node-0]
2026-03-25 03:22:08.654306 | orchestrator | skipping: [testbed-node-1]
2026-03-25 03:22:08.654310 | orchestrator | skipping: [testbed-node-2]
2026-03-25 03:22:08.654315 | orchestrator |
2026-03-25 03:22:08.654319 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2026-03-25 03:22:08.654324 | orchestrator | Wednesday 25 March 2026 03:20:36 +0000 (0:00:00.342) 0:00:16.718 *******
2026-03-25 03:22:08.654328 | orchestrator | ok: [testbed-node-0]
2026-03-25 03:22:08.654333 | orchestrator |
2026-03-25 03:22:08.654338 | orchestrator | TASK [nova : Create cell0 mappings] ********************************************
2026-03-25 03:22:08.654343 | orchestrator | Wednesday 25 March 2026 03:21:06 +0000 (0:00:29.833) 0:00:46.551 *******
2026-03-25 03:22:08.654349 | orchestrator | changed: [testbed-node-0]
2026-03-25 03:22:08.654354 | orchestrator |
2026-03-25 03:22:08.654359 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2026-03-25 03:22:08.654365 | orchestrator | Wednesday 25 March 2026 03:21:19 +0000 (0:00:12.991) 0:00:59.543 *******
2026-03-25 03:22:08.654370 | orchestrator | ok: [testbed-node-0]
2026-03-25 03:22:08.654375 | orchestrator |
2026-03-25 03:22:08.654380 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2026-03-25 03:22:08.654385 | orchestrator | Wednesday 25 March 2026 03:21:30 +0000 (0:00:11.220) 0:01:10.763 *******
2026-03-25 03:22:08.654404 | orchestrator | ok: [testbed-node-0]
2026-03-25 03:22:08.654410 | orchestrator |
2026-03-25 03:22:08.654419 | orchestrator | TASK [nova : Update cell0 mappings] ********************************************
2026-03-25 03:22:08.654425 | orchestrator | Wednesday 25 March 2026 03:21:31 +0000 (0:00:00.783) 0:01:11.547 *******
2026-03-25 03:22:08.654430 | orchestrator | skipping: [testbed-node-0]
2026-03-25 03:22:08.654435 | orchestrator |
2026-03-25 03:22:08.654440 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-03-25 03:22:08.654446 | orchestrator | Wednesday 25 March 2026 03:21:32 +0000 (0:00:00.504) 0:01:12.052 *******
2026-03-25 03:22:08.654452 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-25 03:22:08.654457 | orchestrator |
2026-03-25 03:22:08.654463 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2026-03-25 03:22:08.654473 | orchestrator | Wednesday 25 March 2026 03:21:32 +0000 (0:00:00.791) 0:01:12.843 *******
2026-03-25 03:22:08.654477 | orchestrator | ok: [testbed-node-0]
2026-03-25 03:22:08.654482 | orchestrator |
2026-03-25 03:22:08.654486 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2026-03-25 03:22:08.654491 | orchestrator | Wednesday 25 March 2026 03:21:49 +0000 (0:00:16.948) 0:01:29.791 *******
2026-03-25 03:22:08.654495 | orchestrator | skipping: [testbed-node-0]
2026-03-25 03:22:08.654500 | orchestrator | skipping: [testbed-node-1]
2026-03-25 03:22:08.654504 | orchestrator | skipping: [testbed-node-2]
2026-03-25 03:22:08.654509 | orchestrator |
2026-03-25 03:22:08.654513 | orchestrator | PLAY [Bootstrap nova cell databases] *******************************************
2026-03-25 03:22:08.654518 | orchestrator |
2026-03-25 03:22:08.654522 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2026-03-25 03:22:08.654527 | orchestrator | Wednesday 25 March 2026 03:21:50 +0000 (0:00:00.346) 0:01:30.138 *******
2026-03-25 03:22:08.654531 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-25 03:22:08.654536 | orchestrator |
2026-03-25 03:22:08.654540 | orchestrator | TASK [nova-cell : Creating Nova cell database] *********************************
2026-03-25 03:22:08.654545 | orchestrator | Wednesday 25 March 2026 03:21:51 +0000 (0:00:00.944) 0:01:31.082 *******
2026-03-25 03:22:08.654549 | orchestrator | skipping: [testbed-node-1]
2026-03-25 03:22:08.654553 | orchestrator | skipping: [testbed-node-2]
2026-03-25 03:22:08.654558 | orchestrator | changed: [testbed-node-0]
2026-03-25 03:22:08.654562 | orchestrator |
2026-03-25 03:22:08.654567 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] ****
2026-03-25 03:22:08.654571 | orchestrator | Wednesday 25 March 2026 03:21:53 +0000 (0:00:02.025) 0:01:33.108 *******
2026-03-25 03:22:08.654576 | orchestrator | skipping: [testbed-node-1]
2026-03-25 03:22:08.654580 | orchestrator | skipping: [testbed-node-2]
2026-03-25 03:22:08.654584 | orchestrator | changed: [testbed-node-0]
2026-03-25 03:22:08.654589 | orchestrator |
2026-03-25 03:22:08.654593 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2026-03-25 03:22:08.654598 | orchestrator | Wednesday 25 March 2026 03:21:55 +0000 (0:00:01.967) 0:01:35.075 *******
2026-03-25 03:22:08.654602 | orchestrator | skipping: [testbed-node-0]
2026-03-25 03:22:08.654607 | orchestrator | skipping: [testbed-node-1]
2026-03-25 03:22:08.654611 | orchestrator | skipping: [testbed-node-2]
2026-03-25 03:22:08.654616 | orchestrator |
2026-03-25 03:22:08.654620 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2026-03-25 03:22:08.654625 | orchestrator | Wednesday 25 March 2026 03:21:55 +0000 (0:00:00.625) 0:01:35.700 *******
2026-03-25 03:22:08.654629 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-03-25 03:22:08.654633 | orchestrator | skipping: [testbed-node-1]
2026-03-25 03:22:08.654638 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-03-25 03:22:08.654642 | orchestrator | skipping: [testbed-node-2]
2026-03-25 03:22:08.654647 | orchestrator | ok: [testbed-node-0] => (item=None)
2026-03-25 03:22:08.654652 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}]
2026-03-25 03:22:08.654656 | orchestrator |
2026-03-25 03:22:08.654660 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2026-03-25 03:22:08.654665 | orchestrator | Wednesday 25 March 2026 03:22:02 +0000 (0:00:07.097) 0:01:42.798 *******
2026-03-25 03:22:08.654669 | orchestrator | skipping: [testbed-node-0]
2026-03-25 03:22:08.654674 | orchestrator | skipping: [testbed-node-1]
2026-03-25 03:22:08.654678 | orchestrator | skipping: [testbed-node-2]
2026-03-25 03:22:08.654683 | orchestrator |
2026-03-25 03:22:08.654687 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2026-03-25 03:22:08.654692 | orchestrator | Wednesday 25 March 2026 03:22:03 +0000 (0:00:00.371) 0:01:43.170 *******
2026-03-25 03:22:08.654696 | orchestrator | skipping: [testbed-node-0] => (item=None)
2026-03-25 03:22:08.654701 | orchestrator | skipping: [testbed-node-0]
2026-03-25 03:22:08.654705 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-03-25 03:22:08.654714 | orchestrator | skipping: [testbed-node-1]
2026-03-25 03:22:08.654718 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-03-25 03:22:08.654722 | orchestrator | skipping: [testbed-node-2]
2026-03-25 03:22:08.654727 | orchestrator |
2026-03-25 03:22:08.654731 | orchestrator | TASK [nova-cell : Ensuring config directories exist] ***************************
2026-03-25 03:22:08.654736 | orchestrator | Wednesday 25 March 2026 03:22:04 +0000 (0:00:01.241) 0:01:44.411 *******
2026-03-25 03:22:08.654740 | orchestrator | skipping: [testbed-node-1]
2026-03-25 03:22:08.654745 | orchestrator | skipping: [testbed-node-2]
2026-03-25 03:22:08.654749 | orchestrator | changed: [testbed-node-0]
2026-03-25 03:22:08.654754 | orchestrator |
2026-03-25 03:22:08.654758 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ******
2026-03-25 03:22:08.654763 | orchestrator | Wednesday 25 March 2026 03:22:04 +0000 (0:00:00.518) 0:01:44.930 *******
2026-03-25 03:22:08.654767 | orchestrator | skipping: [testbed-node-1]
2026-03-25 03:22:08.654772 | orchestrator | skipping: [testbed-node-2]
2026-03-25 03:22:08.654776 | orchestrator | changed: [testbed-node-0]
2026-03-25 03:22:08.654781 | orchestrator |
2026-03-25 03:22:08.654785 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] **************
2026-03-25 03:22:08.654789 | orchestrator | Wednesday 25 March 2026 03:22:05 +0000 (0:00:00.980) 0:01:45.910 *******
2026-03-25 03:22:08.654795 | orchestrator | skipping: [testbed-node-1]
2026-03-25 03:22:08.654802 | orchestrator | skipping: [testbed-node-2]
2026-03-25 03:22:08.654813 | orchestrator | changed: [testbed-node-0]
2026-03-25 03:23:22.086432 | orchestrator |
2026-03-25 03:23:22.086547 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] ***********************
2026-03-25 03:23:22.086561 | orchestrator | Wednesday 25 March 2026 03:22:08 +0000 (0:00:02.702) 0:01:48.612 *******
2026-03-25 03:23:22.086568 | orchestrator | skipping: [testbed-node-1]
2026-03-25 03:23:22.086575 | orchestrator | skipping: [testbed-node-2]
2026-03-25 03:23:22.086582 | orchestrator | ok: [testbed-node-0]
2026-03-25 03:23:22.086590 | orchestrator |
2026-03-25 03:23:22.086597 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2026-03-25 03:23:22.086603 | orchestrator | Wednesday 25 March 2026 03:22:28 +0000 (0:00:19.515) 0:02:08.128 *******
2026-03-25 03:23:22.086611 | orchestrator | skipping: [testbed-node-1]
2026-03-25 03:23:22.086615 | orchestrator | skipping: [testbed-node-2]
2026-03-25 03:23:22.086619 | orchestrator | ok: [testbed-node-0]
2026-03-25 03:23:22.086623 | orchestrator |
2026-03-25 03:23:22.086627 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2026-03-25 03:23:22.086631 | orchestrator | Wednesday 25 March 2026 03:22:39 +0000 (0:00:11.234) 0:02:19.363 *******
2026-03-25 03:23:22.086635 | orchestrator | ok: [testbed-node-0]
2026-03-25 03:23:22.086639 | orchestrator | skipping: [testbed-node-1]
2026-03-25 03:23:22.086643 | orchestrator | skipping: [testbed-node-2]
2026-03-25 03:23:22.086647 | orchestrator |
2026-03-25 03:23:22.086650 | orchestrator | TASK [nova-cell : Create cell] *************************************************
2026-03-25 03:23:22.086654 | orchestrator | Wednesday 25 March 2026 03:22:40 +0000 (0:00:01.317) 0:02:20.681 *******
2026-03-25 03:23:22.086658 | orchestrator | skipping: [testbed-node-1]
2026-03-25 03:23:22.086662 | orchestrator | skipping: [testbed-node-2]
2026-03-25 03:23:22.086667 | orchestrator | changed: [testbed-node-0]
2026-03-25 03:23:22.086673 | orchestrator |
2026-03-25 03:23:22.086679 | orchestrator | TASK [nova-cell : Update cell] *************************************************
2026-03-25 03:23:22.086685 | orchestrator | Wednesday 25 March 2026 03:22:52 +0000 (0:00:11.413) 0:02:32.094 *******
2026-03-25 03:23:22.086695 | orchestrator | skipping: [testbed-node-0]
2026-03-25 03:23:22.086704 | orchestrator | skipping: [testbed-node-1]
2026-03-25 03:23:22.086709 | orchestrator | skipping: [testbed-node-2]
2026-03-25 03:23:22.086715 | orchestrator |
2026-03-25 03:23:22.086768 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2026-03-25 03:23:22.086777 | orchestrator | Wednesday 25 March 2026 03:22:53 +0000 (0:00:01.324) 0:02:33.419 *******
2026-03-25 03:23:22.086805 | orchestrator | skipping: [testbed-node-0]
2026-03-25 03:23:22.086810 | orchestrator | skipping: [testbed-node-1]
2026-03-25 03:23:22.086816 | orchestrator | skipping: [testbed-node-2]
2026-03-25 03:23:22.086822 | orchestrator |
2026-03-25 03:23:22.086828 | orchestrator | PLAY [Apply role nova] *********************************************************
2026-03-25 03:23:22.086834 | orchestrator |
2026-03-25 03:23:22.086839 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-03-25 03:23:22.086845 | orchestrator | Wednesday 25 March 2026 03:22:53 +0000 (0:00:00.410) 0:02:33.829 *******
2026-03-25 03:23:22.086892 |
orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-25 03:23:22.086899 | orchestrator | 2026-03-25 03:23:22.086904 | orchestrator | TASK [service-ks-register : nova | Creating services] ************************** 2026-03-25 03:23:22.086908 | orchestrator | Wednesday 25 March 2026 03:22:54 +0000 (0:00:00.861) 0:02:34.690 ******* 2026-03-25 03:23:22.086912 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))  2026-03-25 03:23:22.086916 | orchestrator | changed: [testbed-node-0] => (item=nova (compute)) 2026-03-25 03:23:22.086920 | orchestrator | 2026-03-25 03:23:22.086924 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] ************************* 2026-03-25 03:23:22.086928 | orchestrator | Wednesday 25 March 2026 03:22:57 +0000 (0:00:02.993) 0:02:37.684 ******* 2026-03-25 03:23:22.086932 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)  2026-03-25 03:23:22.086939 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)  2026-03-25 03:23:22.086943 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal) 2026-03-25 03:23:22.086947 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public) 2026-03-25 03:23:22.086951 | orchestrator | 2026-03-25 03:23:22.086955 | orchestrator | TASK [service-ks-register : nova | Creating projects] ************************** 2026-03-25 03:23:22.086959 | orchestrator | Wednesday 25 March 2026 03:23:03 +0000 (0:00:06.012) 0:02:43.697 ******* 2026-03-25 03:23:22.086963 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-25 03:23:22.086969 | orchestrator | 2026-03-25 03:23:22.086976 | orchestrator | TASK [service-ks-register : nova | Creating 
users] ***************************** 2026-03-25 03:23:22.086981 | orchestrator | Wednesday 25 March 2026 03:23:06 +0000 (0:00:03.212) 0:02:46.909 ******* 2026-03-25 03:23:22.086987 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-25 03:23:22.086993 | orchestrator | changed: [testbed-node-0] => (item=nova -> service) 2026-03-25 03:23:22.086999 | orchestrator | 2026-03-25 03:23:22.087005 | orchestrator | TASK [service-ks-register : nova | Creating roles] ***************************** 2026-03-25 03:23:22.087011 | orchestrator | Wednesday 25 March 2026 03:23:10 +0000 (0:00:03.625) 0:02:50.535 ******* 2026-03-25 03:23:22.087018 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-25 03:23:22.087025 | orchestrator | 2026-03-25 03:23:22.087032 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************ 2026-03-25 03:23:22.087039 | orchestrator | Wednesday 25 March 2026 03:23:13 +0000 (0:00:02.975) 0:02:53.511 ******* 2026-03-25 03:23:22.087045 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin) 2026-03-25 03:23:22.087051 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service) 2026-03-25 03:23:22.087058 | orchestrator | 2026-03-25 03:23:22.087063 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2026-03-25 03:23:22.087088 | orchestrator | Wednesday 25 March 2026 03:23:20 +0000 (0:00:07.164) 0:03:00.675 ******* 2026-03-25 03:23:22.087097 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-25 03:23:22.087115 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-25 03:23:22.087120 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-25 03:23:22.087134 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 
2026-03-25 03:23:26.869071 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-25 03:23:26.869151 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-25 03:23:26.869160 | orchestrator | 2026-03-25 03:23:26.869167 | orchestrator | TASK [nova : Check if policies shall be overwritten] *************************** 2026-03-25 03:23:26.869173 | orchestrator | Wednesday 25 March 2026 03:23:22 +0000 (0:00:01.371) 0:03:02.046 ******* 2026-03-25 03:23:26.869178 | orchestrator | skipping: [testbed-node-0] 2026-03-25 03:23:26.869184 | orchestrator | 2026-03-25 03:23:26.869189 | orchestrator | TASK [nova : Set nova policy file] ********************************************* 2026-03-25 03:23:26.869194 | orchestrator | Wednesday 25 March 2026 03:23:22 +0000 (0:00:00.156) 0:03:02.202 ******* 2026-03-25 03:23:26.869199 | orchestrator | skipping: [testbed-node-0] 2026-03-25 03:23:26.869208 | 
orchestrator | skipping: [testbed-node-1] 2026-03-25 03:23:26.869216 | orchestrator | skipping: [testbed-node-2] 2026-03-25 03:23:26.869224 | orchestrator | 2026-03-25 03:23:26.869232 | orchestrator | TASK [nova : Check for vendordata file] **************************************** 2026-03-25 03:23:26.869241 | orchestrator | Wednesday 25 March 2026 03:23:22 +0000 (0:00:00.354) 0:03:02.557 ******* 2026-03-25 03:23:26.869249 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-25 03:23:26.869257 | orchestrator | 2026-03-25 03:23:26.869265 | orchestrator | TASK [nova : Set vendordata file path] ***************************************** 2026-03-25 03:23:26.869274 | orchestrator | Wednesday 25 March 2026 03:23:23 +0000 (0:00:00.767) 0:03:03.324 ******* 2026-03-25 03:23:26.869281 | orchestrator | skipping: [testbed-node-0] 2026-03-25 03:23:26.869290 | orchestrator | skipping: [testbed-node-1] 2026-03-25 03:23:26.869295 | orchestrator | skipping: [testbed-node-2] 2026-03-25 03:23:26.869300 | orchestrator | 2026-03-25 03:23:26.869304 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-03-25 03:23:26.869309 | orchestrator | Wednesday 25 March 2026 03:23:23 +0000 (0:00:00.593) 0:03:03.918 ******* 2026-03-25 03:23:26.869315 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-25 03:23:26.869321 | orchestrator | 2026-03-25 03:23:26.869326 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-03-25 03:23:26.869331 | orchestrator | Wednesday 25 March 2026 03:23:24 +0000 (0:00:00.690) 0:03:04.609 ******* 2026-03-25 03:23:26.869353 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-25 03:23:26.869393 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-25 03:23:26.869400 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-25 03:23:26.869405 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-25 03:23:26.869410 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-25 03:23:26.869423 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-25 03:23:26.869429 | orchestrator | 2026-03-25 03:23:26.869438 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-03-25 03:23:28.781796 | orchestrator | Wednesday 25 March 2026 03:23:26 +0000 (0:00:02.219) 0:03:06.829 ******* 2026-03-25 03:23:28.781930 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': 
True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-25 03:23:28.781963 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-25 03:23:28.781984 | orchestrator | skipping: [testbed-node-0] 2026-03-25 03:23:28.782003 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 
'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-25 03:23:28.782127 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-25 03:23:28.782157 | orchestrator | skipping: [testbed-node-1] 2026-03-25 03:23:28.782194 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': 
True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-25 03:23:28.782206 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-25 03:23:28.782216 | orchestrator | skipping: [testbed-node-2] 2026-03-25 03:23:28.782226 | orchestrator | 2026-03-25 03:23:28.782237 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-03-25 03:23:28.782248 | orchestrator | Wednesday 25 March 2026 03:23:27 +0000 (0:00:01.014) 0:03:07.843 
******* 2026-03-25 03:23:28.782260 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-25 03:23:28.782282 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-25 03:23:28.782295 | orchestrator | skipping: [testbed-node-0] 
2026-03-25 03:23:28.782321 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-25 03:23:31.080740 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-25 03:23:31.080819 | orchestrator | skipping: [testbed-node-1]
2026-03-25 03:23:31.080830 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-25 03:23:31.080857 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-25 03:23:31.080862 | orchestrator | skipping: [testbed-node-2]
2026-03-25 03:23:31.080868 | orchestrator |
2026-03-25 03:23:31.080875 | orchestrator | TASK [nova : Copying over config.json files for services] **********************
2026-03-25 03:23:31.080881 | orchestrator | Wednesday 25 March 2026 03:23:28 +0000 (0:00:00.903) 0:03:08.747 *******
2026-03-25 03:23:31.080897 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-25 03:23:31.080917 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-25 03:23:31.080923 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-25 03:23:31.080934 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-25 03:23:31.080943 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-25 03:23:31.080952 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-25 03:23:38.238951 | orchestrator |
2026-03-25 03:23:38.239051 | orchestrator | TASK [nova : Copying over nova.conf] *******************************************
2026-03-25 03:23:38.239063 | orchestrator | Wednesday 25 March 2026 03:23:31 +0000 (0:00:02.295) 0:03:11.043 *******
2026-03-25 03:23:38.239074 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-25 03:23:38.239105 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-25 03:23:38.239130 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-25 03:23:38.239152 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-25 03:23:38.239161 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-25 03:23:38.239173 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-25 03:23:38.239179 | orchestrator |
2026-03-25 03:23:38.239185 | orchestrator | TASK [nova : Copying over existing policy file] ********************************
2026-03-25 03:23:38.239190 | orchestrator | Wednesday 25 March 2026 03:23:37 +0000 (0:00:06.447) 0:03:17.490 *******
2026-03-25 03:23:38.239201 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-25 03:23:38.239207 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-25 03:23:38.239214 | orchestrator | skipping: [testbed-node-0]
2026-03-25 03:23:38.239228 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-25 03:23:42.695227 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-25 03:23:42.695355 | orchestrator | skipping: [testbed-node-1]
2026-03-25 03:23:42.695376 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-25 03:23:42.695409 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-25 03:23:42.695422 | orchestrator | skipping: [testbed-node-2]
2026-03-25 03:23:42.695434 | orchestrator |
2026-03-25 03:23:42.695447 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] **********************************
2026-03-25 03:23:42.695461 | orchestrator | Wednesday 25 March 2026 03:23:38 +0000 (0:00:00.712) 0:03:18.203 *******
2026-03-25 03:23:42.695472 | orchestrator | changed: [testbed-node-0]
2026-03-25 03:23:42.695483 | orchestrator | changed: [testbed-node-2]
2026-03-25 03:23:42.695494 | orchestrator | changed: [testbed-node-1]
2026-03-25 03:23:42.695505 | orchestrator |
2026-03-25 03:23:42.695516 | orchestrator | TASK [nova : Copying over vendordata file] *************************************
2026-03-25 03:23:42.695527 | orchestrator | Wednesday 25 March 2026 03:23:39 +0000 (0:00:01.562) 0:03:19.765 *******
2026-03-25 03:23:42.695538 | orchestrator | skipping: [testbed-node-0]
2026-03-25 03:23:42.695549 | orchestrator | skipping: [testbed-node-1]
2026-03-25 03:23:42.695560 | orchestrator | skipping: [testbed-node-2]
2026-03-25 03:23:42.695571 | orchestrator |
2026-03-25 03:23:42.695582 | orchestrator | TASK [nova : Check nova containers] ********************************************
2026-03-25 03:23:42.695593 | orchestrator | Wednesday 25 March 2026 03:23:40 +0000 (0:00:00.348) 0:03:20.113 *******
2026-03-25 03:23:42.695626 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-25 03:23:42.695665 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-25 03:23:42.695686 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-25 03:23:42.695714 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-25 03:23:42.695735 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-25 03:23:42.695755 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-25 03:24:17.919185 | orchestrator |
2026-03-25 03:24:17.919288 | orchestrator | TASK [nova : Flush handlers] ***************************************************
2026-03-25 03:24:17.919301 | orchestrator | Wednesday 25 March 2026 03:23:42 +0000 (0:00:02.054) 0:03:22.168 *******
2026-03-25 03:24:17.919309 | orchestrator |
2026-03-25 03:24:17.919316 | orchestrator | TASK [nova : Flush handlers] ***************************************************
2026-03-25 03:24:17.919324 | orchestrator | Wednesday 25 March 2026 03:23:42 +0000 (0:00:00.163) 0:03:22.332 *******
2026-03-25 03:24:17.919331 | orchestrator |
2026-03-25 03:24:17.919339 | orchestrator | TASK [nova : Flush handlers] ***************************************************
2026-03-25 03:24:17.919347 | orchestrator | Wednesday 25 March 2026 03:23:42 +0000 (0:00:00.171) 0:03:22.503 *******
2026-03-25 03:24:17.919354 | orchestrator |
2026-03-25 03:24:17.919362 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] **********************
2026-03-25 03:24:17.919369 | orchestrator | Wednesday 25 March 2026 03:23:42 +0000 (0:00:00.149) 0:03:22.653 *******
2026-03-25 03:24:17.919376 | orchestrator | changed: [testbed-node-0]
2026-03-25 03:24:17.919385 | orchestrator | changed: [testbed-node-2]
2026-03-25 03:24:17.919392 | orchestrator | changed: [testbed-node-1]
2026-03-25 03:24:17.919399 | orchestrator |
2026-03-25 03:24:17.919407 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] ****************************
2026-03-25 03:24:17.919420 | orchestrator | Wednesday 25 March 2026 03:24:00 +0000 (0:00:17.341) 0:03:39.995 *******
2026-03-25 03:24:17.919437 | orchestrator | changed: [testbed-node-0]
2026-03-25 03:24:17.919452 | orchestrator | changed: [testbed-node-1]
2026-03-25 03:24:17.919463 | orchestrator | changed: [testbed-node-2]
2026-03-25 03:24:17.919474 | orchestrator |
2026-03-25 03:24:17.919486 | orchestrator | PLAY [Apply role nova-cell] ****************************************************
2026-03-25 03:24:17.919498 | orchestrator |
2026-03-25 03:24:17.919508 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2026-03-25 03:24:17.919520 | orchestrator | Wednesday 25 March 2026 03:24:05 +0000 (0:00:05.526) 0:03:45.521 *******
2026-03-25 03:24:17.919534 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-25 03:24:17.919546 | orchestrator |
2026-03-25 03:24:17.919560 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2026-03-25 03:24:17.919590 | orchestrator | Wednesday 25 March 2026 03:24:06 +0000 (0:00:01.435) 0:03:46.956 *******
2026-03-25 03:24:17.919603 | orchestrator | skipping: [testbed-node-3]
2026-03-25 03:24:17.919612 | orchestrator | skipping: [testbed-node-4]
2026-03-25 03:24:17.919619 | orchestrator | skipping: [testbed-node-5]
2026-03-25 03:24:17.919647 | orchestrator | skipping: [testbed-node-0]
2026-03-25 03:24:17.919684 | orchestrator | skipping: [testbed-node-1]
2026-03-25 03:24:17.919692 | orchestrator | skipping: [testbed-node-2]
2026-03-25 03:24:17.919699 | orchestrator |
2026-03-25 03:24:17.919706 | orchestrator | TASK [Load and persist br_netfilter module] ************************************
2026-03-25 03:24:17.919713 | orchestrator | Wednesday 25 March 2026 03:24:07 +0000 (0:00:00.871) 0:03:47.828 *******
2026-03-25 03:24:17.919720 | orchestrator | skipping: [testbed-node-0]
2026-03-25 03:24:17.919727 | orchestrator | skipping: [testbed-node-1]
2026-03-25 03:24:17.919734 | orchestrator | skipping: [testbed-node-2]
2026-03-25 03:24:17.919742 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-25 03:24:17.919750 | orchestrator |
2026-03-25 03:24:17.919757 | orchestrator | TASK [module-load : Load modules] **********************************************
2026-03-25 03:24:17.919764 | orchestrator | Wednesday 25 March 2026 03:24:08 +0000 (0:00:00.930) 0:03:48.759 *******
2026-03-25 03:24:17.919772 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter)
2026-03-25 03:24:17.919788 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter)
2026-03-25 03:24:17.919795 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter)
2026-03-25 03:24:17.919802 | orchestrator |
2026-03-25 03:24:17.919809 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2026-03-25 03:24:17.919816 | orchestrator | Wednesday 25 March 2026 03:24:09 +0000 (0:00:00.915) 0:03:49.674 *******
2026-03-25 03:24:17.919824 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter)
2026-03-25 03:24:17.919831 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter)
2026-03-25 03:24:17.919838 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter)
2026-03-25 03:24:17.919862 | orchestrator |
2026-03-25 03:24:17.919875 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2026-03-25 03:24:17.919886 | orchestrator | Wednesday 25 March 2026 03:24:10 +0000 (0:00:01.168) 0:03:50.843 *******
2026-03-25 03:24:17.919898 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)
2026-03-25 03:24:17.919907 | orchestrator | skipping: [testbed-node-3]
2026-03-25 03:24:17.919914 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)
2026-03-25 03:24:17.919921 | orchestrator | skipping: [testbed-node-4]
2026-03-25 03:24:17.919929 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)
2026-03-25 03:24:17.919936 | orchestrator | skipping: [testbed-node-5]
2026-03-25 03:24:17.919943 | orchestrator |
2026-03-25 03:24:17.919950 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] **********************
2026-03-25 03:24:17.919957 | orchestrator | Wednesday 25 March 2026 03:24:11 +0000 (0:00:00.575) 0:03:51.419 *******
2026-03-25 03:24:17.919964 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)
2026-03-25 03:24:17.919971 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)
2026-03-25 03:24:17.919978 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)
2026-03-25 03:24:17.919985 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-03-25 03:24:17.919992 | orchestrator | skipping: [testbed-node-0]
2026-03-25 03:24:17.919999 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)
2026-03-25 03:24:17.920006 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-03-25 03:24:17.920029 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)
2026-03-25 03:24:17.920037 | orchestrator | skipping: [testbed-node-1]
2026-03-25 03:24:17.920044 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-03-25 03:24:17.920051 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-03-25 03:24:17.920059 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)
2026-03-25 03:24:17.920066 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-03-25 03:24:17.920083 | orchestrator | skipping: [testbed-node-2]
2026-03-25 03:24:17.920091 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-03-25 03:24:17.920098 | orchestrator |
2026-03-25 03:24:17.920105 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ********************************
2026-03-25 03:24:17.920112 | orchestrator | Wednesday 25 March 2026 03:24:12 +0000 (0:00:01.294) 0:03:52.713 *******
2026-03-25 03:24:17.920120 | orchestrator | skipping: [testbed-node-0]
2026-03-25 03:24:17.920127 | orchestrator | skipping: [testbed-node-1]
2026-03-25 03:24:17.920134 | orchestrator | skipping: [testbed-node-2]
2026-03-25 03:24:17.920141 | orchestrator | changed: [testbed-node-3]
2026-03-25 03:24:17.920148 | orchestrator | changed: [testbed-node-4]
2026-03-25 03:24:17.920156 | orchestrator | changed: [testbed-node-5]
2026-03-25 03:24:17.920168 | orchestrator |
2026-03-25 03:24:17.920188 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] ***************************************
2026-03-25 03:24:17.920200 | orchestrator | Wednesday 25 March 2026 03:24:13 +0000 (0:00:01.124) 0:03:53.838 *******
2026-03-25 03:24:17.920211 | orchestrator | skipping: [testbed-node-0]
2026-03-25 03:24:17.920223 | orchestrator | skipping: [testbed-node-1]
2026-03-25 03:24:17.920235 | orchestrator | skipping: [testbed-node-2]
2026-03-25 03:24:17.920246 | orchestrator | changed: [testbed-node-3]
2026-03-25 03:24:17.920258 | orchestrator | changed: [testbed-node-5]
2026-03-25 03:24:17.920268 | orchestrator | changed: [testbed-node-4]
2026-03-25 03:24:17.920281 | orchestrator |
2026-03-25 03:24:17.920292 | orchestrator | TASK [nova-cell : Ensuring config directories exist] ***************************
2026-03-25 03:24:17.920304 | orchestrator | Wednesday 25 March 2026 03:24:15 +0000 (0:00:02.073) 0:03:55.911 *******
2026-03-25 03:24:17.920327 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-25 03:24:17.920348 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-25 03:24:17.920372 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-25 03:24:20.000216 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla',
'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-25 03:24:20.000314 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-25 03:24:20.000336 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-25 03:24:20.000344 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-25 03:24:20.000351 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-25 03:24:20.000356 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-25 03:24:20.000389 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-25 03:24:20.000395 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-25 03:24:20.000403 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-25 03:24:20.000408 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 
'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-25 03:24:20.000413 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-25 03:24:20.000418 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-25 03:24:20.000427 | orchestrator | 2026-03-25 03:24:20.000433 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-03-25 03:24:20.000439 | 
orchestrator | Wednesday 25 March 2026 03:24:18 +0000 (0:00:02.539) 0:03:58.450 ******* 2026-03-25 03:24:20.000444 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-25 03:24:20.000451 | orchestrator | 2026-03-25 03:24:20.000456 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-03-25 03:24:20.000464 | orchestrator | Wednesday 25 March 2026 03:24:19 +0000 (0:00:01.513) 0:03:59.964 ******* 2026-03-25 03:24:23.446195 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-25 03:24:23.446317 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-25 03:24:23.446332 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-25 03:24:23.446344 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': 
'30'}}}) 2026-03-25 03:24:23.446376 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-25 03:24:23.446405 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-25 03:24:23.446417 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-25 03:24:23.446432 | orchestrator | changed: [testbed-node-5] 
=> (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-25 03:24:23.446443 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-25 03:24:23.446453 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-25 03:24:23.446470 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-25 03:24:23.446487 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-25 03:24:25.604898 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-25 03:24:25.604970 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': 
{'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-25 03:24:25.604989 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-25 03:24:25.604994 | orchestrator | 2026-03-25 03:24:25.604999 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-03-25 03:24:25.605004 | orchestrator | Wednesday 25 March 2026 03:24:23 +0000 (0:00:03.898) 0:04:03.862 ******* 2026-03-25 03:24:25.605009 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 
'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-25 03:24:25.605030 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-25 03:24:25.605046 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 
'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-25 03:24:25.605051 | orchestrator | skipping: [testbed-node-3] 2026-03-25 03:24:25.605060 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-25 03:24:25.605064 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-25 03:24:25.605068 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 
'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-25 03:24:25.605076 | orchestrator | skipping: [testbed-node-4] 2026-03-25 03:24:25.605080 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-25 03:24:25.605087 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-25 03:24:27.900347 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-25 03:24:27.900430 | orchestrator | skipping: [testbed-node-5] 2026-03-25 03:24:27.900455 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-25 03:24:27.900464 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-25 03:24:27.900488 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-25 03:24:27.900495 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-25 03:24:27.900501 | orchestrator | skipping: [testbed-node-0] 2026-03-25 03:24:27.900506 | orchestrator | skipping: [testbed-node-1] 2026-03-25 
03:24:27.900512 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-25 03:24:27.900530 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-25 03:24:27.900537 | orchestrator | skipping: [testbed-node-2] 2026-03-25 03:24:27.900542 | orchestrator | 2026-03-25 03:24:27.900548 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-03-25 03:24:27.900556 | orchestrator | Wednesday 25 March 2026 03:24:25 +0000 (0:00:01.986) 0:04:05.848 ******* 2026-03-25 03:24:27.900566 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': 
['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-25 03:24:27.900579 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-25 03:24:27.900586 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
nova-compute 5672'], 'timeout': '30'}}})  2026-03-25 03:24:27.900592 | orchestrator | skipping: [testbed-node-3] 2026-03-25 03:24:27.900598 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-25 03:24:27.900610 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-25 03:24:32.953440 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 
'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-25 03:24:32.953519 | orchestrator | skipping: [testbed-node-4] 2026-03-25 03:24:32.953541 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-25 03:24:32.953547 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-25 03:24:32.953552 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-25 03:24:32.953556 | orchestrator | skipping: [testbed-node-5] 2026-03-25 03:24:32.953561 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-25 03:24:32.953579 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': 
['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-25 03:24:32.953583 | orchestrator | skipping: [testbed-node-0] 2026-03-25 03:24:32.953590 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-25 03:24:32.953598 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-25 03:24:32.953602 | orchestrator | skipping: [testbed-node-1] 2026-03-25 03:24:32.953606 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 
'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-25 03:24:32.953610 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-25 03:24:32.953614 | orchestrator | skipping: [testbed-node-2] 2026-03-25 03:24:32.953618 | orchestrator | 2026-03-25 03:24:32.953623 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-03-25 03:24:32.953628 | orchestrator | Wednesday 25 March 2026 03:24:28 +0000 (0:00:02.865) 0:04:08.714 ******* 2026-03-25 03:24:32.953632 | orchestrator | skipping: [testbed-node-0] 2026-03-25 03:24:32.953655 | orchestrator | skipping: [testbed-node-1] 2026-03-25 03:24:32.953659 | orchestrator | skipping: [testbed-node-2] 2026-03-25 03:24:32.953663 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-25 03:24:32.953667 | orchestrator | 2026-03-25 03:24:32.953671 | orchestrator | TASK [nova-cell : Check nova keyring file] ************************************* 2026-03-25 
03:24:32.953675 | orchestrator | Wednesday 25 March 2026 03:24:29 +0000 (0:00:01.089) 0:04:09.803 ******* 2026-03-25 03:24:32.953679 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-25 03:24:32.953683 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-03-25 03:24:32.953686 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-03-25 03:24:32.953691 | orchestrator | 2026-03-25 03:24:32.953694 | orchestrator | TASK [nova-cell : Check cinder keyring file] *********************************** 2026-03-25 03:24:32.953698 | orchestrator | Wednesday 25 March 2026 03:24:31 +0000 (0:00:01.477) 0:04:11.281 ******* 2026-03-25 03:24:32.953702 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-25 03:24:32.953705 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-03-25 03:24:32.953709 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-03-25 03:24:32.953713 | orchestrator | 2026-03-25 03:24:32.953716 | orchestrator | TASK [nova-cell : Extract nova key from file] ********************************** 2026-03-25 03:24:32.953720 | orchestrator | Wednesday 25 March 2026 03:24:32 +0000 (0:00:01.049) 0:04:12.331 ******* 2026-03-25 03:24:32.953728 | orchestrator | ok: [testbed-node-3] 2026-03-25 03:24:32.953733 | orchestrator | ok: [testbed-node-4] 2026-03-25 03:24:32.953737 | orchestrator | ok: [testbed-node-5] 2026-03-25 03:24:32.953741 | orchestrator | 2026-03-25 03:24:32.953748 | orchestrator | TASK [nova-cell : Extract cinder key from file] ******************************** 2026-03-25 03:24:55.302179 | orchestrator | Wednesday 25 March 2026 03:24:32 +0000 (0:00:00.587) 0:04:12.918 ******* 2026-03-25 03:24:55.302264 | orchestrator | ok: [testbed-node-3] 2026-03-25 03:24:55.302273 | orchestrator | ok: [testbed-node-4] 2026-03-25 03:24:55.302279 | orchestrator | ok: [testbed-node-5] 2026-03-25 03:24:55.302284 | orchestrator | 2026-03-25 03:24:55.302290 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] **************************** 
2026-03-25 03:24:55.302296 | orchestrator | Wednesday 25 March 2026 03:24:33 +0000 (0:00:00.617) 0:04:13.536 ******* 2026-03-25 03:24:55.302301 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-03-25 03:24:55.302307 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-03-25 03:24:55.302312 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-03-25 03:24:55.302316 | orchestrator | 2026-03-25 03:24:55.302322 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] ************************** 2026-03-25 03:24:55.302326 | orchestrator | Wednesday 25 March 2026 03:24:35 +0000 (0:00:01.454) 0:04:14.990 ******* 2026-03-25 03:24:55.302343 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-03-25 03:24:55.302348 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-03-25 03:24:55.302353 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-03-25 03:24:55.302358 | orchestrator | 2026-03-25 03:24:55.302362 | orchestrator | TASK [nova-cell : Copy over ceph.conf] ***************************************** 2026-03-25 03:24:55.302367 | orchestrator | Wednesday 25 March 2026 03:24:36 +0000 (0:00:01.226) 0:04:16.217 ******* 2026-03-25 03:24:55.302372 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-03-25 03:24:55.302377 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-03-25 03:24:55.302382 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-03-25 03:24:55.302386 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt) 2026-03-25 03:24:55.302391 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt) 2026-03-25 03:24:55.302396 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt) 2026-03-25 03:24:55.302401 | orchestrator | 2026-03-25 03:24:55.302405 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************ 2026-03-25 
03:24:55.302410 | orchestrator | Wednesday 25 March 2026 03:24:40 +0000 (0:00:03.963) 0:04:20.181 ******* 2026-03-25 03:24:55.302416 | orchestrator | skipping: [testbed-node-3] 2026-03-25 03:24:55.302422 | orchestrator | skipping: [testbed-node-4] 2026-03-25 03:24:55.302427 | orchestrator | skipping: [testbed-node-5] 2026-03-25 03:24:55.302432 | orchestrator | 2026-03-25 03:24:55.302437 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] ************************** 2026-03-25 03:24:55.302441 | orchestrator | Wednesday 25 March 2026 03:24:40 +0000 (0:00:00.299) 0:04:20.481 ******* 2026-03-25 03:24:55.302446 | orchestrator | skipping: [testbed-node-3] 2026-03-25 03:24:55.302451 | orchestrator | skipping: [testbed-node-4] 2026-03-25 03:24:55.302456 | orchestrator | skipping: [testbed-node-5] 2026-03-25 03:24:55.302461 | orchestrator | 2026-03-25 03:24:55.302466 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] ******************* 2026-03-25 03:24:55.302471 | orchestrator | Wednesday 25 March 2026 03:24:40 +0000 (0:00:00.454) 0:04:20.936 ******* 2026-03-25 03:24:55.302476 | orchestrator | changed: [testbed-node-3] 2026-03-25 03:24:55.302481 | orchestrator | changed: [testbed-node-4] 2026-03-25 03:24:55.302486 | orchestrator | changed: [testbed-node-5] 2026-03-25 03:24:55.302490 | orchestrator | 2026-03-25 03:24:55.302495 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] ************************* 2026-03-25 03:24:55.302500 | orchestrator | Wednesday 25 March 2026 03:24:42 +0000 (0:00:01.471) 0:04:22.407 ******* 2026-03-25 03:24:55.302506 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2026-03-25 03:24:55.302531 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2026-03-25 03:24:55.302536 | orchestrator | 
changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2026-03-25 03:24:55.302542 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2026-03-25 03:24:55.302547 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2026-03-25 03:24:55.302552 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2026-03-25 03:24:55.302557 | orchestrator | 2026-03-25 03:24:55.302561 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] ***************************** 2026-03-25 03:24:55.302566 | orchestrator | Wednesday 25 March 2026 03:24:45 +0000 (0:00:03.559) 0:04:25.967 ******* 2026-03-25 03:24:55.302571 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-03-25 03:24:55.302576 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-03-25 03:24:55.302581 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-25 03:24:55.302586 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-03-25 03:24:55.302590 | orchestrator | changed: [testbed-node-3] 2026-03-25 03:24:55.302595 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-03-25 03:24:55.302600 | orchestrator | changed: [testbed-node-4] 2026-03-25 03:24:55.302605 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-25 03:24:55.302655 | orchestrator | changed: [testbed-node-5] 2026-03-25 03:24:55.302660 | orchestrator | 2026-03-25 03:24:55.302665 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] ********************** 2026-03-25 03:24:55.302670 | orchestrator | Wednesday 25 March 2026 03:24:49 +0000 (0:00:03.528) 0:04:29.496 ******* 2026-03-25 03:24:55.302675 | 
orchestrator | skipping: [testbed-node-3] 2026-03-25 03:24:55.302680 | orchestrator | 2026-03-25 03:24:55.302696 | orchestrator | TASK [nova-cell : Set nova policy file] **************************************** 2026-03-25 03:24:55.302702 | orchestrator | Wednesday 25 March 2026 03:24:49 +0000 (0:00:00.150) 0:04:29.647 ******* 2026-03-25 03:24:55.302707 | orchestrator | skipping: [testbed-node-3] 2026-03-25 03:24:55.302712 | orchestrator | skipping: [testbed-node-4] 2026-03-25 03:24:55.302717 | orchestrator | skipping: [testbed-node-5] 2026-03-25 03:24:55.302722 | orchestrator | skipping: [testbed-node-0] 2026-03-25 03:24:55.302727 | orchestrator | skipping: [testbed-node-1] 2026-03-25 03:24:55.302732 | orchestrator | skipping: [testbed-node-2] 2026-03-25 03:24:55.302738 | orchestrator | 2026-03-25 03:24:55.302743 | orchestrator | TASK [nova-cell : Check for vendordata file] *********************************** 2026-03-25 03:24:55.302749 | orchestrator | Wednesday 25 March 2026 03:24:50 +0000 (0:00:00.886) 0:04:30.534 ******* 2026-03-25 03:24:55.302755 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-25 03:24:55.302761 | orchestrator | 2026-03-25 03:24:55.302766 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************ 2026-03-25 03:24:55.302772 | orchestrator | Wednesday 25 March 2026 03:24:51 +0000 (0:00:00.763) 0:04:31.298 ******* 2026-03-25 03:24:55.302781 | orchestrator | skipping: [testbed-node-3] 2026-03-25 03:24:55.302787 | orchestrator | skipping: [testbed-node-4] 2026-03-25 03:24:55.302793 | orchestrator | skipping: [testbed-node-5] 2026-03-25 03:24:55.302798 | orchestrator | skipping: [testbed-node-0] 2026-03-25 03:24:55.302803 | orchestrator | skipping: [testbed-node-1] 2026-03-25 03:24:55.302809 | orchestrator | skipping: [testbed-node-2] 2026-03-25 03:24:55.302814 | orchestrator | 2026-03-25 03:24:55.302820 | orchestrator | TASK [nova-cell : Copying over config.json files for services] ***************** 
2026-03-25 03:24:55.302826 | orchestrator | Wednesday 25 March 2026 03:24:52 +0000 (0:00:00.943) 0:04:32.242 ******* 2026-03-25 03:24:55.302839 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-25 03:24:55.302848 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-25 03:24:55.302854 | 
orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-25 03:24:55.302865 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-25 03:25:00.442911 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-25 03:25:00.443018 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-25 03:25:00.443028 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-25 03:25:00.443039 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-25 03:25:00.443047 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-25 03:25:00.443054 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-25 03:25:00.443074 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-25 03:25:00.443097 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-25 03:25:00.443141 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-25 03:25:00.443150 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 
'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-25 03:25:00.443158 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-25 03:25:00.443165 | orchestrator | 2026-03-25 03:25:00.443173 | orchestrator | TASK [nova-cell : Copying over nova.conf] ************************************** 2026-03-25 03:25:00.443181 | orchestrator | Wednesday 25 March 2026 03:24:55 +0000 (0:00:03.336) 0:04:35.578 ******* 2026-03-25 03:25:00.443190 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 
'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-25 03:25:02.587117 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-25 03:25:02.587269 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-25 03:25:02.587293 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 
'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-25 03:25:02.587312 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-25 03:25:02.587328 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-25 03:25:02.587368 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-25 03:25:02.587407 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-25 03:25:02.587426 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 
'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-25 03:25:02.587445 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-25 03:25:02.587462 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-25 03:25:02.587481 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 
'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-25 03:25:02.587509 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-25 03:25:21.842985 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-25 03:25:21.843092 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-25 03:25:21.843102 | orchestrator | 2026-03-25 03:25:21.843110 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] ******************* 2026-03-25 03:25:21.843117 | orchestrator | Wednesday 25 March 2026 03:25:02 +0000 (0:00:06.974) 0:04:42.552 ******* 2026-03-25 03:25:21.843122 | orchestrator | skipping: [testbed-node-3] 2026-03-25 03:25:21.843128 | orchestrator | skipping: [testbed-node-4] 2026-03-25 03:25:21.843134 | orchestrator | skipping: [testbed-node-5] 2026-03-25 03:25:21.843139 | orchestrator | skipping: [testbed-node-0] 2026-03-25 03:25:21.843144 | orchestrator | skipping: [testbed-node-1] 2026-03-25 03:25:21.843149 | orchestrator | skipping: [testbed-node-2] 2026-03-25 03:25:21.843154 | orchestrator | 2026-03-25 03:25:21.843159 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] ************************** 2026-03-25 03:25:21.843165 | orchestrator | Wednesday 25 March 2026 03:25:04 +0000 (0:00:01.481) 0:04:44.033 ******* 2026-03-25 03:25:21.843170 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-03-25 03:25:21.843176 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-03-25 03:25:21.843181 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-03-25 03:25:21.843186 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-03-25 03:25:21.843191 | orchestrator | skipping: [testbed-node-2] => 
(item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-03-25 03:25:21.843196 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-03-25 03:25:21.843202 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-03-25 03:25:21.843207 | orchestrator | skipping: [testbed-node-0] 2026-03-25 03:25:21.843213 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-03-25 03:25:21.843218 | orchestrator | skipping: [testbed-node-1] 2026-03-25 03:25:21.843223 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-03-25 03:25:21.843228 | orchestrator | skipping: [testbed-node-2] 2026-03-25 03:25:21.843233 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-03-25 03:25:21.843239 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-03-25 03:25:21.843262 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-03-25 03:25:21.843270 | orchestrator | 2026-03-25 03:25:21.843293 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] ******************************* 2026-03-25 03:25:21.843309 | orchestrator | Wednesday 25 March 2026 03:25:08 +0000 (0:00:04.033) 0:04:48.066 ******* 2026-03-25 03:25:21.843317 | orchestrator | skipping: [testbed-node-3] 2026-03-25 03:25:21.843325 | orchestrator | skipping: [testbed-node-4] 2026-03-25 03:25:21.843332 | orchestrator | skipping: [testbed-node-5] 2026-03-25 03:25:21.843340 | orchestrator | skipping: [testbed-node-0] 2026-03-25 03:25:21.843348 | orchestrator | skipping: [testbed-node-1] 2026-03-25 03:25:21.843356 | orchestrator | skipping: [testbed-node-2] 2026-03-25 03:25:21.843363 | orchestrator | 2026-03-25 03:25:21.843371 | orchestrator | TASK 
[nova-cell : Copying over libvirt SASL configuration] ********************* 2026-03-25 03:25:21.843379 | orchestrator | Wednesday 25 March 2026 03:25:08 +0000 (0:00:00.727) 0:04:48.794 ******* 2026-03-25 03:25:21.843387 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-03-25 03:25:21.843396 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-03-25 03:25:21.843404 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-03-25 03:25:21.843412 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-03-25 03:25:21.843438 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-03-25 03:25:21.843446 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-03-25 03:25:21.843461 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-03-25 03:25:21.843469 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-03-25 03:25:21.843477 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-03-25 03:25:21.843485 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-03-25 03:25:21.843493 | orchestrator | skipping: [testbed-node-0] 2026-03-25 03:25:21.843501 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-03-25 03:25:21.843509 | orchestrator | 
skipping: [testbed-node-1] 2026-03-25 03:25:21.843517 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-03-25 03:25:21.843525 | orchestrator | skipping: [testbed-node-2] 2026-03-25 03:25:21.843534 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-03-25 03:25:21.843543 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-03-25 03:25:21.843551 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-03-25 03:25:21.843560 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-03-25 03:25:21.843567 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-03-25 03:25:21.843576 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-03-25 03:25:21.843602 | orchestrator | 2026-03-25 03:25:21.843610 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] ********************************** 2026-03-25 03:25:21.843619 | orchestrator | Wednesday 25 March 2026 03:25:14 +0000 (0:00:05.523) 0:04:54.317 ******* 2026-03-25 03:25:21.843638 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-03-25 03:25:21.843647 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-03-25 03:25:21.843655 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-03-25 03:25:21.843664 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-25 03:25:21.843673 
| orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-25 03:25:21.843681 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-03-25 03:25:21.843690 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-25 03:25:21.843699 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-03-25 03:25:21.843708 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-25 03:25:21.843716 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-03-25 03:25:21.843724 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-25 03:25:21.843733 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-03-25 03:25:21.843742 | orchestrator | skipping: [testbed-node-2] 2026-03-25 03:25:21.843751 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-25 03:25:21.843759 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-25 03:25:21.843768 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-03-25 03:25:21.843777 | orchestrator | skipping: [testbed-node-1] 2026-03-25 03:25:21.843783 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-25 03:25:21.843789 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-03-25 03:25:21.843795 | orchestrator | skipping: [testbed-node-0] 2026-03-25 03:25:21.843801 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-25 03:25:21.843807 | orchestrator | changed: [testbed-node-3] => (item={'src': 
'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-25 03:25:21.843813 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-25 03:25:21.843819 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-25 03:25:21.843824 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-25 03:25:21.843845 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-25 03:25:26.871167 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-25 03:25:26.871244 | orchestrator | 2026-03-25 03:25:26.871263 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ****************************** 2026-03-25 03:25:26.871269 | orchestrator | Wednesday 25 March 2026 03:25:21 +0000 (0:00:07.470) 0:05:01.788 ******* 2026-03-25 03:25:26.871274 | orchestrator | skipping: [testbed-node-3] 2026-03-25 03:25:26.871280 | orchestrator | skipping: [testbed-node-4] 2026-03-25 03:25:26.871285 | orchestrator | skipping: [testbed-node-5] 2026-03-25 03:25:26.871290 | orchestrator | skipping: [testbed-node-0] 2026-03-25 03:25:26.871295 | orchestrator | skipping: [testbed-node-1] 2026-03-25 03:25:26.871300 | orchestrator | skipping: [testbed-node-2] 2026-03-25 03:25:26.871305 | orchestrator | 2026-03-25 03:25:26.871310 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] ********************* 2026-03-25 03:25:26.871314 | orchestrator | Wednesday 25 March 2026 03:25:22 +0000 (0:00:00.920) 0:05:02.709 ******* 2026-03-25 03:25:26.871319 | orchestrator | skipping: [testbed-node-3] 2026-03-25 03:25:26.871343 | orchestrator | skipping: [testbed-node-4] 2026-03-25 03:25:26.871348 | orchestrator | skipping: [testbed-node-5] 2026-03-25 03:25:26.871353 | orchestrator | skipping: [testbed-node-0] 2026-03-25 03:25:26.871359 | orchestrator | 
skipping: [testbed-node-1] 2026-03-25 03:25:26.871367 | orchestrator | skipping: [testbed-node-2] 2026-03-25 03:25:26.871375 | orchestrator | 2026-03-25 03:25:26.871381 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ****************** 2026-03-25 03:25:26.871385 | orchestrator | Wednesday 25 March 2026 03:25:23 +0000 (0:00:00.679) 0:05:03.388 ******* 2026-03-25 03:25:26.871390 | orchestrator | skipping: [testbed-node-0] 2026-03-25 03:25:26.871396 | orchestrator | changed: [testbed-node-3] 2026-03-25 03:25:26.871400 | orchestrator | skipping: [testbed-node-1] 2026-03-25 03:25:26.871405 | orchestrator | changed: [testbed-node-5] 2026-03-25 03:25:26.871410 | orchestrator | changed: [testbed-node-4] 2026-03-25 03:25:26.871414 | orchestrator | skipping: [testbed-node-2] 2026-03-25 03:25:26.871419 | orchestrator | 2026-03-25 03:25:26.871424 | orchestrator | TASK [nova-cell : Copying over existing policy file] *************************** 2026-03-25 03:25:26.871428 | orchestrator | Wednesday 25 March 2026 03:25:25 +0000 (0:00:02.150) 0:05:05.538 ******* 2026-03-25 03:25:26.871435 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 
'timeout': '30'}}})  2026-03-25 03:25:26.871444 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-25 03:25:26.871451 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-25 03:25:26.871459 | orchestrator | skipping: [testbed-node-3] 2026-03-25 03:25:26.871487 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-25 03:25:26.871504 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-25 03:25:26.871512 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-25 03:25:26.871519 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-25 03:25:26.871527 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-25 03:25:26.871541 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-25 03:25:30.563493 | orchestrator | skipping: [testbed-node-4] 2026-03-25 03:25:30.563677 | orchestrator | skipping: [testbed-node-5] 2026-03-25 03:25:30.563693 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-25 03:25:30.563703 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-25 03:25:30.563711 | orchestrator | skipping: [testbed-node-0] 2026-03-25 03:25:30.563717 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 
'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-25 03:25:30.563724 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-25 03:25:30.563731 | orchestrator | skipping: [testbed-node-1] 2026-03-25 03:25:30.563738 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-25 03:25:30.563745 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 
'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-25 03:25:30.563773 | orchestrator | skipping: [testbed-node-2] 2026-03-25 03:25:30.563779 | orchestrator | 2026-03-25 03:25:30.563787 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ****************** 2026-03-25 03:25:30.563795 | orchestrator | Wednesday 25 March 2026 03:25:27 +0000 (0:00:01.544) 0:05:07.083 ******* 2026-03-25 03:25:30.563801 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2026-03-25 03:25:30.563825 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2026-03-25 03:25:30.563837 | orchestrator | skipping: [testbed-node-3] 2026-03-25 03:25:30.563844 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2026-03-25 03:25:30.563850 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2026-03-25 03:25:30.563856 | orchestrator | skipping: [testbed-node-4] 2026-03-25 03:25:30.563862 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2026-03-25 03:25:30.563868 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2026-03-25 03:25:30.563874 | orchestrator | skipping: [testbed-node-5] 2026-03-25 03:25:30.563880 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2026-03-25 03:25:30.563886 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2026-03-25 03:25:30.563893 | orchestrator | skipping: [testbed-node-0] 2026-03-25 03:25:30.563899 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute) 
 2026-03-25 03:25:30.563905 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2026-03-25 03:25:30.563911 | orchestrator | skipping: [testbed-node-1] 2026-03-25 03:25:30.563916 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2026-03-25 03:25:30.563923 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2026-03-25 03:25:30.563930 | orchestrator | skipping: [testbed-node-2] 2026-03-25 03:25:30.563936 | orchestrator | 2026-03-25 03:25:30.563942 | orchestrator | TASK [nova-cell : Check nova-cell containers] ********************************** 2026-03-25 03:25:30.563949 | orchestrator | Wednesday 25 March 2026 03:25:28 +0000 (0:00:00.990) 0:05:08.074 ******* 2026-03-25 03:25:30.563958 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-25 03:25:30.563965 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': 
['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-25 03:25:30.563975 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-25 03:25:30.563999 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-25 03:25:32.854238 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-25 03:25:32.854313 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-25 03:25:32.854320 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-25 03:25:32.854324 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-25 03:25:32.854345 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-25 03:25:32.854351 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-25 03:25:32.854378 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-25 03:25:32.854383 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-25 03:25:32.854387 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-25 03:25:32.854390 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-25 03:25:32.854399 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-25 03:25:32.854403 | orchestrator | 2026-03-25 03:25:32.854408 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-03-25 03:25:32.854413 | orchestrator | Wednesday 25 March 2026 03:25:30 +0000 
(0:00:02.676) 0:05:10.750 ******* 2026-03-25 03:25:32.854417 | orchestrator | skipping: [testbed-node-3] 2026-03-25 03:25:32.854422 | orchestrator | skipping: [testbed-node-4] 2026-03-25 03:25:32.854426 | orchestrator | skipping: [testbed-node-5] 2026-03-25 03:25:32.854429 | orchestrator | skipping: [testbed-node-0] 2026-03-25 03:25:32.854433 | orchestrator | skipping: [testbed-node-1] 2026-03-25 03:25:32.854437 | orchestrator | skipping: [testbed-node-2] 2026-03-25 03:25:32.854440 | orchestrator | 2026-03-25 03:25:32.854444 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-03-25 03:25:32.854448 | orchestrator | Wednesday 25 March 2026 03:25:31 +0000 (0:00:00.920) 0:05:11.670 ******* 2026-03-25 03:25:32.854452 | orchestrator | 2026-03-25 03:25:32.854456 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-03-25 03:25:32.854459 | orchestrator | Wednesday 25 March 2026 03:25:31 +0000 (0:00:00.161) 0:05:11.832 ******* 2026-03-25 03:25:32.854463 | orchestrator | 2026-03-25 03:25:32.854467 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-03-25 03:25:32.854474 | orchestrator | Wednesday 25 March 2026 03:25:32 +0000 (0:00:00.150) 0:05:11.983 ******* 2026-03-25 03:25:32.854477 | orchestrator | 2026-03-25 03:25:32.854481 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-03-25 03:25:32.854488 | orchestrator | Wednesday 25 March 2026 03:25:32 +0000 (0:00:00.147) 0:05:12.131 ******* 2026-03-25 03:28:49.616209 | orchestrator | 2026-03-25 03:28:49.616303 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-03-25 03:28:49.616312 | orchestrator | Wednesday 25 March 2026 03:25:32 +0000 (0:00:00.170) 0:05:12.302 ******* 2026-03-25 03:28:49.616316 | orchestrator | 2026-03-25 03:28:49.616321 | orchestrator | TASK [nova-cell : 
Flush handlers] ********************************************** 2026-03-25 03:28:49.616325 | orchestrator | Wednesday 25 March 2026 03:25:32 +0000 (0:00:00.331) 0:05:12.633 ******* 2026-03-25 03:28:49.616329 | orchestrator | 2026-03-25 03:28:49.616333 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] ***************** 2026-03-25 03:28:49.616337 | orchestrator | Wednesday 25 March 2026 03:25:32 +0000 (0:00:00.177) 0:05:12.810 ******* 2026-03-25 03:28:49.616341 | orchestrator | changed: [testbed-node-0] 2026-03-25 03:28:49.616346 | orchestrator | changed: [testbed-node-1] 2026-03-25 03:28:49.616350 | orchestrator | changed: [testbed-node-2] 2026-03-25 03:28:49.616353 | orchestrator | 2026-03-25 03:28:49.616357 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] **************** 2026-03-25 03:28:49.616381 | orchestrator | Wednesday 25 March 2026 03:25:39 +0000 (0:00:07.039) 0:05:19.850 ******* 2026-03-25 03:28:49.616386 | orchestrator | changed: [testbed-node-0] 2026-03-25 03:28:49.616389 | orchestrator | changed: [testbed-node-1] 2026-03-25 03:28:49.616393 | orchestrator | changed: [testbed-node-2] 2026-03-25 03:28:49.616397 | orchestrator | 2026-03-25 03:28:49.616401 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] *********************** 2026-03-25 03:28:49.616423 | orchestrator | Wednesday 25 March 2026 03:25:59 +0000 (0:00:19.323) 0:05:39.173 ******* 2026-03-25 03:28:49.616428 | orchestrator | changed: [testbed-node-3] 2026-03-25 03:28:49.616431 | orchestrator | changed: [testbed-node-4] 2026-03-25 03:28:49.616435 | orchestrator | changed: [testbed-node-5] 2026-03-25 03:28:49.616439 | orchestrator | 2026-03-25 03:28:49.616443 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] ******************* 2026-03-25 03:28:49.616446 | orchestrator | Wednesday 25 March 2026 03:26:21 +0000 (0:00:22.076) 0:06:01.250 ******* 2026-03-25 03:28:49.616450 | orchestrator | 
changed: [testbed-node-3] 2026-03-25 03:28:49.616454 | orchestrator | changed: [testbed-node-4] 2026-03-25 03:28:49.616464 | orchestrator | changed: [testbed-node-5] 2026-03-25 03:28:49.616468 | orchestrator | 2026-03-25 03:28:49.616478 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] ************** 2026-03-25 03:28:49.616482 | orchestrator | Wednesday 25 March 2026 03:27:02 +0000 (0:00:41.535) 0:06:42.786 ******* 2026-03-25 03:28:49.616486 | orchestrator | FAILED - RETRYING: [testbed-node-4]: Checking libvirt container is ready (10 retries left). 2026-03-25 03:28:49.616491 | orchestrator | changed: [testbed-node-3] 2026-03-25 03:28:49.616495 | orchestrator | FAILED - RETRYING: [testbed-node-5]: Checking libvirt container is ready (10 retries left). 2026-03-25 03:28:49.616498 | orchestrator | changed: [testbed-node-4] 2026-03-25 03:28:49.616502 | orchestrator | changed: [testbed-node-5] 2026-03-25 03:28:49.616506 | orchestrator | 2026-03-25 03:28:49.616512 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] ************************* 2026-03-25 03:28:49.616521 | orchestrator | Wednesday 25 March 2026 03:27:09 +0000 (0:00:06.309) 0:06:49.095 ******* 2026-03-25 03:28:49.616529 | orchestrator | changed: [testbed-node-3] 2026-03-25 03:28:49.616534 | orchestrator | changed: [testbed-node-4] 2026-03-25 03:28:49.616540 | orchestrator | changed: [testbed-node-5] 2026-03-25 03:28:49.616546 | orchestrator | 2026-03-25 03:28:49.616551 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] ******************* 2026-03-25 03:28:49.616557 | orchestrator | Wednesday 25 March 2026 03:27:09 +0000 (0:00:00.830) 0:06:49.925 ******* 2026-03-25 03:28:49.616563 | orchestrator | changed: [testbed-node-3] 2026-03-25 03:28:49.616569 | orchestrator | changed: [testbed-node-4] 2026-03-25 03:28:49.616574 | orchestrator | changed: [testbed-node-5] 2026-03-25 03:28:49.616580 | orchestrator | 2026-03-25 03:28:49.616587 | 
orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] *** 2026-03-25 03:28:49.616593 | orchestrator | Wednesday 25 March 2026 03:27:36 +0000 (0:00:26.600) 0:07:16.526 ******* 2026-03-25 03:28:49.616600 | orchestrator | skipping: [testbed-node-3] 2026-03-25 03:28:49.616606 | orchestrator | 2026-03-25 03:28:49.616613 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] **** 2026-03-25 03:28:49.616617 | orchestrator | Wednesday 25 March 2026 03:27:36 +0000 (0:00:00.143) 0:07:16.670 ******* 2026-03-25 03:28:49.616621 | orchestrator | skipping: [testbed-node-3] 2026-03-25 03:28:49.616625 | orchestrator | skipping: [testbed-node-5] 2026-03-25 03:28:49.616628 | orchestrator | skipping: [testbed-node-1] 2026-03-25 03:28:49.616632 | orchestrator | skipping: [testbed-node-0] 2026-03-25 03:28:49.616636 | orchestrator | skipping: [testbed-node-2] 2026-03-25 03:28:49.616641 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left). 
2026-03-25 03:28:49.616647 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-03-25 03:28:49.616651 | orchestrator | 2026-03-25 03:28:49.616654 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] ************* 2026-03-25 03:28:49.616658 | orchestrator | Wednesday 25 March 2026 03:28:00 +0000 (0:00:23.852) 0:07:40.522 ******* 2026-03-25 03:28:49.616662 | orchestrator | skipping: [testbed-node-1] 2026-03-25 03:28:49.616665 | orchestrator | skipping: [testbed-node-5] 2026-03-25 03:28:49.616669 | orchestrator | skipping: [testbed-node-4] 2026-03-25 03:28:49.616673 | orchestrator | skipping: [testbed-node-0] 2026-03-25 03:28:49.616681 | orchestrator | skipping: [testbed-node-3] 2026-03-25 03:28:49.616685 | orchestrator | skipping: [testbed-node-2] 2026-03-25 03:28:49.616689 | orchestrator | 2026-03-25 03:28:49.616693 | orchestrator | TASK [nova-cell : Include discover_computes.yml] ******************************* 2026-03-25 03:28:49.616696 | orchestrator | Wednesday 25 March 2026 03:28:11 +0000 (0:00:11.202) 0:07:51.725 ******* 2026-03-25 03:28:49.616700 | orchestrator | skipping: [testbed-node-3] 2026-03-25 03:28:49.616704 | orchestrator | skipping: [testbed-node-1] 2026-03-25 03:28:49.616719 | orchestrator | skipping: [testbed-node-0] 2026-03-25 03:28:49.616723 | orchestrator | skipping: [testbed-node-5] 2026-03-25 03:28:49.616727 | orchestrator | skipping: [testbed-node-2] 2026-03-25 03:28:49.616731 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-4 2026-03-25 03:28:49.616736 | orchestrator | 2026-03-25 03:28:49.616752 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2026-03-25 03:28:49.616757 | orchestrator | Wednesday 25 March 2026 03:28:16 +0000 (0:00:04.929) 0:07:56.654 ******* 2026-03-25 03:28:49.616762 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-03-25 03:28:49.616766 | 
orchestrator | 2026-03-25 03:28:49.616771 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2026-03-25 03:28:49.616775 | orchestrator | Wednesday 25 March 2026 03:28:29 +0000 (0:00:13.199) 0:08:09.853 ******* 2026-03-25 03:28:49.616780 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-03-25 03:28:49.616784 | orchestrator | 2026-03-25 03:28:49.616788 | orchestrator | TASK [nova-cell : Fail if cell settings not found] ***************************** 2026-03-25 03:28:49.616793 | orchestrator | Wednesday 25 March 2026 03:28:31 +0000 (0:00:01.914) 0:08:11.768 ******* 2026-03-25 03:28:49.616799 | orchestrator | skipping: [testbed-node-4] 2026-03-25 03:28:49.616805 | orchestrator | 2026-03-25 03:28:49.616815 | orchestrator | TASK [nova-cell : Discover nova hosts] ***************************************** 2026-03-25 03:28:49.616822 | orchestrator | Wednesday 25 March 2026 03:28:33 +0000 (0:00:02.049) 0:08:13.817 ******* 2026-03-25 03:28:49.616828 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-03-25 03:28:49.616834 | orchestrator | 2026-03-25 03:28:49.616840 | orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************ 2026-03-25 03:28:49.616847 | orchestrator | Wednesday 25 March 2026 03:28:45 +0000 (0:00:11.205) 0:08:25.023 ******* 2026-03-25 03:28:49.616853 | orchestrator | ok: [testbed-node-3] 2026-03-25 03:28:49.616861 | orchestrator | ok: [testbed-node-4] 2026-03-25 03:28:49.616867 | orchestrator | ok: [testbed-node-5] 2026-03-25 03:28:49.616875 | orchestrator | ok: [testbed-node-0] 2026-03-25 03:28:49.616879 | orchestrator | ok: [testbed-node-1] 2026-03-25 03:28:49.616884 | orchestrator | ok: [testbed-node-2] 2026-03-25 03:28:49.616888 | orchestrator | 2026-03-25 03:28:49.616895 | orchestrator | PLAY [Refresh nova scheduler cell cache] *************************************** 2026-03-25 03:28:49.616902 | orchestrator | 2026-03-25 
03:28:49.616912 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] ***************************** 2026-03-25 03:28:49.616917 | orchestrator | Wednesday 25 March 2026 03:28:47 +0000 (0:00:01.986) 0:08:27.010 ******* 2026-03-25 03:28:49.616923 | orchestrator | changed: [testbed-node-0] 2026-03-25 03:28:49.616929 | orchestrator | changed: [testbed-node-1] 2026-03-25 03:28:49.616935 | orchestrator | changed: [testbed-node-2] 2026-03-25 03:28:49.616941 | orchestrator | 2026-03-25 03:28:49.616947 | orchestrator | PLAY [Reload global Nova super conductor services] ***************************** 2026-03-25 03:28:49.616952 | orchestrator | 2026-03-25 03:28:49.616958 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] *** 2026-03-25 03:28:49.616963 | orchestrator | Wednesday 25 March 2026 03:28:48 +0000 (0:00:01.010) 0:08:28.021 ******* 2026-03-25 03:28:49.616969 | orchestrator | skipping: [testbed-node-0] 2026-03-25 03:28:49.616976 | orchestrator | skipping: [testbed-node-1] 2026-03-25 03:28:49.616982 | orchestrator | skipping: [testbed-node-2] 2026-03-25 03:28:49.616988 | orchestrator | 2026-03-25 03:28:49.616994 | orchestrator | PLAY [Reload Nova cell services] *********************************************** 2026-03-25 03:28:49.617007 | orchestrator | 2026-03-25 03:28:49.617011 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] ********* 2026-03-25 03:28:49.617016 | orchestrator | Wednesday 25 March 2026 03:28:48 +0000 (0:00:00.865) 0:08:28.886 ******* 2026-03-25 03:28:49.617020 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)  2026-03-25 03:28:49.617025 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2026-03-25 03:28:49.617029 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2026-03-25 03:28:49.617033 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)  2026-03-25 03:28:49.617038 | orchestrator | 
skipping: [testbed-node-3] => (item=nova-serialproxy)
2026-03-25 03:28:49.617042 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)
2026-03-25 03:28:49.617046 | orchestrator | skipping: [testbed-node-3]
2026-03-25 03:28:49.617050 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)
2026-03-25 03:28:49.617054 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)
2026-03-25 03:28:49.617059 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)
2026-03-25 03:28:49.617063 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)
2026-03-25 03:28:49.617067 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)
2026-03-25 03:28:49.617071 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)
2026-03-25 03:28:49.617076 | orchestrator | skipping: [testbed-node-4]
2026-03-25 03:28:49.617080 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)
2026-03-25 03:28:49.617084 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)
2026-03-25 03:28:49.617089 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)
2026-03-25 03:28:49.617093 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)
2026-03-25 03:28:49.617097 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)
2026-03-25 03:28:49.617102 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)
2026-03-25 03:28:49.617106 | orchestrator | skipping: [testbed-node-5]
2026-03-25 03:28:49.617110 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)
2026-03-25 03:28:49.617114 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)
2026-03-25 03:28:49.617119 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)
2026-03-25 03:28:49.617123 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)
2026-03-25 03:28:49.617127 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)
2026-03-25 03:28:49.617132 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)
2026-03-25 03:28:49.617141 | orchestrator | skipping: [testbed-node-0]
2026-03-25 03:28:49.617145 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)
2026-03-25 03:28:49.617150 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)
2026-03-25 03:28:49.617154 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)
2026-03-25 03:28:49.617163 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)
2026-03-25 03:28:53.349890 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)
2026-03-25 03:28:53.349985 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)
2026-03-25 03:28:53.349997 | orchestrator | skipping: [testbed-node-1]
2026-03-25 03:28:53.350005 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)
2026-03-25 03:28:53.350054 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)
2026-03-25 03:28:53.350061 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)
2026-03-25 03:28:53.350066 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)
2026-03-25 03:28:53.350072 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)
2026-03-25 03:28:53.350079 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)
2026-03-25 03:28:53.350086 | orchestrator | skipping: [testbed-node-2]
2026-03-25 03:28:53.350115 | orchestrator |
2026-03-25 03:28:53.350123 | orchestrator | PLAY [Reload global Nova API services] *****************************************
2026-03-25 03:28:53.350130 | orchestrator |
2026-03-25 03:28:53.350137 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] ***************
2026-03-25 03:28:53.350143 | orchestrator | Wednesday 25 March 2026 03:28:50 +0000 (0:00:01.649) 0:08:30.535 *******
2026-03-25 03:28:53.350150 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)
2026-03-25 03:28:53.350157 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)
2026-03-25 03:28:53.350164 | orchestrator | skipping: [testbed-node-0]
2026-03-25 03:28:53.350171 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)
2026-03-25 03:28:53.350177 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)
2026-03-25 03:28:53.350183 | orchestrator | skipping: [testbed-node-1]
2026-03-25 03:28:53.350190 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)
2026-03-25 03:28:53.350196 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)
2026-03-25 03:28:53.350203 | orchestrator | skipping: [testbed-node-2]
2026-03-25 03:28:53.350209 | orchestrator |
2026-03-25 03:28:53.350216 | orchestrator | PLAY [Run Nova API online data migrations] *************************************
2026-03-25 03:28:53.350222 | orchestrator |
2026-03-25 03:28:53.350228 | orchestrator | TASK [nova : Run Nova API online database migrations] **************************
2026-03-25 03:28:53.350235 | orchestrator | Wednesday 25 March 2026 03:28:51 +0000 (0:00:00.697) 0:08:31.233 *******
2026-03-25 03:28:53.350241 | orchestrator | skipping: [testbed-node-0]
2026-03-25 03:28:53.350247 | orchestrator |
2026-03-25 03:28:53.350253 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************
2026-03-25 03:28:53.350260 | orchestrator |
2026-03-25 03:28:53.350266 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ********************
2026-03-25 03:28:53.350273 | orchestrator | Wednesday 25 March 2026 03:28:52 +0000 (0:00:01.032) 0:08:32.265 *******
2026-03-25 03:28:53.350279 | orchestrator | skipping: [testbed-node-0]
2026-03-25 03:28:53.350285 | orchestrator | skipping: [testbed-node-1]
2026-03-25 03:28:53.350292 | orchestrator | skipping: [testbed-node-2]
2026-03-25 03:28:53.350298 | orchestrator |
2026-03-25 03:28:53.350304 | orchestrator | PLAY RECAP *********************************************************************
2026-03-25 03:28:53.350311 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-25 03:28:53.350320 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=44  rescued=0 ignored=0
2026-03-25 03:28:53.350327 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0
2026-03-25 03:28:53.350333 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0
2026-03-25 03:28:53.350340 | orchestrator | testbed-node-3 : ok=38  changed=27  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0
2026-03-25 03:28:53.350346 | orchestrator | testbed-node-4 : ok=42  changed=27  unreachable=0 failed=0 skipped=18  rescued=0 ignored=0
2026-03-25 03:28:53.350352 | orchestrator | testbed-node-5 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0
2026-03-25 03:28:53.350395 | orchestrator |
2026-03-25 03:28:53.350404 | orchestrator |
2026-03-25 03:28:53.350410 | orchestrator | TASKS RECAP ********************************************************************
2026-03-25 03:28:53.350416 | orchestrator | Wednesday 25 March 2026 03:28:52 +0000 (0:00:00.521) 0:08:32.787 *******
2026-03-25 03:28:53.350423 | orchestrator | ===============================================================================
2026-03-25 03:28:53.350429 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 41.54s
2026-03-25 03:28:53.350442 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 29.83s
2026-03-25 03:28:53.350448 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 26.60s
2026-03-25 03:28:53.350455 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 23.85s
2026-03-25 03:28:53.350473 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 22.08s
2026-03-25 03:28:53.350481 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 19.52s
2026-03-25 03:28:53.350488 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 19.32s
2026-03-25 03:28:53.350494 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 17.34s
2026-03-25 03:28:53.350517 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 16.95s
2026-03-25 03:28:53.350523 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 13.20s
2026-03-25 03:28:53.350530 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 12.99s
2026-03-25 03:28:53.350536 | orchestrator | nova-cell : Create cell ------------------------------------------------ 11.41s
2026-03-25 03:28:53.350543 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 11.23s
2026-03-25 03:28:53.350549 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 11.22s
2026-03-25 03:28:53.350556 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 11.21s
2026-03-25 03:28:53.350562 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------ 11.20s
2026-03-25 03:28:53.350569 | orchestrator | nova-cell : Copying files for nova-ssh ---------------------------------- 7.47s
2026-03-25 03:28:53.350576 | orchestrator | service-ks-register : nova | Granting user roles ------------------------ 7.16s
2026-03-25 03:28:53.350582 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------- 7.10s
2026-03-25 03:28:53.350589 | orchestrator | nova-cell : Restart nova-conductor container ---------------------------- 7.04s
2026-03-25 03:28:56.169116 | orchestrator | 2026-03-25 03:28:56 | INFO  | Task 513cd317-8676-4f9b-a0a3-f394dd9c2948 (horizon) was prepared for execution.
2026-03-25 03:28:56.169194 | orchestrator | 2026-03-25 03:28:56 | INFO  | It takes a moment until task 513cd317-8676-4f9b-a0a3-f394dd9c2948 (horizon) has been started and output is visible here.
2026-03-25 03:29:04.316211 | orchestrator |
2026-03-25 03:29:04.316284 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-25 03:29:04.316290 | orchestrator |
2026-03-25 03:29:04.316295 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-25 03:29:04.316300 | orchestrator | Wednesday 25 March 2026 03:29:00 +0000 (0:00:00.316) 0:00:00.316 *******
2026-03-25 03:29:04.316304 | orchestrator | ok: [testbed-node-0]
2026-03-25 03:29:04.316308 | orchestrator | ok: [testbed-node-1]
2026-03-25 03:29:04.316312 | orchestrator | ok: [testbed-node-2]
2026-03-25 03:29:04.316316 | orchestrator |
2026-03-25 03:29:04.316320 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-25 03:29:04.316324 | orchestrator | Wednesday 25 March 2026 03:29:01 +0000 (0:00:00.371) 0:00:00.688 *******
2026-03-25 03:29:04.316328 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True)
2026-03-25 03:29:04.316333 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True)
2026-03-25 03:29:04.316337 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True)
2026-03-25 03:29:04.316341 | orchestrator |
2026-03-25 03:29:04.316345 | orchestrator | PLAY [Apply role horizon] ******************************************************
2026-03-25 03:29:04.316385 | orchestrator |
2026-03-25 03:29:04.316389 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-03-25 03:29:04.316393 | orchestrator | Wednesday 25 March 2026 03:29:01 +0000 (0:00:00.479) 0:00:01.167 *******
2026-03-25 03:29:04.316398 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-25 03:29:04.316421 | orchestrator |
2026-03-25 03:29:04.316426 | orchestrator | TASK [horizon : Ensuring config directories exist] *****************************
2026-03-25 03:29:04.316429 | orchestrator | Wednesday 25 March 2026 03:29:02 +0000 (0:00:00.589) 0:00:01.757 *******
2026-03-25 03:29:04.316447 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-03-25 03:29:04.316466 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-03-25 03:29:04.316478 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-03-25 03:29:04.316482 | orchestrator |
2026-03-25 03:29:04.316486 | orchestrator | TASK [horizon : Set empty custom policy] ***************************************
2026-03-25 03:29:04.316490 | orchestrator | Wednesday 25 March 2026 03:29:03 +0000 (0:00:01.214) 0:00:02.971 *******
2026-03-25 03:29:04.316494 | orchestrator | ok: [testbed-node-0]
2026-03-25 03:29:04.316497 | orchestrator | ok: [testbed-node-1]
2026-03-25 03:29:04.316501 | orchestrator | ok: [testbed-node-2]
2026-03-25 03:29:04.316505 | orchestrator |
2026-03-25 03:29:04.316509 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-03-25 03:29:04.316512 | orchestrator | Wednesday 25 March 2026 03:29:04 +0000 (0:00:00.544) 0:00:03.516 *******
2026-03-25 03:29:04.316519 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})
2026-03-25 03:29:10.996963 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})
2026-03-25 03:29:10.997054 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})
2026-03-25 03:29:10.997060 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})
2026-03-25 03:29:10.997065 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})
2026-03-25 03:29:10.997069 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})
2026-03-25 03:29:10.997088 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})
2026-03-25 03:29:10.997092 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})
2026-03-25 03:29:10.997096 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})
2026-03-25 03:29:10.997100 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})
2026-03-25 03:29:10.997104 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})
2026-03-25 03:29:10.997108 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})
2026-03-25 03:29:10.997111 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})
2026-03-25 03:29:10.997116 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})
2026-03-25 03:29:10.997119 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})
2026-03-25 03:29:10.997123 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})
2026-03-25 03:29:10.997127 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})
2026-03-25 03:29:10.997131 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})
2026-03-25 03:29:10.997134 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})
2026-03-25 03:29:10.997138 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})
2026-03-25 03:29:10.997142 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})
2026-03-25 03:29:10.997145 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})
2026-03-25 03:29:10.997149 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})
2026-03-25 03:29:10.997153 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})
2026-03-25 03:29:10.997158 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'})
2026-03-25 03:29:10.997167 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'})
2026-03-25 03:29:10.997173 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True})
2026-03-25 03:29:10.997192 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True})
2026-03-25 03:29:10.997202 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True})
2026-03-25 03:29:10.997209 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True})
2026-03-25 03:29:10.997216 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True})
2026-03-25 03:29:10.997222 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True})
2026-03-25 03:29:10.997229 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True})
2026-03-25 03:29:10.997237 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True})
2026-03-25 03:29:10.997243 | orchestrator |
2026-03-25 03:29:10.997256 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-03-25 03:29:10.997264 | orchestrator | Wednesday 25 March 2026 03:29:05 +0000 (0:00:00.849) 0:00:04.366 *******
2026-03-25 03:29:10.997270 | orchestrator | ok: [testbed-node-0]
2026-03-25 03:29:10.997277 | orchestrator | ok: [testbed-node-1]
2026-03-25 03:29:10.997284 | orchestrator | ok: [testbed-node-2]
2026-03-25 03:29:10.997291 | orchestrator |
2026-03-25 03:29:10.997297 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-03-25 03:29:10.997304 | orchestrator | Wednesday 25 March 2026 03:29:05 +0000 (0:00:00.342) 0:00:04.709 *******
2026-03-25 03:29:10.997311 | orchestrator | skipping: [testbed-node-0]
2026-03-25 03:29:10.997316 | orchestrator |
2026-03-25 03:29:10.997332 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-03-25 03:29:10.997336 | orchestrator | Wednesday 25 March 2026 03:29:05 +0000 (0:00:00.334) 0:00:05.043 *******
2026-03-25 03:29:10.997393 | orchestrator | skipping: [testbed-node-0]
2026-03-25 03:29:10.997399 | orchestrator | skipping: [testbed-node-1]
2026-03-25 03:29:10.997403 | orchestrator | skipping: [testbed-node-2]
2026-03-25 03:29:10.997407 | orchestrator |
2026-03-25 03:29:10.997411 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-03-25 03:29:10.997415 | orchestrator | Wednesday 25 March 2026 03:29:06 +0000 (0:00:00.330) 0:00:05.374 *******
2026-03-25 03:29:10.997418 | orchestrator | ok: [testbed-node-0]
2026-03-25 03:29:10.997422 | orchestrator | ok: [testbed-node-1]
2026-03-25 03:29:10.997426 | orchestrator | ok: [testbed-node-2]
2026-03-25 03:29:10.997429 | orchestrator |
2026-03-25 03:29:10.997433 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-03-25 03:29:10.997437 | orchestrator | Wednesday 25 March 2026 03:29:06 +0000 (0:00:00.349) 0:00:05.723 *******
2026-03-25 03:29:10.997441 | orchestrator | skipping: [testbed-node-0]
2026-03-25 03:29:10.997444 | orchestrator |
2026-03-25 03:29:10.997448 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-03-25 03:29:10.997452 | orchestrator | Wednesday 25 March 2026 03:29:06 +0000 (0:00:00.149) 0:00:05.873 *******
2026-03-25 03:29:10.997456 | orchestrator | skipping: [testbed-node-0]
2026-03-25 03:29:10.997460 | orchestrator | skipping: [testbed-node-1]
2026-03-25 03:29:10.997463 | orchestrator | skipping: [testbed-node-2]
2026-03-25 03:29:10.997467 | orchestrator |
2026-03-25 03:29:10.997471 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-03-25 03:29:10.997474 | orchestrator | Wednesday 25 March 2026 03:29:06 +0000 (0:00:00.319) 0:00:06.193 *******
2026-03-25 03:29:10.997478 | orchestrator | ok: [testbed-node-0]
2026-03-25 03:29:10.997482 | orchestrator | ok: [testbed-node-1]
2026-03-25 03:29:10.997486 | orchestrator | ok: [testbed-node-2]
2026-03-25 03:29:10.997489 | orchestrator |
2026-03-25 03:29:10.997493 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-03-25 03:29:10.997497 | orchestrator | Wednesday 25 March 2026 03:29:07 +0000 (0:00:00.632) 0:00:06.825 *******
2026-03-25 03:29:10.997500 | orchestrator | skipping: [testbed-node-0]
2026-03-25 03:29:10.997504 | orchestrator |
2026-03-25 03:29:10.997508 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-03-25 03:29:10.997512 | orchestrator | Wednesday 25 March 2026 03:29:07 +0000 (0:00:00.152) 0:00:06.978 *******
2026-03-25 03:29:10.997515 | orchestrator | skipping: [testbed-node-0]
2026-03-25 03:29:10.997519 | orchestrator | skipping: [testbed-node-1]
2026-03-25 03:29:10.997523 | orchestrator | skipping: [testbed-node-2]
2026-03-25 03:29:10.997527 | orchestrator |
2026-03-25 03:29:10.997531 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-03-25 03:29:10.997535 | orchestrator | Wednesday 25 March 2026 03:29:07 +0000 (0:00:00.337) 0:00:07.315 *******
2026-03-25 03:29:10.997540 | orchestrator | ok: [testbed-node-0]
2026-03-25 03:29:10.997544 | orchestrator | ok: [testbed-node-1]
2026-03-25 03:29:10.997548 | orchestrator | ok: [testbed-node-2]
2026-03-25 03:29:10.997552 | orchestrator |
2026-03-25 03:29:10.997556 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-03-25 03:29:10.997565 | orchestrator | Wednesday 25 March 2026 03:29:08 +0000 (0:00:00.344) 0:00:07.660 *******
2026-03-25 03:29:10.997569 | orchestrator | skipping: [testbed-node-0]
2026-03-25 03:29:10.997573 | orchestrator |
2026-03-25 03:29:10.997578 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-03-25 03:29:10.997582 | orchestrator | Wednesday 25 March 2026 03:29:08 +0000 (0:00:00.128) 0:00:07.789 *******
2026-03-25 03:29:10.997586 | orchestrator | skipping: [testbed-node-0]
2026-03-25 03:29:10.997591 | orchestrator | skipping: [testbed-node-1]
2026-03-25 03:29:10.997597 | orchestrator | skipping: [testbed-node-2]
2026-03-25 03:29:10.997605 | orchestrator |
2026-03-25 03:29:10.997620 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-03-25 03:29:10.997626 | orchestrator | Wednesday 25 March 2026 03:29:09 +0000 (0:00:00.618) 0:00:08.408 *******
2026-03-25 03:29:10.997632 | orchestrator | ok: [testbed-node-0]
2026-03-25 03:29:10.997638 | orchestrator | ok: [testbed-node-1]
2026-03-25 03:29:10.997646 | orchestrator | ok: [testbed-node-2]
2026-03-25 03:29:10.997651 | orchestrator |
2026-03-25 03:29:10.997655 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-03-25 03:29:10.997659 | orchestrator | Wednesday 25 March 2026 03:29:09 +0000 (0:00:00.386) 0:00:08.794 *******
2026-03-25 03:29:10.997663 | orchestrator | skipping: [testbed-node-0]
2026-03-25 03:29:10.997667 | orchestrator |
2026-03-25 03:29:10.997672 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-03-25 03:29:10.997676 | orchestrator | Wednesday 25 March 2026 03:29:09 +0000 (0:00:00.135) 0:00:08.930 *******
2026-03-25 03:29:10.997680 | orchestrator | skipping: [testbed-node-0]
2026-03-25 03:29:10.997685 | orchestrator | skipping: [testbed-node-1]
2026-03-25 03:29:10.997689 | orchestrator | skipping: [testbed-node-2]
2026-03-25 03:29:10.997693 | orchestrator |
2026-03-25 03:29:10.997698 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-03-25 03:29:10.997702 | orchestrator | Wednesday 25 March 2026 03:29:09 +0000 (0:00:00.319) 0:00:09.249 *******
2026-03-25 03:29:10.997706 | orchestrator | ok: [testbed-node-0]
2026-03-25 03:29:10.997710 | orchestrator | ok: [testbed-node-1]
2026-03-25 03:29:10.997715 | orchestrator | ok: [testbed-node-2]
2026-03-25 03:29:10.997719 | orchestrator |
2026-03-25 03:29:10.997723 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-03-25 03:29:10.997727 | orchestrator | Wednesday 25 March 2026 03:29:10 +0000 (0:00:00.336) 0:00:09.585 *******
2026-03-25 03:29:10.997732 | orchestrator | skipping: [testbed-node-0]
2026-03-25 03:29:10.997736 | orchestrator |
2026-03-25 03:29:10.997741 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-03-25 03:29:10.997745 | orchestrator | Wednesday 25 March 2026 03:29:10 +0000 (0:00:00.366) 0:00:09.951 *******
2026-03-25 03:29:10.997749 | orchestrator | skipping: [testbed-node-0]
2026-03-25 03:29:10.997754 | orchestrator | skipping: [testbed-node-1]
2026-03-25 03:29:10.997758 | orchestrator | skipping: [testbed-node-2]
2026-03-25 03:29:10.997762 | orchestrator |
2026-03-25 03:29:10.997766 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-03-25 03:29:10.997776 | orchestrator | Wednesday 25 March 2026 03:29:10 +0000 (0:00:00.367) 0:00:10.319 *******
2026-03-25 03:29:25.962473 | orchestrator | ok: [testbed-node-0]
2026-03-25 03:29:25.962599 | orchestrator | ok: [testbed-node-1]
2026-03-25 03:29:25.962615 | orchestrator | ok: [testbed-node-2]
2026-03-25 03:29:25.962628 | orchestrator |
2026-03-25 03:29:25.962642 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-03-25 03:29:25.962654 | orchestrator | Wednesday 25 March 2026 03:29:11 +0000 (0:00:00.364) 0:00:10.683 *******
2026-03-25 03:29:25.962666 | orchestrator | skipping: [testbed-node-0]
2026-03-25 03:29:25.962677 | orchestrator |
2026-03-25 03:29:25.962689 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-03-25 03:29:25.962702 | orchestrator | Wednesday 25 March 2026 03:29:11 +0000 (0:00:00.144) 0:00:10.827 *******
2026-03-25 03:29:25.962741 | orchestrator | skipping: [testbed-node-0]
2026-03-25 03:29:25.962755 | orchestrator | skipping: [testbed-node-1]
2026-03-25 03:29:25.962767 | orchestrator | skipping: [testbed-node-2]
2026-03-25 03:29:25.962780 | orchestrator |
2026-03-25 03:29:25.962793 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-03-25 03:29:25.962806 | orchestrator | Wednesday 25 March 2026 03:29:11 +0000 (0:00:00.348) 0:00:11.176 *******
2026-03-25 03:29:25.962817 | orchestrator | ok: [testbed-node-0]
2026-03-25 03:29:25.962830 | orchestrator | ok: [testbed-node-1]
2026-03-25 03:29:25.962842 | orchestrator | ok: [testbed-node-2]
2026-03-25 03:29:25.962855 | orchestrator |
2026-03-25 03:29:25.962867 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-03-25 03:29:25.962879 | orchestrator | Wednesday 25 March 2026 03:29:12 +0000 (0:00:00.593) 0:00:11.770 *******
2026-03-25 03:29:25.962891 | orchestrator | skipping: [testbed-node-0]
2026-03-25 03:29:25.962902 | orchestrator |
2026-03-25 03:29:25.962915 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-03-25 03:29:25.962928 | orchestrator | Wednesday 25 March 2026 03:29:12 +0000 (0:00:00.140) 0:00:11.910 *******
2026-03-25 03:29:25.962940 | orchestrator | skipping: [testbed-node-0]
2026-03-25 03:29:25.962953 | orchestrator | skipping: [testbed-node-1]
2026-03-25 03:29:25.962965 | orchestrator | skipping: [testbed-node-2]
2026-03-25 03:29:25.962977 | orchestrator |
2026-03-25 03:29:25.962990 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-03-25 03:29:25.963004 | orchestrator | Wednesday 25 March 2026 03:29:12 +0000 (0:00:00.331) 0:00:12.241 *******
2026-03-25 03:29:25.963017 | orchestrator | ok: [testbed-node-0]
2026-03-25 03:29:25.963030 | orchestrator | ok: [testbed-node-1]
2026-03-25 03:29:25.963042 | orchestrator | ok: [testbed-node-2]
2026-03-25 03:29:25.963053 | orchestrator |
2026-03-25 03:29:25.963061 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-03-25 03:29:25.963069 | orchestrator | Wednesday 25 March 2026 03:29:13 +0000 (0:00:00.355) 0:00:12.597 *******
2026-03-25 03:29:25.963077 | orchestrator | skipping: [testbed-node-0]
2026-03-25 03:29:25.963085 | orchestrator |
2026-03-25 03:29:25.963093 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-03-25 03:29:25.963102 | orchestrator | Wednesday 25 March 2026 03:29:13 +0000 (0:00:00.136) 0:00:12.734 *******
2026-03-25 03:29:25.963110 | orchestrator | skipping: [testbed-node-0]
2026-03-25 03:29:25.963119 | orchestrator | skipping: [testbed-node-1]
2026-03-25 03:29:25.963127 | orchestrator | skipping: [testbed-node-2]
2026-03-25 03:29:25.963135 | orchestrator |
2026-03-25 03:29:25.963143 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-03-25 03:29:25.963151 | orchestrator | Wednesday 25 March 2026 03:29:14 +0000 (0:00:00.597) 0:00:13.332 *******
2026-03-25 03:29:25.963159 | orchestrator | ok: [testbed-node-0]
2026-03-25 03:29:25.963167 | orchestrator | ok: [testbed-node-1]
2026-03-25 03:29:25.963175 | orchestrator | ok: [testbed-node-2]
2026-03-25 03:29:25.963184 | orchestrator |
2026-03-25 03:29:25.963192 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-03-25 03:29:25.963200 | orchestrator | Wednesday 25 March 2026 03:29:14 +0000 (0:00:00.381) 0:00:13.714 *******
2026-03-25 03:29:25.963225 | orchestrator | skipping: [testbed-node-0]
2026-03-25 03:29:25.963234 | orchestrator |
2026-03-25 03:29:25.963242 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-03-25 03:29:25.963249 | orchestrator | Wednesday 25 March 2026 03:29:14 +0000 (0:00:00.133) 0:00:13.847 *******
2026-03-25 03:29:25.963256 | orchestrator | skipping: [testbed-node-0] 2026-03-25 03:29:25.963263 | orchestrator | skipping: [testbed-node-1] 2026-03-25 03:29:25.963270 | orchestrator | skipping: [testbed-node-2] 2026-03-25 03:29:25.963277 | orchestrator | 2026-03-25 03:29:25.963285 | orchestrator | TASK [horizon : Copying over config.json files for services] ******************* 2026-03-25 03:29:25.963292 | orchestrator | Wednesday 25 March 2026 03:29:14 +0000 (0:00:00.341) 0:00:14.189 ******* 2026-03-25 03:29:25.963299 | orchestrator | changed: [testbed-node-2] 2026-03-25 03:29:25.963320 | orchestrator | changed: [testbed-node-0] 2026-03-25 03:29:25.963357 | orchestrator | changed: [testbed-node-1] 2026-03-25 03:29:25.963365 | orchestrator | 2026-03-25 03:29:25.963373 | orchestrator | TASK [horizon : Copying over horizon.conf] ************************************* 2026-03-25 03:29:25.963380 | orchestrator | Wednesday 25 March 2026 03:29:16 +0000 (0:00:01.885) 0:00:16.075 ******* 2026-03-25 03:29:25.963387 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-03-25 03:29:25.963396 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-03-25 03:29:25.963407 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-03-25 03:29:25.963419 | orchestrator | 2026-03-25 03:29:25.963430 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ******************************** 2026-03-25 03:29:25.963441 | orchestrator | Wednesday 25 March 2026 03:29:18 +0000 (0:00:01.987) 0:00:18.062 ******* 2026-03-25 03:29:25.963454 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-03-25 03:29:25.963468 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-03-25 03:29:25.963478 | orchestrator | 
changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-03-25 03:29:25.963488 | orchestrator | 2026-03-25 03:29:25.963498 | orchestrator | TASK [horizon : Copying over custom-settings.py] ******************************* 2026-03-25 03:29:25.963530 | orchestrator | Wednesday 25 March 2026 03:29:20 +0000 (0:00:02.006) 0:00:20.068 ******* 2026-03-25 03:29:25.963542 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-03-25 03:29:25.963553 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-03-25 03:29:25.963566 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-03-25 03:29:25.963578 | orchestrator | 2026-03-25 03:29:25.963590 | orchestrator | TASK [horizon : Copying over existing policy file] ***************************** 2026-03-25 03:29:25.963602 | orchestrator | Wednesday 25 March 2026 03:29:22 +0000 (0:00:01.568) 0:00:21.637 ******* 2026-03-25 03:29:25.963615 | orchestrator | skipping: [testbed-node-0] 2026-03-25 03:29:25.963625 | orchestrator | skipping: [testbed-node-1] 2026-03-25 03:29:25.963632 | orchestrator | skipping: [testbed-node-2] 2026-03-25 03:29:25.963640 | orchestrator | 2026-03-25 03:29:25.963647 | orchestrator | TASK [horizon : Copying over custom themes] ************************************ 2026-03-25 03:29:25.963654 | orchestrator | Wednesday 25 March 2026 03:29:22 +0000 (0:00:00.551) 0:00:22.188 ******* 2026-03-25 03:29:25.963661 | orchestrator | skipping: [testbed-node-0] 2026-03-25 03:29:25.963668 | orchestrator | skipping: [testbed-node-1] 2026-03-25 03:29:25.963675 | orchestrator | skipping: [testbed-node-2] 2026-03-25 03:29:25.963683 | orchestrator | 2026-03-25 03:29:25.963690 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-03-25 03:29:25.963697 
| orchestrator | Wednesday 25 March 2026 03:29:23 +0000 (0:00:00.369) 0:00:22.558 ******* 2026-03-25 03:29:25.963705 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-25 03:29:25.963712 | orchestrator | 2026-03-25 03:29:25.963719 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2026-03-25 03:29:25.963727 | orchestrator | Wednesday 25 March 2026 03:29:23 +0000 (0:00:00.724) 0:00:23.283 ******* 2026-03-25 03:29:25.963749 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if 
{ path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-25 03:29:25.963780 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance 
roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-25 03:29:26.724790 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-25 03:29:26.724881 | orchestrator | 2026-03-25 03:29:26.724889 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2026-03-25 03:29:26.724896 | orchestrator | Wednesday 25 March 2026 03:29:25 +0000 (0:00:01.990) 0:00:25.273 ******* 2026-03-25 03:29:26.724914 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 
'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-25 03:29:26.724924 | orchestrator | skipping: [testbed-node-0] 2026-03-25 03:29:26.724935 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 
'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-25 03:29:26.724940 | orchestrator | skipping: [testbed-node-1] 
2026-03-25 03:29:26.724949 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 
'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-25 03:29:29.464573 | orchestrator | skipping: [testbed-node-2] 2026-03-25 03:29:29.464645 | orchestrator | 2026-03-25 03:29:29.464652 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2026-03-25 03:29:29.464670 | orchestrator | Wednesday 25 March 2026 03:29:26 +0000 (0:00:00.771) 0:00:26.045 ******* 2026-03-25 03:29:29.464676 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 
'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-25 03:29:29.464683 | orchestrator | skipping: [testbed-node-0] 2026-03-25 03:29:29.464703 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 
'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-25 03:29:29.464728 | orchestrator | skipping: [testbed-node-1] 2026-03-25 03:29:29.464766 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-25 03:29:29.464774 | orchestrator | skipping: [testbed-node-2] 2026-03-25 03:29:29.464780 | orchestrator | 2026-03-25 03:29:29.464786 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2026-03-25 03:29:29.464793 | orchestrator | Wednesday 25 March 2026 03:29:27 +0000 (0:00:00.962) 0:00:27.007 ******* 2026-03-25 03:29:29.464823 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 
'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-25 03:30:12.157913 | orchestrator | changed: [testbed-node-0] 
=> (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if 
{ path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-25 03:30:12.158140 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 
'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-25 03:30:12.158162 | orchestrator | 2026-03-25 03:30:12.158170 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-03-25 03:30:12.158179 | orchestrator | Wednesday 25 March 2026 03:29:29 +0000 (0:00:01.776) 0:00:28.784 ******* 2026-03-25 03:30:12.158186 | orchestrator | skipping: [testbed-node-0] 2026-03-25 03:30:12.158194 | orchestrator | skipping: [testbed-node-1] 2026-03-25 03:30:12.158201 | orchestrator | skipping: [testbed-node-2] 2026-03-25 03:30:12.158208 | orchestrator | 2026-03-25 03:30:12.158214 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-03-25 03:30:12.158221 | orchestrator | Wednesday 25 March 2026 03:29:29 +0000 (0:00:00.381) 0:00:29.166 ******* 2026-03-25 03:30:12.158228 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-25 03:30:12.158236 | orchestrator | 2026-03-25 03:30:12.158242 | orchestrator | TASK [horizon : Creating Horizon database] ************************************* 2026-03-25 03:30:12.158250 | orchestrator | Wednesday 25 March 2026 03:29:30 +0000 (0:00:00.623) 0:00:29.789 ******* 2026-03-25 03:30:12.158257 | orchestrator | changed: [testbed-node-0] 2026-03-25 03:30:12.158264 | orchestrator | 2026-03-25 03:30:12.158270 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ******** 2026-03-25 03:30:12.158276 | orchestrator | Wednesday 25 March 2026 03:29:32 +0000 (0:00:02.165) 0:00:31.955 ******* 2026-03-25 03:30:12.158283 | orchestrator | changed: 
[testbed-node-0] 2026-03-25 03:30:12.158340 | orchestrator | 2026-03-25 03:30:12.158347 | orchestrator | TASK [horizon : Running Horizon bootstrap container] *************************** 2026-03-25 03:30:12.158363 | orchestrator | Wednesday 25 March 2026 03:29:35 +0000 (0:00:02.739) 0:00:34.694 ******* 2026-03-25 03:30:12.158369 | orchestrator | changed: [testbed-node-0] 2026-03-25 03:30:12.158375 | orchestrator | 2026-03-25 03:30:12.158381 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-03-25 03:30:12.158388 | orchestrator | Wednesday 25 March 2026 03:29:50 +0000 (0:00:15.211) 0:00:49.906 ******* 2026-03-25 03:30:12.158394 | orchestrator | 2026-03-25 03:30:12.158400 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-03-25 03:30:12.158407 | orchestrator | Wednesday 25 March 2026 03:29:50 +0000 (0:00:00.091) 0:00:49.998 ******* 2026-03-25 03:30:12.158414 | orchestrator | 2026-03-25 03:30:12.158420 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-03-25 03:30:12.158427 | orchestrator | Wednesday 25 March 2026 03:29:50 +0000 (0:00:00.070) 0:00:50.068 ******* 2026-03-25 03:30:12.158433 | orchestrator | 2026-03-25 03:30:12.158440 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] ************************** 2026-03-25 03:30:12.158446 | orchestrator | Wednesday 25 March 2026 03:29:50 +0000 (0:00:00.079) 0:00:50.148 ******* 2026-03-25 03:30:12.158453 | orchestrator | changed: [testbed-node-0] 2026-03-25 03:30:12.158460 | orchestrator | changed: [testbed-node-2] 2026-03-25 03:30:12.158468 | orchestrator | changed: [testbed-node-1] 2026-03-25 03:30:12.158474 | orchestrator | 2026-03-25 03:30:12.158480 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-25 03:30:12.158488 | orchestrator | testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 
skipped=25  rescued=0 ignored=0 2026-03-25 03:30:12.158497 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2026-03-25 03:30:12.158516 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2026-03-25 03:30:12.158524 | orchestrator | 2026-03-25 03:30:12.158531 | orchestrator | 2026-03-25 03:30:12.158546 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-25 03:30:12.158554 | orchestrator | Wednesday 25 March 2026 03:30:12 +0000 (0:00:21.317) 0:01:11.466 ******* 2026-03-25 03:30:12.158562 | orchestrator | =============================================================================== 2026-03-25 03:30:12.158569 | orchestrator | horizon : Restart horizon container ------------------------------------ 21.32s 2026-03-25 03:30:12.158576 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 15.21s 2026-03-25 03:30:12.158583 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.74s 2026-03-25 03:30:12.158598 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.17s 2026-03-25 03:30:12.158605 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 2.01s 2026-03-25 03:30:12.158612 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.99s 2026-03-25 03:30:12.158619 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 1.99s 2026-03-25 03:30:12.158626 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.89s 2026-03-25 03:30:12.158633 | orchestrator | horizon : Deploy horizon container -------------------------------------- 1.78s 2026-03-25 03:30:12.158639 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 1.57s 
2026-03-25 03:30:12.158645 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.21s 2026-03-25 03:30:12.158651 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 0.96s 2026-03-25 03:30:12.158658 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.85s 2026-03-25 03:30:12.158675 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.77s 2026-03-25 03:30:12.637275 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.72s 2026-03-25 03:30:12.637399 | orchestrator | horizon : Update policy file name --------------------------------------- 0.63s 2026-03-25 03:30:12.637407 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.62s 2026-03-25 03:30:12.637413 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.62s 2026-03-25 03:30:12.637418 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.60s 2026-03-25 03:30:12.637423 | orchestrator | horizon : Update policy file name --------------------------------------- 0.59s 2026-03-25 03:30:15.325810 | orchestrator | 2026-03-25 03:30:15 | INFO  | Task 2578cf68-ade5-4801-a242-48bf915814d5 (skyline) was prepared for execution. 2026-03-25 03:30:15.325897 | orchestrator | 2026-03-25 03:30:15 | INFO  | It takes a moment until task 2578cf68-ade5-4801-a242-48bf915814d5 (skyline) has been started and output is visible here. 
2026-03-25 03:30:45.858119 | orchestrator | 2026-03-25 03:30:45.858203 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-25 03:30:45.858210 | orchestrator | 2026-03-25 03:30:45.858215 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-25 03:30:45.858220 | orchestrator | Wednesday 25 March 2026 03:30:20 +0000 (0:00:00.355) 0:00:00.355 ******* 2026-03-25 03:30:45.858224 | orchestrator | ok: [testbed-node-0] 2026-03-25 03:30:45.858230 | orchestrator | ok: [testbed-node-1] 2026-03-25 03:30:45.858233 | orchestrator | ok: [testbed-node-2] 2026-03-25 03:30:45.858237 | orchestrator | 2026-03-25 03:30:45.858242 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-25 03:30:45.858246 | orchestrator | Wednesday 25 March 2026 03:30:20 +0000 (0:00:00.347) 0:00:00.702 ******* 2026-03-25 03:30:45.858250 | orchestrator | ok: [testbed-node-0] => (item=enable_skyline_True) 2026-03-25 03:30:45.858254 | orchestrator | ok: [testbed-node-1] => (item=enable_skyline_True) 2026-03-25 03:30:45.858315 | orchestrator | ok: [testbed-node-2] => (item=enable_skyline_True) 2026-03-25 03:30:45.858322 | orchestrator | 2026-03-25 03:30:45.858327 | orchestrator | PLAY [Apply role skyline] ****************************************************** 2026-03-25 03:30:45.858334 | orchestrator | 2026-03-25 03:30:45.858340 | orchestrator | TASK [skyline : include_tasks] ************************************************* 2026-03-25 03:30:45.858346 | orchestrator | Wednesday 25 March 2026 03:30:21 +0000 (0:00:00.505) 0:00:01.208 ******* 2026-03-25 03:30:45.858351 | orchestrator | included: /ansible/roles/skyline/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-25 03:30:45.858356 | orchestrator | 2026-03-25 03:30:45.858360 | orchestrator | TASK [service-ks-register : skyline | Creating services] *********************** 
2026-03-25 03:30:45.858364 | orchestrator | Wednesday 25 March 2026 03:30:21 +0000 (0:00:00.586) 0:00:01.794 ******* 2026-03-25 03:30:45.858368 | orchestrator | changed: [testbed-node-0] => (item=skyline (panel)) 2026-03-25 03:30:45.858372 | orchestrator | 2026-03-25 03:30:45.858376 | orchestrator | TASK [service-ks-register : skyline | Creating endpoints] ********************** 2026-03-25 03:30:45.858380 | orchestrator | Wednesday 25 March 2026 03:30:24 +0000 (0:00:03.162) 0:00:04.957 ******* 2026-03-25 03:30:45.858384 | orchestrator | changed: [testbed-node-0] => (item=skyline -> https://api-int.testbed.osism.xyz:9998 -> internal) 2026-03-25 03:30:45.858388 | orchestrator | changed: [testbed-node-0] => (item=skyline -> https://api.testbed.osism.xyz:9998 -> public) 2026-03-25 03:30:45.858392 | orchestrator | 2026-03-25 03:30:45.858396 | orchestrator | TASK [service-ks-register : skyline | Creating projects] *********************** 2026-03-25 03:30:45.858400 | orchestrator | Wednesday 25 March 2026 03:30:30 +0000 (0:00:05.843) 0:00:10.800 ******* 2026-03-25 03:30:45.858403 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-25 03:30:45.858409 | orchestrator | 2026-03-25 03:30:45.858413 | orchestrator | TASK [service-ks-register : skyline | Creating users] ************************** 2026-03-25 03:30:45.858433 | orchestrator | Wednesday 25 March 2026 03:30:33 +0000 (0:00:02.872) 0:00:13.672 ******* 2026-03-25 03:30:45.858437 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-25 03:30:45.858459 | orchestrator | changed: [testbed-node-0] => (item=skyline -> service) 2026-03-25 03:30:45.858464 | orchestrator | 2026-03-25 03:30:45.858467 | orchestrator | TASK [service-ks-register : skyline | Creating roles] ************************** 2026-03-25 03:30:45.858471 | orchestrator | Wednesday 25 March 2026 03:30:37 +0000 (0:00:03.850) 0:00:17.523 ******* 2026-03-25 03:30:45.858475 | orchestrator | ok: [testbed-node-0] => (item=admin) 
2026-03-25 03:30:45.858479 | orchestrator | 2026-03-25 03:30:45.858493 | orchestrator | TASK [service-ks-register : skyline | Granting user roles] ********************* 2026-03-25 03:30:45.858497 | orchestrator | Wednesday 25 March 2026 03:30:40 +0000 (0:00:02.990) 0:00:20.513 ******* 2026-03-25 03:30:45.858501 | orchestrator | changed: [testbed-node-0] => (item=skyline -> service -> admin) 2026-03-25 03:30:45.858505 | orchestrator | 2026-03-25 03:30:45.858509 | orchestrator | TASK [skyline : Ensuring config directories exist] ***************************** 2026-03-25 03:30:45.858512 | orchestrator | Wednesday 25 March 2026 03:30:44 +0000 (0:00:03.953) 0:00:24.466 ******* 2026-03-25 03:30:45.858519 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-25 03:30:45.858540 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': 
['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-25 03:30:45.858544 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-25 03:30:45.858554 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': 
['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-25 03:30:45.858563 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-25 03:30:45.858573 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-25 03:30:50.177222 | orchestrator | 2026-03-25 03:30:50.177386 | orchestrator | TASK [skyline : include_tasks] ************************************************* 2026-03-25 03:30:50.177402 | orchestrator | Wednesday 25 March 2026 03:30:45 +0000 (0:00:01.391) 0:00:25.857 ******* 2026-03-25 03:30:50.177413 | orchestrator | included: /ansible/roles/skyline/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-25 03:30:50.177421 | orchestrator | 2026-03-25 03:30:50.177430 | orchestrator | TASK [service-cert-copy : skyline | Copying over extra CA certificates] ******** 2026-03-25 03:30:50.177438 | orchestrator | Wednesday 25 March 2026 03:30:46 +0000 (0:00:00.947) 0:00:26.805 ******* 2026-03-25 03:30:50.177450 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': 
{'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-25 03:30:50.177507 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-25 03:30:50.177520 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-25 03:30:50.177549 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-25 03:30:50.177560 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': 
'9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-25 03:30:50.177577 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-25 03:30:50.177586 | orchestrator | 2026-03-25 03:30:50.177595 | orchestrator | TASK [service-cert-copy : skyline | Copying over backend internal TLS certificate] *** 2026-03-25 03:30:50.177603 | orchestrator | Wednesday 25 March 2026 03:30:49 +0000 (0:00:02.607) 0:00:29.412 ******* 2026-03-25 03:30:50.177614 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-03-25 03:30:50.177620 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-03-25 03:30:50.177625 | orchestrator | skipping: [testbed-node-0] 2026-03-25 03:30:50.177636 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-03-25 03:30:51.723708 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-03-25 03:30:51.723806 | orchestrator | skipping: [testbed-node-2] 2026-03-25 03:30:51.723833 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-03-25 03:30:51.723840 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-03-25 03:30:51.723845 | orchestrator | skipping: [testbed-node-1] 2026-03-25 03:30:51.723851 | orchestrator | 2026-03-25 03:30:51.723858 | orchestrator | TASK [service-cert-copy : skyline | Copying over backend internal TLS key] ***** 2026-03-25 03:30:51.723867 | orchestrator | Wednesday 25 March 2026 03:30:50 +0000 (0:00:00.775) 0:00:30.188 ******* 2026-03-25 03:30:51.723872 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 
'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-03-25 03:30:51.723921 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-03-25 03:30:51.723926 | orchestrator | skipping: [testbed-node-0] 2026-03-25 03:30:51.723933 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 
'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-03-25 03:30:51.723938 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-03-25 03:30:51.723941 | orchestrator | skipping: [testbed-node-1] 2026-03-25 03:30:51.723945 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': 
{'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-03-25 03:30:51.723957 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-03-25 03:31:00.437633 | orchestrator | skipping: [testbed-node-2] 2026-03-25 03:31:00.437732 | orchestrator | 2026-03-25 03:31:00.437745 | orchestrator | TASK 
[skyline : Copying over skyline.yaml files for services] ****************** 2026-03-25 03:31:00.437757 | orchestrator | Wednesday 25 March 2026 03:30:51 +0000 (0:00:01.539) 0:00:31.727 ******* 2026-03-25 03:31:00.437782 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-25 03:31:00.437795 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-25 03:31:00.437804 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-25 03:31:00.437836 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-25 03:31:00.437866 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-25 03:31:00.437876 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 
'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-25 03:31:00.437884 | orchestrator | 2026-03-25 03:31:00.437892 | orchestrator | TASK [skyline : Copying over gunicorn.py files for services] ******************* 2026-03-25 03:31:00.437900 | orchestrator | Wednesday 25 March 2026 03:30:54 +0000 (0:00:02.494) 0:00:34.221 ******* 2026-03-25 03:31:00.437908 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/skyline/templates/gunicorn.py.j2) 2026-03-25 03:31:00.437916 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/skyline/templates/gunicorn.py.j2) 2026-03-25 03:31:00.437924 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/skyline/templates/gunicorn.py.j2) 2026-03-25 03:31:00.437932 | orchestrator | 2026-03-25 03:31:00.437940 | orchestrator | TASK [skyline : Copying over nginx.conf files for services] ******************** 2026-03-25 03:31:00.437948 | orchestrator | Wednesday 25 March 2026 03:30:55 +0000 (0:00:01.600) 0:00:35.822 ******* 2026-03-25 03:31:00.437962 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/skyline/templates/nginx.conf.j2) 2026-03-25 03:31:00.437970 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/skyline/templates/nginx.conf.j2) 2026-03-25 03:31:00.437978 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/skyline/templates/nginx.conf.j2) 2026-03-25 03:31:00.437986 | orchestrator | 2026-03-25 03:31:00.437993 | orchestrator | TASK [skyline : Copying over config.json files for services] ******************* 2026-03-25 03:31:00.438001 | orchestrator | Wednesday 25 March 2026 03:30:58 +0000 (0:00:02.302) 0:00:38.124 ******* 2026-03-25 03:31:00.438009 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': 
['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-25 03:31:00.438088 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-25 03:31:02.609414 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': 
['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-25 03:31:02.609519 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-25 03:31:02.609558 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-25 03:31:02.609572 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-25 03:31:02.609584 | orchestrator | 2026-03-25 03:31:02.609598 | orchestrator | TASK [skyline : Copying over custom logos] ************************************* 2026-03-25 03:31:02.609610 | orchestrator | Wednesday 25 March 2026 03:31:00 +0000 (0:00:02.324) 0:00:40.449 ******* 2026-03-25 03:31:02.609621 | orchestrator | skipping: [testbed-node-0] 2026-03-25 03:31:02.609634 | orchestrator | skipping: 
[testbed-node-1] 2026-03-25 03:31:02.609645 | orchestrator | skipping: [testbed-node-2] 2026-03-25 03:31:02.609656 | orchestrator | 2026-03-25 03:31:02.609685 | orchestrator | TASK [skyline : Check skyline container] *************************************** 2026-03-25 03:31:02.609698 | orchestrator | Wednesday 25 March 2026 03:31:00 +0000 (0:00:00.333) 0:00:40.782 ******* 2026-03-25 03:31:02.609716 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-25 03:31:02.609748 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 
'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-25 03:31:02.609768 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-25 03:31:02.609796 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-25 03:31:02.609843 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-25 03:31:36.190484 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': 
'9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-25 03:31:36.190612 | orchestrator | 2026-03-25 03:31:36.190624 | orchestrator | TASK [skyline : Creating Skyline database] ************************************* 2026-03-25 03:31:36.190633 | orchestrator | Wednesday 25 March 2026 03:31:02 +0000 (0:00:01.839) 0:00:42.622 ******* 2026-03-25 03:31:36.190639 | orchestrator | changed: [testbed-node-0] 2026-03-25 03:31:36.190646 | orchestrator | 2026-03-25 03:31:36.190652 | orchestrator | TASK [skyline : Creating Skyline database user and setting permissions] ******** 2026-03-25 03:31:36.190658 | orchestrator | Wednesday 25 March 2026 03:31:04 +0000 (0:00:01.946) 0:00:44.568 ******* 2026-03-25 03:31:36.190663 | orchestrator | changed: [testbed-node-0] 2026-03-25 03:31:36.190669 | orchestrator | 2026-03-25 03:31:36.190675 | orchestrator | TASK [skyline : Running Skyline bootstrap container] *************************** 2026-03-25 03:31:36.190681 | orchestrator | Wednesday 25 March 2026 03:31:06 +0000 (0:00:02.164) 0:00:46.733 ******* 2026-03-25 03:31:36.190686 | orchestrator | changed: [testbed-node-0] 2026-03-25 03:31:36.190693 | orchestrator | 2026-03-25 03:31:36.190699 | orchestrator | TASK [skyline : Flush handlers] ************************************************ 2026-03-25 03:31:36.190704 | orchestrator | Wednesday 25 March 2026 03:31:14 +0000 (0:00:07.576) 0:00:54.310 ******* 2026-03-25 03:31:36.190710 | orchestrator | 2026-03-25 03:31:36.190716 | orchestrator | TASK [skyline : Flush handlers] ************************************************ 2026-03-25 03:31:36.190721 | orchestrator | Wednesday 25 March 2026 03:31:14 +0000 (0:00:00.075) 0:00:54.386 ******* 2026-03-25 03:31:36.190727 | orchestrator | 2026-03-25 03:31:36.190733 | orchestrator | TASK [skyline : Flush handlers] 
************************************************ 2026-03-25 03:31:36.190739 | orchestrator | Wednesday 25 March 2026 03:31:14 +0000 (0:00:00.075) 0:00:54.461 ******* 2026-03-25 03:31:36.190744 | orchestrator | 2026-03-25 03:31:36.190750 | orchestrator | RUNNING HANDLER [skyline : Restart skyline-apiserver container] **************** 2026-03-25 03:31:36.190756 | orchestrator | Wednesday 25 March 2026 03:31:14 +0000 (0:00:00.081) 0:00:54.543 ******* 2026-03-25 03:31:36.190761 | orchestrator | changed: [testbed-node-0] 2026-03-25 03:31:36.190767 | orchestrator | changed: [testbed-node-2] 2026-03-25 03:31:36.190773 | orchestrator | changed: [testbed-node-1] 2026-03-25 03:31:36.190778 | orchestrator | 2026-03-25 03:31:36.190784 | orchestrator | RUNNING HANDLER [skyline : Restart skyline-console container] ****************** 2026-03-25 03:31:36.190790 | orchestrator | Wednesday 25 March 2026 03:31:25 +0000 (0:00:11.346) 0:01:05.890 ******* 2026-03-25 03:31:36.190795 | orchestrator | changed: [testbed-node-0] 2026-03-25 03:31:36.190801 | orchestrator | changed: [testbed-node-2] 2026-03-25 03:31:36.190807 | orchestrator | changed: [testbed-node-1] 2026-03-25 03:31:36.190812 | orchestrator | 2026-03-25 03:31:36.190818 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-25 03:31:36.190825 | orchestrator | testbed-node-0 : ok=22  changed=16  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-25 03:31:36.190833 | orchestrator | testbed-node-1 : ok=13  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-25 03:31:36.190838 | orchestrator | testbed-node-2 : ok=13  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-25 03:31:36.190844 | orchestrator | 2026-03-25 03:31:36.190850 | orchestrator | 2026-03-25 03:31:36.190861 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-25 03:31:36.190866 | orchestrator | Wednesday 25 
March 2026 03:31:35 +0000 (0:00:09.903) 0:01:15.793 ******* 2026-03-25 03:31:36.190872 | orchestrator | =============================================================================== 2026-03-25 03:31:36.190878 | orchestrator | skyline : Restart skyline-apiserver container -------------------------- 11.35s 2026-03-25 03:31:36.190898 | orchestrator | skyline : Restart skyline-console container ----------------------------- 9.90s 2026-03-25 03:31:36.190904 | orchestrator | skyline : Running Skyline bootstrap container --------------------------- 7.58s 2026-03-25 03:31:36.190910 | orchestrator | service-ks-register : skyline | Creating endpoints ---------------------- 5.84s 2026-03-25 03:31:36.190916 | orchestrator | service-ks-register : skyline | Granting user roles --------------------- 3.95s 2026-03-25 03:31:36.190921 | orchestrator | service-ks-register : skyline | Creating users -------------------------- 3.85s 2026-03-25 03:31:36.190927 | orchestrator | service-ks-register : skyline | Creating services ----------------------- 3.16s 2026-03-25 03:31:36.190933 | orchestrator | service-ks-register : skyline | Creating roles -------------------------- 2.99s 2026-03-25 03:31:36.190952 | orchestrator | service-ks-register : skyline | Creating projects ----------------------- 2.87s 2026-03-25 03:31:36.190958 | orchestrator | service-cert-copy : skyline | Copying over extra CA certificates -------- 2.61s 2026-03-25 03:31:36.190964 | orchestrator | skyline : Copying over skyline.yaml files for services ------------------ 2.49s 2026-03-25 03:31:36.190970 | orchestrator | skyline : Copying over config.json files for services ------------------- 2.32s 2026-03-25 03:31:36.190975 | orchestrator | skyline : Copying over nginx.conf files for services -------------------- 2.30s 2026-03-25 03:31:36.190981 | orchestrator | skyline : Creating Skyline database user and setting permissions -------- 2.16s 2026-03-25 03:31:36.190987 | orchestrator | skyline : Creating Skyline database 
------------------------------------- 1.95s 2026-03-25 03:31:36.190992 | orchestrator | skyline : Check skyline container --------------------------------------- 1.84s 2026-03-25 03:31:36.190998 | orchestrator | skyline : Copying over gunicorn.py files for services ------------------- 1.60s 2026-03-25 03:31:36.191004 | orchestrator | service-cert-copy : skyline | Copying over backend internal TLS key ----- 1.54s 2026-03-25 03:31:36.191010 | orchestrator | skyline : Ensuring config directories exist ----------------------------- 1.39s 2026-03-25 03:31:36.191016 | orchestrator | skyline : include_tasks ------------------------------------------------- 0.95s 2026-03-25 03:31:39.069516 | orchestrator | 2026-03-25 03:31:39 | INFO  | Task ba3cab4f-22e3-4e8e-a907-bc60f9a46838 (glance) was prepared for execution. 2026-03-25 03:31:39.069588 | orchestrator | 2026-03-25 03:31:39 | INFO  | It takes a moment until task ba3cab4f-22e3-4e8e-a907-bc60f9a46838 (glance) has been started and output is visible here. 
2026-03-25 03:32:13.134770 | orchestrator | 2026-03-25 03:32:13.134923 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-25 03:32:13.134953 | orchestrator | 2026-03-25 03:32:13.134972 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-25 03:32:13.134990 | orchestrator | Wednesday 25 March 2026 03:31:43 +0000 (0:00:00.284) 0:00:00.284 ******* 2026-03-25 03:32:13.135008 | orchestrator | ok: [testbed-node-0] 2026-03-25 03:32:13.135027 | orchestrator | ok: [testbed-node-1] 2026-03-25 03:32:13.135043 | orchestrator | ok: [testbed-node-2] 2026-03-25 03:32:13.135072 | orchestrator | 2026-03-25 03:32:13.135092 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-25 03:32:13.135110 | orchestrator | Wednesday 25 March 2026 03:31:44 +0000 (0:00:00.340) 0:00:00.625 ******* 2026-03-25 03:32:13.135139 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True) 2026-03-25 03:32:13.135160 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True) 2026-03-25 03:32:13.135178 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True) 2026-03-25 03:32:13.135272 | orchestrator | 2026-03-25 03:32:13.135299 | orchestrator | PLAY [Apply role glance] ******************************************************* 2026-03-25 03:32:13.135358 | orchestrator | 2026-03-25 03:32:13.135379 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-03-25 03:32:13.135398 | orchestrator | Wednesday 25 March 2026 03:31:44 +0000 (0:00:00.486) 0:00:01.111 ******* 2026-03-25 03:32:13.135413 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-25 03:32:13.135427 | orchestrator | 2026-03-25 03:32:13.135439 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************ 2026-03-25 
03:32:13.135451 | orchestrator | Wednesday 25 March 2026 03:31:45 +0000 (0:00:00.638) 0:00:01.750 ******* 2026-03-25 03:32:13.135464 | orchestrator | changed: [testbed-node-0] => (item=glance (image)) 2026-03-25 03:32:13.135476 | orchestrator | 2026-03-25 03:32:13.135489 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] *********************** 2026-03-25 03:32:13.135502 | orchestrator | Wednesday 25 March 2026 03:31:48 +0000 (0:00:03.312) 0:00:05.063 ******* 2026-03-25 03:32:13.135515 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal) 2026-03-25 03:32:13.135528 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public) 2026-03-25 03:32:13.135541 | orchestrator | 2026-03-25 03:32:13.135553 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************ 2026-03-25 03:32:13.135566 | orchestrator | Wednesday 25 March 2026 03:31:54 +0000 (0:00:06.030) 0:00:11.093 ******* 2026-03-25 03:32:13.135579 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-25 03:32:13.135592 | orchestrator | 2026-03-25 03:32:13.135604 | orchestrator | TASK [service-ks-register : glance | Creating users] *************************** 2026-03-25 03:32:13.135617 | orchestrator | Wednesday 25 March 2026 03:31:57 +0000 (0:00:03.145) 0:00:14.238 ******* 2026-03-25 03:32:13.135629 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-25 03:32:13.135640 | orchestrator | changed: [testbed-node-0] => (item=glance -> service) 2026-03-25 03:32:13.135651 | orchestrator | 2026-03-25 03:32:13.135662 | orchestrator | TASK [service-ks-register : glance | Creating roles] *************************** 2026-03-25 03:32:13.135673 | orchestrator | Wednesday 25 March 2026 03:32:01 +0000 (0:00:03.816) 0:00:18.055 ******* 2026-03-25 03:32:13.135701 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-25 
03:32:13.135713 | orchestrator | 2026-03-25 03:32:13.135724 | orchestrator | TASK [service-ks-register : glance | Granting user roles] ********************** 2026-03-25 03:32:13.135735 | orchestrator | Wednesday 25 March 2026 03:32:04 +0000 (0:00:03.053) 0:00:21.108 ******* 2026-03-25 03:32:13.135745 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin) 2026-03-25 03:32:13.135756 | orchestrator | 2026-03-25 03:32:13.135766 | orchestrator | TASK [glance : Ensuring config directories exist] ****************************** 2026-03-25 03:32:13.135777 | orchestrator | Wednesday 25 March 2026 03:32:08 +0000 (0:00:03.706) 0:00:24.814 ******* 2026-03-25 03:32:13.135820 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-25 03:32:13.135848 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server 
testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-25 03:32:13.135866 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-25 03:32:13.135879 | orchestrator | 2026-03-25 03:32:13.135889 | orchestrator | TASK [glance : include_tasks] 
************************************************** 2026-03-25 03:32:13.135907 | orchestrator | Wednesday 25 March 2026 03:32:12 +0000 (0:00:03.850) 0:00:28.665 ******* 2026-03-25 03:32:13.135919 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-25 03:32:13.135930 | orchestrator | 2026-03-25 03:32:13.135949 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] ************** 2026-03-25 03:32:29.700486 | orchestrator | Wednesday 25 March 2026 03:32:13 +0000 (0:00:00.820) 0:00:29.486 ******* 2026-03-25 03:32:29.700600 | orchestrator | changed: [testbed-node-1] 2026-03-25 03:32:29.700620 | orchestrator | changed: [testbed-node-2] 2026-03-25 03:32:29.700636 | orchestrator | changed: [testbed-node-0] 2026-03-25 03:32:29.700651 | orchestrator | 2026-03-25 03:32:29.700667 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] ********************* 2026-03-25 03:32:29.700681 | orchestrator | Wednesday 25 March 2026 03:32:16 +0000 (0:00:03.878) 0:00:33.364 ******* 2026-03-25 03:32:29.700697 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-03-25 03:32:29.700712 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-03-25 03:32:29.700727 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-03-25 03:32:29.700744 | orchestrator | 2026-03-25 03:32:29.700759 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] ********************************* 2026-03-25 03:32:29.700776 | orchestrator | Wednesday 25 March 2026 03:32:18 +0000 (0:00:01.616) 0:00:34.981 ******* 2026-03-25 03:32:29.700792 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-03-25 
03:32:29.700806 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-03-25 03:32:29.700815 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-03-25 03:32:29.700824 | orchestrator | 2026-03-25 03:32:29.700832 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] ***** 2026-03-25 03:32:29.700841 | orchestrator | Wednesday 25 March 2026 03:32:19 +0000 (0:00:01.369) 0:00:36.350 ******* 2026-03-25 03:32:29.700850 | orchestrator | ok: [testbed-node-0] 2026-03-25 03:32:29.700859 | orchestrator | ok: [testbed-node-1] 2026-03-25 03:32:29.700868 | orchestrator | ok: [testbed-node-2] 2026-03-25 03:32:29.700876 | orchestrator | 2026-03-25 03:32:29.700885 | orchestrator | TASK [glance : Check if policies shall be overwritten] ************************* 2026-03-25 03:32:29.700893 | orchestrator | Wednesday 25 March 2026 03:32:20 +0000 (0:00:00.687) 0:00:37.038 ******* 2026-03-25 03:32:29.700915 | orchestrator | skipping: [testbed-node-0] 2026-03-25 03:32:29.700925 | orchestrator | 2026-03-25 03:32:29.700934 | orchestrator | TASK [glance : Set glance policy file] ***************************************** 2026-03-25 03:32:29.700942 | orchestrator | Wednesday 25 March 2026 03:32:20 +0000 (0:00:00.133) 0:00:37.171 ******* 2026-03-25 03:32:29.700953 | orchestrator | skipping: [testbed-node-0] 2026-03-25 03:32:29.700971 | orchestrator | skipping: [testbed-node-1] 2026-03-25 03:32:29.700994 | orchestrator | skipping: [testbed-node-2] 2026-03-25 03:32:29.701007 | orchestrator | 2026-03-25 03:32:29.701022 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-03-25 03:32:29.701038 | orchestrator | Wednesday 25 March 2026 03:32:21 +0000 (0:00:00.324) 0:00:37.495 ******* 2026-03-25 03:32:29.701049 | orchestrator | included: 
/ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-25 03:32:29.701058 | orchestrator | 2026-03-25 03:32:29.701067 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] ********* 2026-03-25 03:32:29.701092 | orchestrator | Wednesday 25 March 2026 03:32:21 +0000 (0:00:00.826) 0:00:38.322 ******* 2026-03-25 03:32:29.701107 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check 
inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-25 03:32:29.701163 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-25 03:32:29.701210 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 
'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-25 03:32:29.701230 | orchestrator | 2026-03-25 03:32:29.701239 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2026-03-25 03:32:29.701247 | orchestrator | Wednesday 25 March 2026 03:32:26 +0000 (0:00:04.213) 0:00:42.535 ******* 2026-03-25 03:32:29.701265 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 
'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-25 03:32:33.943631 | orchestrator | skipping: [testbed-node-1] 2026-03-25 03:32:33.943782 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 
'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-25 03:32:33.943842 | orchestrator | skipping: [testbed-node-0] 2026-03-25 03:32:33.943866 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-25 03:32:33.943883 | orchestrator | skipping: [testbed-node-2] 2026-03-25 03:32:33.943899 | orchestrator | 2026-03-25 03:32:33.943916 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2026-03-25 03:32:33.943933 | orchestrator | Wednesday 25 March 2026 03:32:29 +0000 (0:00:03.520) 0:00:46.056 ******* 2026-03-25 03:32:33.943985 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 
'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-25 03:32:33.944020 | orchestrator | skipping: [testbed-node-1] 2026-03-25 03:32:33.944032 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 
'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-25 03:32:33.944043 | orchestrator | skipping: [testbed-node-0] 2026-03-25 03:32:33.944062 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-25 03:33:13.525559 | orchestrator | skipping: [testbed-node-2] 2026-03-25 03:33:13.525670 | orchestrator | 2026-03-25 03:33:13.525707 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2026-03-25 03:33:13.525718 | orchestrator | Wednesday 25 March 2026 03:32:33 +0000 (0:00:04.238) 0:00:50.294 ******* 2026-03-25 03:33:13.525727 | orchestrator | skipping: [testbed-node-0] 2026-03-25 03:33:13.525735 | orchestrator | skipping: [testbed-node-1] 2026-03-25 03:33:13.525743 | orchestrator | skipping: [testbed-node-2] 2026-03-25 03:33:13.525750 | orchestrator | 2026-03-25 03:33:13.525758 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2026-03-25 03:33:13.525766 | orchestrator | Wednesday 25 March 2026 03:32:37 +0000 (0:00:03.677) 0:00:53.972 ******* 2026-03-25 03:33:13.525791 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 
'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-25 03:33:13.525803 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-25 03:33:13.525844 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-25 03:33:13.525853 | orchestrator | 2026-03-25 03:33:13.525860 | orchestrator | TASK [glance : Copying over glance-api.conf] *********************************** 2026-03-25 03:33:13.525867 | orchestrator | Wednesday 25 March 2026 03:32:41 +0000 (0:00:04.192) 0:00:58.164 ******* 2026-03-25 03:33:13.525875 | orchestrator | changed: [testbed-node-0] 2026-03-25 03:33:13.525882 | orchestrator | changed: [testbed-node-1] 2026-03-25 03:33:13.525889 | orchestrator | changed: [testbed-node-2] 2026-03-25 03:33:13.525896 | orchestrator | 2026-03-25 03:33:13.525902 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ****************** 2026-03-25 03:33:13.525909 | orchestrator | Wednesday 25 March 2026 03:32:48 +0000 (0:00:06.320) 0:01:04.485 ******* 2026-03-25 03:33:13.525916 | orchestrator | skipping: [testbed-node-0] 2026-03-25 03:33:13.525924 | orchestrator | skipping: [testbed-node-1] 2026-03-25 03:33:13.525931 | 
orchestrator | skipping: [testbed-node-2] 2026-03-25 03:33:13.525938 | orchestrator | 2026-03-25 03:33:13.525945 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ****************** 2026-03-25 03:33:13.525952 | orchestrator | Wednesday 25 March 2026 03:32:52 +0000 (0:00:04.265) 0:01:08.751 ******* 2026-03-25 03:33:13.525959 | orchestrator | skipping: [testbed-node-1] 2026-03-25 03:33:13.525966 | orchestrator | skipping: [testbed-node-0] 2026-03-25 03:33:13.525973 | orchestrator | skipping: [testbed-node-2] 2026-03-25 03:33:13.525981 | orchestrator | 2026-03-25 03:33:13.525988 | orchestrator | TASK [glance : Copying over glance-image-import.conf] ************************** 2026-03-25 03:33:13.525996 | orchestrator | Wednesday 25 March 2026 03:32:56 +0000 (0:00:03.891) 0:01:12.643 ******* 2026-03-25 03:33:13.526003 | orchestrator | skipping: [testbed-node-1] 2026-03-25 03:33:13.526011 | orchestrator | skipping: [testbed-node-0] 2026-03-25 03:33:13.526140 | orchestrator | skipping: [testbed-node-2] 2026-03-25 03:33:13.526172 | orchestrator | 2026-03-25 03:33:13.526180 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] ******************* 2026-03-25 03:33:13.526187 | orchestrator | Wednesday 25 March 2026 03:33:00 +0000 (0:00:03.767) 0:01:16.411 ******* 2026-03-25 03:33:13.526194 | orchestrator | skipping: [testbed-node-0] 2026-03-25 03:33:13.526200 | orchestrator | skipping: [testbed-node-1] 2026-03-25 03:33:13.526207 | orchestrator | skipping: [testbed-node-2] 2026-03-25 03:33:13.526214 | orchestrator | 2026-03-25 03:33:13.526221 | orchestrator | TASK [glance : Copying over existing policy file] ****************************** 2026-03-25 03:33:13.526237 | orchestrator | Wednesday 25 March 2026 03:33:03 +0000 (0:00:03.904) 0:01:20.315 ******* 2026-03-25 03:33:13.526244 | orchestrator | skipping: [testbed-node-0] 2026-03-25 03:33:13.526251 | orchestrator | skipping: [testbed-node-1] 2026-03-25 03:33:13.526258 | 
orchestrator | skipping: [testbed-node-2] 2026-03-25 03:33:13.526265 | orchestrator | 2026-03-25 03:33:13.526273 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] **************************** 2026-03-25 03:33:13.526280 | orchestrator | Wednesday 25 March 2026 03:33:04 +0000 (0:00:00.614) 0:01:20.930 ******* 2026-03-25 03:33:13.526287 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-03-25 03:33:13.526296 | orchestrator | skipping: [testbed-node-0] 2026-03-25 03:33:13.526303 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-03-25 03:33:13.526311 | orchestrator | skipping: [testbed-node-1] 2026-03-25 03:33:13.526319 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-03-25 03:33:13.526327 | orchestrator | skipping: [testbed-node-2] 2026-03-25 03:33:13.526334 | orchestrator | 2026-03-25 03:33:13.526341 | orchestrator | TASK [glance : Generating 'hostnqn' file for glance_api] *********************** 2026-03-25 03:33:13.526350 | orchestrator | Wednesday 25 March 2026 03:33:08 +0000 (0:00:04.096) 0:01:25.027 ******* 2026-03-25 03:33:13.526357 | orchestrator | changed: [testbed-node-0] 2026-03-25 03:33:13.526364 | orchestrator | changed: [testbed-node-1] 2026-03-25 03:33:13.526371 | orchestrator | changed: [testbed-node-2] 2026-03-25 03:33:13.526379 | orchestrator | 2026-03-25 03:33:13.526386 | orchestrator | TASK [glance : Check glance containers] **************************************** 2026-03-25 03:33:13.526404 | orchestrator | Wednesday 25 March 2026 03:33:13 +0000 (0:00:04.848) 0:01:29.875 ******* 2026-03-25 03:34:27.852567 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-25 03:34:27.852665 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-25 03:34:27.852748 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-25 03:34:27.852759 | orchestrator | 2026-03-25 03:34:27.852767 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-03-25 03:34:27.852774 | orchestrator | Wednesday 25 March 2026 03:33:17 +0000 (0:00:04.028) 0:01:33.904 ******* 2026-03-25 03:34:27.852781 | orchestrator | skipping: [testbed-node-0] 2026-03-25 03:34:27.852788 | orchestrator | skipping: [testbed-node-1] 2026-03-25 03:34:27.852794 | orchestrator | skipping: [testbed-node-2] 2026-03-25 03:34:27.852800 | orchestrator | 2026-03-25 03:34:27.852807 | orchestrator | TASK [glance : Creating Glance database] *************************************** 2026-03-25 03:34:27.852813 | orchestrator | Wednesday 25 March 2026 03:33:18 +0000 (0:00:00.566) 0:01:34.471 ******* 2026-03-25 03:34:27.852819 | orchestrator | changed: [testbed-node-0] 2026-03-25 03:34:27.852825 | orchestrator | 2026-03-25 03:34:27.852831 | orchestrator | TASK 
[glance : Creating Glance database user and setting permissions] ********** 2026-03-25 03:34:27.852838 | orchestrator | Wednesday 25 March 2026 03:33:20 +0000 (0:00:02.174) 0:01:36.645 ******* 2026-03-25 03:34:27.852844 | orchestrator | changed: [testbed-node-0] 2026-03-25 03:34:27.852855 | orchestrator | 2026-03-25 03:34:27.852861 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] **************** 2026-03-25 03:34:27.852867 | orchestrator | Wednesday 25 March 2026 03:33:22 +0000 (0:00:02.196) 0:01:38.842 ******* 2026-03-25 03:34:27.852873 | orchestrator | changed: [testbed-node-0] 2026-03-25 03:34:27.852879 | orchestrator | 2026-03-25 03:34:27.852885 | orchestrator | TASK [glance : Running Glance bootstrap container] ***************************** 2026-03-25 03:34:27.852891 | orchestrator | Wednesday 25 March 2026 03:33:24 +0000 (0:00:01.911) 0:01:40.753 ******* 2026-03-25 03:34:27.852898 | orchestrator | changed: [testbed-node-0] 2026-03-25 03:34:27.852905 | orchestrator | 2026-03-25 03:34:27.852911 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] *************** 2026-03-25 03:34:27.852917 | orchestrator | Wednesday 25 March 2026 03:33:51 +0000 (0:00:26.646) 0:02:07.400 ******* 2026-03-25 03:34:27.852924 | orchestrator | changed: [testbed-node-0] 2026-03-25 03:34:27.852930 | orchestrator | 2026-03-25 03:34:27.852937 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-03-25 03:34:27.852943 | orchestrator | Wednesday 25 March 2026 03:33:53 +0000 (0:00:02.004) 0:02:09.404 ******* 2026-03-25 03:34:27.852949 | orchestrator | 2026-03-25 03:34:27.852955 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-03-25 03:34:27.852961 | orchestrator | Wednesday 25 March 2026 03:33:53 +0000 (0:00:00.076) 0:02:09.481 ******* 2026-03-25 03:34:27.852967 | orchestrator | 2026-03-25 03:34:27.852973 | orchestrator | TASK 
[glance : Flush handlers] ************************************************* 2026-03-25 03:34:27.852979 | orchestrator | Wednesday 25 March 2026 03:33:53 +0000 (0:00:00.082) 0:02:09.564 ******* 2026-03-25 03:34:27.852986 | orchestrator | 2026-03-25 03:34:27.852992 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************ 2026-03-25 03:34:27.852998 | orchestrator | Wednesday 25 March 2026 03:33:53 +0000 (0:00:00.083) 0:02:09.647 ******* 2026-03-25 03:34:27.853005 | orchestrator | changed: [testbed-node-0] 2026-03-25 03:34:27.853011 | orchestrator | changed: [testbed-node-2] 2026-03-25 03:34:27.853017 | orchestrator | changed: [testbed-node-1] 2026-03-25 03:34:27.853023 | orchestrator | 2026-03-25 03:34:27.853029 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-25 03:34:27.853037 | orchestrator | testbed-node-0 : ok=27  changed=19  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-03-25 03:34:27.853045 | orchestrator | testbed-node-1 : ok=16  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-03-25 03:34:27.853051 | orchestrator | testbed-node-2 : ok=16  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-03-25 03:34:27.853058 | orchestrator | 2026-03-25 03:34:27.853064 | orchestrator | 2026-03-25 03:34:27.853070 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-25 03:34:27.853076 | orchestrator | Wednesday 25 March 2026 03:34:27 +0000 (0:00:34.553) 0:02:44.201 ******* 2026-03-25 03:34:27.853082 | orchestrator | =============================================================================== 2026-03-25 03:34:27.853089 | orchestrator | glance : Restart glance-api container ---------------------------------- 34.55s 2026-03-25 03:34:27.853110 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 26.65s 2026-03-25 03:34:27.853116 | 
orchestrator | glance : Copying over glance-api.conf ----------------------------------- 6.32s 2026-03-25 03:34:27.853129 | orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 6.03s 2026-03-25 03:34:28.256248 | orchestrator | glance : Generating 'hostnqn' file for glance_api ----------------------- 4.85s 2026-03-25 03:34:28.256321 | orchestrator | glance : Copying over glance-cache.conf for glance_api ------------------ 4.27s 2026-03-25 03:34:28.256326 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS key ------ 4.24s 2026-03-25 03:34:28.256331 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 4.21s 2026-03-25 03:34:28.256366 | orchestrator | glance : Copying over config.json files for services -------------------- 4.19s 2026-03-25 03:34:28.256370 | orchestrator | glance : Copying over glance-haproxy-tls.cfg ---------------------------- 4.10s 2026-03-25 03:34:28.256374 | orchestrator | glance : Check glance containers ---------------------------------------- 4.03s 2026-03-25 03:34:28.256377 | orchestrator | glance : Copying over property-protections-rules.conf ------------------- 3.90s 2026-03-25 03:34:28.256381 | orchestrator | glance : Copying over glance-swift.conf for glance_api ------------------ 3.89s 2026-03-25 03:34:28.256385 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 3.88s 2026-03-25 03:34:28.256389 | orchestrator | glance : Ensuring config directories exist ------------------------------ 3.85s 2026-03-25 03:34:28.256393 | orchestrator | service-ks-register : glance | Creating users --------------------------- 3.82s 2026-03-25 03:34:28.256396 | orchestrator | glance : Copying over glance-image-import.conf -------------------------- 3.77s 2026-03-25 03:34:28.256400 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 3.71s 2026-03-25 03:34:28.256404 | orchestrator | 
glance : Creating TLS backend PEM File ---------------------------------- 3.68s 2026-03-25 03:34:28.256408 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS certificate --- 3.52s 2026-03-25 03:34:31.115639 | orchestrator | 2026-03-25 03:34:31 | INFO  | Task 3c2fbf40-cd18-4a1b-8e74-e919f5958884 (cinder) was prepared for execution. 2026-03-25 03:34:31.115760 | orchestrator | 2026-03-25 03:34:31 | INFO  | It takes a moment until task 3c2fbf40-cd18-4a1b-8e74-e919f5958884 (cinder) has been started and output is visible here. 2026-03-25 03:35:05.506333 | orchestrator | 2026-03-25 03:35:05.506428 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-25 03:35:05.506441 | orchestrator | 2026-03-25 03:35:05.506448 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-25 03:35:05.506456 | orchestrator | Wednesday 25 March 2026 03:34:35 +0000 (0:00:00.325) 0:00:00.325 ******* 2026-03-25 03:35:05.506462 | orchestrator | ok: [testbed-node-0] 2026-03-25 03:35:05.506470 | orchestrator | ok: [testbed-node-1] 2026-03-25 03:35:05.506477 | orchestrator | ok: [testbed-node-2] 2026-03-25 03:35:05.506483 | orchestrator | 2026-03-25 03:35:05.506489 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-25 03:35:05.506496 | orchestrator | Wednesday 25 March 2026 03:34:36 +0000 (0:00:00.336) 0:00:00.661 ******* 2026-03-25 03:35:05.506502 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True) 2026-03-25 03:35:05.506508 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True) 2026-03-25 03:35:05.506514 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True) 2026-03-25 03:35:05.506520 | orchestrator | 2026-03-25 03:35:05.506525 | orchestrator | PLAY [Apply role cinder] ******************************************************* 2026-03-25 03:35:05.506532 | orchestrator | 2026-03-25 
03:35:05.506539 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-03-25 03:35:05.506545 | orchestrator | Wednesday 25 March 2026 03:34:36 +0000 (0:00:00.562) 0:00:01.224 ******* 2026-03-25 03:35:05.506552 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-25 03:35:05.506559 | orchestrator | 2026-03-25 03:35:05.506566 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************ 2026-03-25 03:35:05.506573 | orchestrator | Wednesday 25 March 2026 03:34:37 +0000 (0:00:00.653) 0:00:01.878 ******* 2026-03-25 03:35:05.506581 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3)) 2026-03-25 03:35:05.506587 | orchestrator | 2026-03-25 03:35:05.506591 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] *********************** 2026-03-25 03:35:05.506595 | orchestrator | Wednesday 25 March 2026 03:34:40 +0000 (0:00:03.281) 0:00:05.159 ******* 2026-03-25 03:35:05.506601 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal) 2026-03-25 03:35:05.506627 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public) 2026-03-25 03:35:05.506637 | orchestrator | 2026-03-25 03:35:05.506643 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************ 2026-03-25 03:35:05.506649 | orchestrator | Wednesday 25 March 2026 03:34:46 +0000 (0:00:05.977) 0:00:11.137 ******* 2026-03-25 03:35:05.506655 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-25 03:35:05.506661 | orchestrator | 2026-03-25 03:35:05.506668 | orchestrator | TASK [service-ks-register : cinder | Creating users] *************************** 2026-03-25 03:35:05.506674 | orchestrator | Wednesday 25 March 2026 03:34:49 +0000 (0:00:03.051) 
0:00:14.188 ******* 2026-03-25 03:35:05.506680 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-25 03:35:05.506688 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service) 2026-03-25 03:35:05.506693 | orchestrator | 2026-03-25 03:35:05.506697 | orchestrator | TASK [service-ks-register : cinder | Creating roles] *************************** 2026-03-25 03:35:05.506701 | orchestrator | Wednesday 25 March 2026 03:34:53 +0000 (0:00:03.854) 0:00:18.043 ******* 2026-03-25 03:35:05.506705 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-25 03:35:05.506709 | orchestrator | 2026-03-25 03:35:05.506713 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] ********************** 2026-03-25 03:35:05.506717 | orchestrator | Wednesday 25 March 2026 03:34:56 +0000 (0:00:03.058) 0:00:21.101 ******* 2026-03-25 03:35:05.506721 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin) 2026-03-25 03:35:05.506725 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service) 2026-03-25 03:35:05.506729 | orchestrator | 2026-03-25 03:35:05.506733 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2026-03-25 03:35:05.506748 | orchestrator | Wednesday 25 March 2026 03:35:03 +0000 (0:00:06.790) 0:00:27.892 ******* 2026-03-25 03:35:05.506755 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': 
{'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-25 03:35:05.506775 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-25 03:35:05.506780 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 
'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-25 03:35:05.506790 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-25 03:35:05.506795 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-25 03:35:05.506802 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-25 03:35:05.506807 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-25 03:35:05.506815 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-25 03:35:11.479255 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-25 03:35:11.479373 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-25 03:35:11.479382 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-25 03:35:11.479397 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-25 03:35:11.479402 | orchestrator |
2026-03-25 03:35:11.479407 | orchestrator | TASK [cinder : include_tasks] **************************************************
2026-03-25 03:35:11.479412 | orchestrator | Wednesday 25 March 2026 03:35:05 +0000 (0:00:02.028) 0:00:29.920 *******
2026-03-25 03:35:11.479417 | orchestrator | skipping: [testbed-node-0]
2026-03-25 03:35:11.479422 | orchestrator | skipping: [testbed-node-1]
2026-03-25 03:35:11.479426 | orchestrator | skipping: [testbed-node-2]
2026-03-25 03:35:11.479429 | orchestrator |
2026-03-25 03:35:11.479433 | orchestrator | TASK [cinder : include_tasks] **************************************************
2026-03-25 03:35:11.479437 | orchestrator | Wednesday 25 March 2026 03:35:06 +0000 (0:00:00.548) 0:00:30.469 *******
2026-03-25 03:35:11.479442 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-25 03:35:11.479446 | orchestrator |
2026-03-25 03:35:11.479449 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] *************
2026-03-25 03:35:11.479453 | orchestrator | Wednesday 25 March 2026 03:35:06 +0000 (0:00:00.602) 0:00:31.072 *******
2026-03-25 03:35:11.479458 | orchestrator | changed: [testbed-node-0] => (item=cinder-volume)
2026-03-25 03:35:11.479467 | orchestrator | changed: [testbed-node-1] => (item=cinder-volume)
2026-03-25 03:35:11.479471 | orchestrator | changed: [testbed-node-2] => (item=cinder-volume)
2026-03-25 03:35:11.479475 | orchestrator | changed: [testbed-node-0] => (item=cinder-backup)
2026-03-25 03:35:11.479478 | orchestrator | changed: [testbed-node-1] => (item=cinder-backup)
2026-03-25 03:35:11.479482 | orchestrator | changed: [testbed-node-2] => (item=cinder-backup)
2026-03-25 03:35:11.479486 | orchestrator |
2026-03-25 03:35:11.479490 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************
2026-03-25 03:35:11.479493 | orchestrator | Wednesday 25 March 2026 03:35:08 +0000 (0:00:01.719) 0:00:32.791 *******
2026-03-25 03:35:11.479509 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2026-03-25 03:35:11.479515 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2026-03-25 03:35:11.479522 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2026-03-25 03:35:11.479527 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2026-03-25 03:35:11.479538 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2026-03-25 03:35:22.428755 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2026-03-25 03:35:22.428863 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2026-03-25 03:35:22.428891 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2026-03-25 03:35:22.428898 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2026-03-25 03:35:22.428925 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2026-03-25 03:35:22.428949 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2026-03-25 03:35:22.428956 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2026-03-25 03:35:22.428963 | orchestrator |
2026-03-25 03:35:22.428970 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] *****************
2026-03-25 03:35:22.428979 | orchestrator | Wednesday 25 March 2026 03:35:11 +0000 (0:00:03.369) 0:00:36.161 *******
2026-03-25 03:35:22.428985 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True})
2026-03-25 03:35:22.428993 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True})
2026-03-25 03:35:22.428999 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True})
2026-03-25 03:35:22.429005 | orchestrator |
2026-03-25 03:35:22.429012 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] *****************
2026-03-25 03:35:22.429018 | orchestrator | Wednesday 25 March 2026 03:35:13 +0000 (0:00:01.565) 0:00:37.726 *******
2026-03-25 03:35:22.429025 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder.keyring)
2026-03-25 03:35:22.429031 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder.keyring)
2026-03-25 03:35:22.429042 | orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder.keyring)
2026-03-25 03:35:22.429048 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder-backup.keyring)
2026-03-25 03:35:22.429097 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder-backup.keyring)
2026-03-25 03:35:22.429104 | orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder-backup.keyring)
2026-03-25 03:35:22.429110 | orchestrator |
2026-03-25 03:35:22.429116 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] *****
2026-03-25 03:35:22.429132 | orchestrator | Wednesday 25 March 2026 03:35:16 +0000 (0:00:02.672) 0:00:40.398 *******
2026-03-25 03:35:22.429143 | orchestrator | ok: [testbed-node-0] => (item=cinder-volume)
2026-03-25 03:35:22.429153 | orchestrator | ok: [testbed-node-1] => (item=cinder-volume)
2026-03-25 03:35:22.429162 | orchestrator | ok: [testbed-node-2] => (item=cinder-volume)
2026-03-25 03:35:22.429172 | orchestrator | ok: [testbed-node-0] => (item=cinder-backup)
2026-03-25 03:35:22.429188 | orchestrator | ok: [testbed-node-1] => (item=cinder-backup)
2026-03-25 03:35:22.429200 | orchestrator | ok: [testbed-node-2] => (item=cinder-backup)
2026-03-25 03:35:22.429211 | orchestrator |
2026-03-25 03:35:22.429222 | orchestrator | TASK [cinder : Check if policies shall be overwritten] *************************
2026-03-25 03:35:22.429232 | orchestrator | Wednesday 25 March 2026 03:35:17 +0000 (0:00:01.045) 0:00:41.444 *******
2026-03-25 03:35:22.429242 | orchestrator | skipping: [testbed-node-0]
2026-03-25 03:35:22.429253 | orchestrator |
2026-03-25 03:35:22.429326 | orchestrator | TASK [cinder : Set cinder policy file] *****************************************
2026-03-25 03:35:22.429338 | orchestrator | Wednesday 25 March 2026 03:35:17 +0000 (0:00:00.134) 0:00:41.578 *******
2026-03-25 03:35:22.429346 | orchestrator | skipping: [testbed-node-0]
2026-03-25 03:35:22.429353 | orchestrator | skipping: [testbed-node-1]
2026-03-25 03:35:22.429360 | orchestrator | skipping: [testbed-node-2]
2026-03-25 03:35:22.429367 | orchestrator |
2026-03-25 03:35:22.429374 | orchestrator | TASK [cinder : include_tasks] **************************************************
2026-03-25 03:35:22.429380 | orchestrator | Wednesday 25 March 2026 03:35:17 +0000 (0:00:00.599) 0:00:42.178 *******
2026-03-25 03:35:22.429388 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-25 03:35:22.429396 | orchestrator |
2026-03-25 03:35:22.429402 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] *********
2026-03-25 03:35:22.429408 | orchestrator | Wednesday 25 March 2026 03:35:18 +0000 (0:00:00.656) 0:00:42.834 *******
2026-03-25 03:35:22.429424 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-25 03:35:23.426954 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-25 03:35:23.427167 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-25 03:35:23.427222 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-25 03:35:23.427237 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-25 03:35:23.427249 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-25 03:35:23.427282 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-25 03:35:23.427295 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-25 03:35:23.427320 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-25 03:35:23.427332 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-25 03:35:23.427344 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-25 03:35:23.427355 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-25 03:35:23.427367 | orchestrator |
2026-03-25 03:35:23.427380 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] ***
2026-03-25 03:35:23.427392 | orchestrator | Wednesday 25 March 2026 03:35:22 +0000 (0:00:04.031) 0:00:46.865 *******
2026-03-25 03:35:23.427413 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-25 03:35:23.547160 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-25 03:35:23.547323 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-25 03:35:23.547354 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-25 03:35:23.547377 | orchestrator | skipping: [testbed-node-0]
2026-03-25 03:35:23.547401 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-25 03:35:23.547423 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-25 03:35:23.547472 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-25 03:35:23.547539 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-25 03:35:23.547560 | orchestrator | skipping: [testbed-node-1]
2026-03-25 03:35:23.547580 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-25 03:35:23.547600 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-25 03:35:23.547618 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-25 03:35:23.547636 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-25 03:35:23.547666 | orchestrator | skipping: [testbed-node-2]
2026-03-25 03:35:23.547685 | orchestrator |
2026-03-25 03:35:23.547704 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ******
2026-03-25 03:35:23.547739 | orchestrator | Wednesday 25 March 2026 03:35:23 +0000 (0:00:01.002) 0:00:47.867 *******
2026-03-25 03:35:24.173677 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-25 03:35:24.173757 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-25 03:35:24.173766 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-25 03:35:24.173772 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-25 03:35:24.173776 | orchestrator | skipping: [testbed-node-0]
2026-03-25 03:35:24.173782 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-25 03:35:24.173822 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-25 03:35:24.173832 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-25 03:35:24.173836 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group':
'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-25 03:35:24.173840 | orchestrator | skipping: [testbed-node-1] 2026-03-25 03:35:24.173844 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-25 03:35:24.173848 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-25 03:35:24.173859 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-25 03:35:28.660552 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-25 03:35:28.660630 | orchestrator | skipping: [testbed-node-2] 2026-03-25 03:35:28.660637 | orchestrator | 2026-03-25 03:35:28.660643 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 
2026-03-25 03:35:28.660649 | orchestrator | Wednesday 25 March 2026 03:35:24 +0000 (0:00:00.968) 0:00:48.835 ******* 2026-03-25 03:35:28.660654 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-25 03:35:28.660660 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-25 
03:35:28.660664 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-25 03:35:28.660695 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-25 03:35:28.660707 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-25 03:35:28.660711 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-25 03:35:28.660716 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-25 03:35:28.660721 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-25 03:35:28.660729 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-25 03:35:28.660737 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-25 03:35:42.378495 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-25 03:35:42.378581 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-25 03:35:42.378589 | orchestrator | 2026-03-25 03:35:42.378595 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] ********************************** 2026-03-25 03:35:42.378601 | orchestrator | Wednesday 25 March 2026 03:35:28 +0000 (0:00:04.248) 0:00:53.083 ******* 2026-03-25 03:35:42.378605 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-03-25 03:35:42.378610 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-03-25 03:35:42.378614 | 
orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-03-25 03:35:42.378618 | orchestrator | 2026-03-25 03:35:42.378622 | orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2026-03-25 03:35:42.378626 | orchestrator | Wednesday 25 March 2026 03:35:30 +0000 (0:00:01.931) 0:00:55.015 ******* 2026-03-25 03:35:42.378631 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-25 03:35:42.378652 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-25 03:35:42.378673 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-25 03:35:42.378678 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-25 03:35:42.378683 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 
'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-25 03:35:42.378687 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-25 03:35:42.378695 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-25 03:35:42.378700 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': 
{'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-25 03:35:42.378710 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-25 03:35:44.927392 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-25 03:35:44.927507 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-25 03:35:44.927551 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-25 03:35:44.927566 | orchestrator | 2026-03-25 03:35:44.927590 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2026-03-25 03:35:44.927604 | orchestrator | Wednesday 25 March 2026 03:35:42 +0000 (0:00:11.776) 0:01:06.791 ******* 2026-03-25 03:35:44.927615 | orchestrator | changed: [testbed-node-0] 
2026-03-25 03:35:44.927627 | orchestrator | changed: [testbed-node-1] 2026-03-25 03:35:44.927638 | orchestrator | changed: [testbed-node-2] 2026-03-25 03:35:44.927649 | orchestrator | 2026-03-25 03:35:44.927662 | orchestrator | TASK [cinder : Copying over existing policy file] ****************************** 2026-03-25 03:35:44.927673 | orchestrator | Wednesday 25 March 2026 03:35:44 +0000 (0:00:01.562) 0:01:08.354 ******* 2026-03-25 03:35:44.927685 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-25 03:35:44.927714 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': 
'30'}}})  2026-03-25 03:35:44.927749 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-25 03:35:44.927762 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-25 03:35:44.927784 | orchestrator | skipping: [testbed-node-0] 2026-03-25 03:35:44.927796 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-25 03:35:44.927807 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-25 03:35:44.927818 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-25 03:35:44.927845 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-25 03:35:48.521678 | orchestrator | skipping: [testbed-node-1] 2026-03-25 03:35:48.521786 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-25 03:35:48.521841 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-25 03:35:48.521854 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-25 03:35:48.521865 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-25 03:35:48.521875 | orchestrator | skipping: [testbed-node-2] 2026-03-25 03:35:48.521885 | orchestrator | 2026-03-25 
03:35:48.521894 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2026-03-25 03:35:48.521905 | orchestrator | Wednesday 25 March 2026 03:35:45 +0000 (0:00:00.994) 0:01:09.349 ******* 2026-03-25 03:35:48.521914 | orchestrator | skipping: [testbed-node-0] 2026-03-25 03:35:48.521923 | orchestrator | skipping: [testbed-node-1] 2026-03-25 03:35:48.521935 | orchestrator | skipping: [testbed-node-2] 2026-03-25 03:35:48.521944 | orchestrator | 2026-03-25 03:35:48.521954 | orchestrator | TASK [cinder : Check cinder containers] **************************************** 2026-03-25 03:35:48.521963 | orchestrator | Wednesday 25 March 2026 03:35:45 +0000 (0:00:00.662) 0:01:10.011 ******* 2026-03-25 03:35:48.522008 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-25 03:35:48.522117 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-25 03:35:48.522131 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-25 03:35:48.522141 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-25 03:35:48.522152 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-25 03:35:48.522167 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-25 03:35:48.522197 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-25 03:37:14.595361 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-25 03:37:14.595474 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-25 03:37:14.595487 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-25 03:37:14.595495 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-25 03:37:14.595516 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': 
'30'}}}) 2026-03-25 03:37:14.595547 | orchestrator | 2026-03-25 03:37:14.595553 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-03-25 03:37:14.595559 | orchestrator | Wednesday 25 March 2026 03:35:48 +0000 (0:00:02.933) 0:01:12.944 ******* 2026-03-25 03:37:14.595563 | orchestrator | skipping: [testbed-node-0] 2026-03-25 03:37:14.595568 | orchestrator | skipping: [testbed-node-1] 2026-03-25 03:37:14.595571 | orchestrator | skipping: [testbed-node-2] 2026-03-25 03:37:14.595575 | orchestrator | 2026-03-25 03:37:14.595579 | orchestrator | TASK [cinder : Creating Cinder database] *************************************** 2026-03-25 03:37:14.595583 | orchestrator | Wednesday 25 March 2026 03:35:48 +0000 (0:00:00.339) 0:01:13.284 ******* 2026-03-25 03:37:14.595587 | orchestrator | changed: [testbed-node-0] 2026-03-25 03:37:14.595591 | orchestrator | 2026-03-25 03:37:14.595607 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] ********** 2026-03-25 03:37:14.595611 | orchestrator | Wednesday 25 March 2026 03:35:51 +0000 (0:00:02.080) 0:01:15.364 ******* 2026-03-25 03:37:14.595615 | orchestrator | changed: [testbed-node-0] 2026-03-25 03:37:14.595619 | orchestrator | 2026-03-25 03:37:14.595622 | orchestrator | TASK [cinder : Running Cinder bootstrap container] ***************************** 2026-03-25 03:37:14.595626 | orchestrator | Wednesday 25 March 2026 03:35:53 +0000 (0:00:02.148) 0:01:17.513 ******* 2026-03-25 03:37:14.595630 | orchestrator | changed: [testbed-node-0] 2026-03-25 03:37:14.595634 | orchestrator | 2026-03-25 03:37:14.595637 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-03-25 03:37:14.595641 | orchestrator | Wednesday 25 March 2026 03:36:11 +0000 (0:00:18.272) 0:01:35.785 ******* 2026-03-25 03:37:14.595645 | orchestrator | 2026-03-25 03:37:14.595649 | orchestrator | TASK [cinder : Flush handlers] 
************************************************* 2026-03-25 03:37:14.595653 | orchestrator | Wednesday 25 March 2026 03:36:11 +0000 (0:00:00.081) 0:01:35.867 ******* 2026-03-25 03:37:14.595656 | orchestrator | 2026-03-25 03:37:14.595660 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-03-25 03:37:14.595664 | orchestrator | Wednesday 25 March 2026 03:36:11 +0000 (0:00:00.078) 0:01:35.945 ******* 2026-03-25 03:37:14.595668 | orchestrator | 2026-03-25 03:37:14.595672 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************ 2026-03-25 03:37:14.595675 | orchestrator | Wednesday 25 March 2026 03:36:11 +0000 (0:00:00.080) 0:01:36.026 ******* 2026-03-25 03:37:14.595679 | orchestrator | changed: [testbed-node-0] 2026-03-25 03:37:14.595683 | orchestrator | changed: [testbed-node-2] 2026-03-25 03:37:14.595687 | orchestrator | changed: [testbed-node-1] 2026-03-25 03:37:14.595690 | orchestrator | 2026-03-25 03:37:14.595694 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ****************** 2026-03-25 03:37:14.595698 | orchestrator | Wednesday 25 March 2026 03:36:35 +0000 (0:00:24.240) 0:02:00.266 ******* 2026-03-25 03:37:14.595702 | orchestrator | changed: [testbed-node-0] 2026-03-25 03:37:14.595705 | orchestrator | changed: [testbed-node-2] 2026-03-25 03:37:14.595709 | orchestrator | changed: [testbed-node-1] 2026-03-25 03:37:14.595713 | orchestrator | 2026-03-25 03:37:14.595717 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] ********************* 2026-03-25 03:37:14.595720 | orchestrator | Wednesday 25 March 2026 03:36:46 +0000 (0:00:10.614) 0:02:10.881 ******* 2026-03-25 03:37:14.595724 | orchestrator | changed: [testbed-node-0] 2026-03-25 03:37:14.595728 | orchestrator | changed: [testbed-node-2] 2026-03-25 03:37:14.595732 | orchestrator | changed: [testbed-node-1] 2026-03-25 03:37:14.595735 | orchestrator | 2026-03-25 
03:37:14.595739 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] ********************* 2026-03-25 03:37:14.595747 | orchestrator | Wednesday 25 March 2026 03:37:08 +0000 (0:00:21.695) 0:02:32.576 ******* 2026-03-25 03:37:14.595751 | orchestrator | changed: [testbed-node-0] 2026-03-25 03:37:14.595755 | orchestrator | changed: [testbed-node-1] 2026-03-25 03:37:14.595759 | orchestrator | changed: [testbed-node-2] 2026-03-25 03:37:14.595763 | orchestrator | 2026-03-25 03:37:14.595767 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] *** 2026-03-25 03:37:14.595771 | orchestrator | Wednesday 25 March 2026 03:37:14 +0000 (0:00:06.025) 0:02:38.602 ******* 2026-03-25 03:37:14.595775 | orchestrator | skipping: [testbed-node-0] 2026-03-25 03:37:14.595779 | orchestrator | 2026-03-25 03:37:14.595783 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-25 03:37:14.595788 | orchestrator | testbed-node-0 : ok=30  changed=22  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-03-25 03:37:14.595792 | orchestrator | testbed-node-1 : ok=21  changed=15  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-25 03:37:14.595796 | orchestrator | testbed-node-2 : ok=21  changed=15  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-25 03:37:14.595800 | orchestrator | 2026-03-25 03:37:14.595804 | orchestrator | 2026-03-25 03:37:14.595808 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-25 03:37:14.595812 | orchestrator | Wednesday 25 March 2026 03:37:14 +0000 (0:00:00.300) 0:02:38.902 ******* 2026-03-25 03:37:14.595815 | orchestrator | =============================================================================== 2026-03-25 03:37:14.595819 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 24.24s 2026-03-25 03:37:14.595826 | orchestrator | cinder 
: Restart cinder-volume container ------------------------------- 21.70s 2026-03-25 03:37:14.595830 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 18.27s 2026-03-25 03:37:14.595834 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 11.78s 2026-03-25 03:37:14.595838 | orchestrator | cinder : Restart cinder-scheduler container ---------------------------- 10.61s 2026-03-25 03:37:14.595842 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 6.79s 2026-03-25 03:37:14.595845 | orchestrator | cinder : Restart cinder-backup container -------------------------------- 6.03s 2026-03-25 03:37:14.595849 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 5.98s 2026-03-25 03:37:14.595853 | orchestrator | cinder : Copying over config.json files for services -------------------- 4.25s 2026-03-25 03:37:14.595856 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 4.03s 2026-03-25 03:37:14.595860 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 3.85s 2026-03-25 03:37:14.595864 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 3.37s 2026-03-25 03:37:14.595868 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 3.28s 2026-03-25 03:37:14.595871 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 3.06s 2026-03-25 03:37:14.595879 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 3.05s 2026-03-25 03:37:15.036307 | orchestrator | cinder : Check cinder containers ---------------------------------------- 2.93s 2026-03-25 03:37:15.036395 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 2.67s 2026-03-25 03:37:15.036405 | orchestrator | cinder : Creating 
Cinder database user and setting permissions ---------- 2.15s 2026-03-25 03:37:15.036413 | orchestrator | cinder : Creating Cinder database --------------------------------------- 2.08s 2026-03-25 03:37:15.036421 | orchestrator | cinder : Ensuring config directories exist ------------------------------ 2.03s 2026-03-25 03:37:17.718313 | orchestrator | 2026-03-25 03:37:17 | INFO  | Task 7a0be810-a3b9-4cd6-8beb-e91d4c1c111a (barbican) was prepared for execution. 2026-03-25 03:37:17.718408 | orchestrator | 2026-03-25 03:37:17 | INFO  | It takes a moment until task 7a0be810-a3b9-4cd6-8beb-e91d4c1c111a (barbican) has been started and output is visible here. 2026-03-25 03:38:00.243310 | orchestrator | 2026-03-25 03:38:00.243428 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-25 03:38:00.243444 | orchestrator | 2026-03-25 03:38:00.243459 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-25 03:38:00.243476 | orchestrator | Wednesday 25 March 2026 03:37:22 +0000 (0:00:00.303) 0:00:00.303 ******* 2026-03-25 03:38:00.243491 | orchestrator | ok: [testbed-node-0] 2026-03-25 03:38:00.243507 | orchestrator | ok: [testbed-node-1] 2026-03-25 03:38:00.243521 | orchestrator | ok: [testbed-node-2] 2026-03-25 03:38:00.243535 | orchestrator | 2026-03-25 03:38:00.243551 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-25 03:38:00.243567 | orchestrator | Wednesday 25 March 2026 03:37:23 +0000 (0:00:00.340) 0:00:00.644 ******* 2026-03-25 03:38:00.243585 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True) 2026-03-25 03:38:00.243601 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True) 2026-03-25 03:38:00.243611 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True) 2026-03-25 03:38:00.243620 | orchestrator | 2026-03-25 03:38:00.243629 | orchestrator | PLAY [Apply role barbican] 
***************************************************** 2026-03-25 03:38:00.243638 | orchestrator | 2026-03-25 03:38:00.243654 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-03-25 03:38:00.243668 | orchestrator | Wednesday 25 March 2026 03:37:23 +0000 (0:00:00.527) 0:00:01.171 ******* 2026-03-25 03:38:00.243683 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-25 03:38:00.243699 | orchestrator | 2026-03-25 03:38:00.243714 | orchestrator | TASK [service-ks-register : barbican | Creating services] ********************** 2026-03-25 03:38:00.243729 | orchestrator | Wednesday 25 March 2026 03:37:24 +0000 (0:00:00.612) 0:00:01.784 ******* 2026-03-25 03:38:00.243747 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager)) 2026-03-25 03:38:00.243763 | orchestrator | 2026-03-25 03:38:00.243778 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] ********************* 2026-03-25 03:38:00.243792 | orchestrator | Wednesday 25 March 2026 03:37:27 +0000 (0:00:02.896) 0:00:04.681 ******* 2026-03-25 03:38:00.243807 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal) 2026-03-25 03:38:00.243822 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public) 2026-03-25 03:38:00.243836 | orchestrator | 2026-03-25 03:38:00.243851 | orchestrator | TASK [service-ks-register : barbican | Creating projects] ********************** 2026-03-25 03:38:00.243865 | orchestrator | Wednesday 25 March 2026 03:37:33 +0000 (0:00:06.058) 0:00:10.739 ******* 2026-03-25 03:38:00.243879 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-25 03:38:00.243894 | orchestrator | 2026-03-25 03:38:00.243908 | orchestrator | TASK [service-ks-register : barbican | Creating users] ************************* 2026-03-25 
03:38:00.243924 | orchestrator | Wednesday 25 March 2026 03:37:36 +0000 (0:00:03.014) 0:00:13.754 ******* 2026-03-25 03:38:00.243939 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-25 03:38:00.243982 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service) 2026-03-25 03:38:00.243999 | orchestrator | 2026-03-25 03:38:00.244015 | orchestrator | TASK [service-ks-register : barbican | Creating roles] ************************* 2026-03-25 03:38:00.244052 | orchestrator | Wednesday 25 March 2026 03:37:40 +0000 (0:00:03.878) 0:00:17.633 ******* 2026-03-25 03:38:00.244068 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-25 03:38:00.244083 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin) 2026-03-25 03:38:00.244098 | orchestrator | changed: [testbed-node-0] => (item=creator) 2026-03-25 03:38:00.244114 | orchestrator | changed: [testbed-node-0] => (item=observer) 2026-03-25 03:38:00.244130 | orchestrator | changed: [testbed-node-0] => (item=audit) 2026-03-25 03:38:00.244166 | orchestrator | 2026-03-25 03:38:00.244178 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ******************** 2026-03-25 03:38:00.244188 | orchestrator | Wednesday 25 March 2026 03:37:54 +0000 (0:00:14.906) 0:00:32.539 ******* 2026-03-25 03:38:00.244198 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin) 2026-03-25 03:38:00.244207 | orchestrator | 2026-03-25 03:38:00.244216 | orchestrator | TASK [barbican : Ensuring config directories exist] **************************** 2026-03-25 03:38:00.244224 | orchestrator | Wednesday 25 March 2026 03:37:58 +0000 (0:00:03.688) 0:00:36.227 ******* 2026-03-25 03:38:00.244237 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-25 03:38:00.244271 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-25 03:38:00.244281 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-25 03:38:00.244297 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-25 03:38:00.244317 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-25 03:38:00.244327 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-25 03:38:00.244345 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-25 03:38:06.134544 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-25 03:38:06.134672 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-25 03:38:06.134690 | orchestrator | 2026-03-25 03:38:06.134703 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ******************** 2026-03-25 03:38:06.134716 | orchestrator | Wednesday 25 March 2026 03:38:00 +0000 (0:00:01.583) 0:00:37.812 ******* 2026-03-25 03:38:06.134727 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals) 2026-03-25 03:38:06.134737 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals) 2026-03-25 03:38:06.134747 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals) 2026-03-25 03:38:06.134757 | orchestrator | 2026-03-25 03:38:06.134767 | orchestrator | TASK [barbican : Check if policies shall be overwritten] *********************** 2026-03-25 03:38:06.134805 | orchestrator | Wednesday 25 March 2026 03:38:01 +0000 (0:00:01.197) 0:00:39.009 ******* 2026-03-25 03:38:06.134818 | orchestrator | skipping: [testbed-node-0] 2026-03-25 03:38:06.134829 | orchestrator | 2026-03-25 03:38:06.134839 | orchestrator | TASK [barbican : Set barbican policy file] ************************************* 2026-03-25 03:38:06.134850 | orchestrator | Wednesday 25 March 2026 03:38:01 +0000 (0:00:00.371) 0:00:39.381 ******* 2026-03-25 03:38:06.134861 | orchestrator | 
skipping: [testbed-node-0] 2026-03-25 03:38:06.134871 | orchestrator | skipping: [testbed-node-1] 2026-03-25 03:38:06.134880 | orchestrator | skipping: [testbed-node-2] 2026-03-25 03:38:06.134890 | orchestrator | 2026-03-25 03:38:06.134918 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-03-25 03:38:06.134931 | orchestrator | Wednesday 25 March 2026 03:38:02 +0000 (0:00:00.332) 0:00:39.713 ******* 2026-03-25 03:38:06.135041 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-25 03:38:06.135055 | orchestrator | 2026-03-25 03:38:06.135064 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] ******* 2026-03-25 03:38:06.135076 | orchestrator | Wednesday 25 March 2026 03:38:02 +0000 (0:00:00.579) 0:00:40.293 ******* 2026-03-25 03:38:06.135089 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-25 03:38:06.135129 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': 
{'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-25 03:38:06.135141 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-25 03:38:06.135166 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 
'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-25 03:38:06.135189 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-25 03:38:06.135201 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-25 03:38:06.135213 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-25 03:38:06.135233 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-25 03:38:07.697700 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-25 03:38:07.697819 | orchestrator | 2026-03-25 03:38:07.697832 | orchestrator | TASK [service-cert-copy : barbican | Copying over 
backend internal TLS certificate] *** 2026-03-25 03:38:07.697844 | orchestrator | Wednesday 25 March 2026 03:38:06 +0000 (0:00:03.403) 0:00:43.697 ******* 2026-03-25 03:38:07.697869 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-25 03:38:07.697880 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-25 03:38:07.697891 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 
'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-25 03:38:07.697900 | orchestrator | skipping: [testbed-node-0] 2026-03-25 03:38:07.697911 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-25 03:38:07.697962 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-25 03:38:07.697979 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-25 03:38:07.697989 | orchestrator | skipping: [testbed-node-1] 2026-03-25 03:38:07.698002 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-25 03:38:07.698012 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-25 03:38:07.698125 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-25 03:38:07.698134 | orchestrator | skipping: [testbed-node-2] 2026-03-25 03:38:07.698144 | orchestrator | 2026-03-25 03:38:07.698153 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2026-03-25 03:38:07.698163 | orchestrator | Wednesday 25 March 2026 03:38:06 +0000 (0:00:00.660) 0:00:44.358 ******* 2026-03-25 03:38:07.698182 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-25 03:38:11.159337 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-25 03:38:11.159460 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-25 
03:38:11.159471 | orchestrator | skipping: [testbed-node-0] 2026-03-25 03:38:11.159482 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-25 03:38:11.159491 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-25 03:38:11.159527 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-25 03:38:11.159554 | orchestrator | skipping: [testbed-node-1] 2026-03-25 03:38:11.159579 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-25 03:38:11.159587 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-25 03:38:11.159598 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-25 03:38:11.159606 | orchestrator | skipping: [testbed-node-2] 2026-03-25 03:38:11.159613 | orchestrator | 2026-03-25 03:38:11.159621 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 2026-03-25 03:38:11.159630 | orchestrator | Wednesday 25 March 2026 03:38:07 +0000 (0:00:00.915) 0:00:45.274 ******* 2026-03-25 03:38:11.159637 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-25 03:38:11.159647 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-25 03:38:11.159665 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-25 03:38:21.158514 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-25 03:38:21.158640 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-25 03:38:21.158663 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-25 03:38:21.158679 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-25 03:38:21.158728 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-25 03:38:21.158744 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-25 03:38:21.158760 | orchestrator | 2026-03-25 03:38:21.158777 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ******************************** 2026-03-25 03:38:21.158794 | orchestrator | Wednesday 25 March 2026 03:38:11 +0000 (0:00:03.455) 0:00:48.729 ******* 2026-03-25 03:38:21.158810 | orchestrator | changed: [testbed-node-0] 2026-03-25 03:38:21.158826 | orchestrator | changed: [testbed-node-1] 2026-03-25 03:38:21.158841 | orchestrator | changed: [testbed-node-2] 2026-03-25 03:38:21.158857 | orchestrator | 2026-03-25 03:38:21.158893 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] ********** 2026-03-25 03:38:21.158909 | orchestrator | Wednesday 25 March 2026 03:38:12 +0000 (0:00:01.569) 0:00:50.299 ******* 2026-03-25 03:38:21.158924 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-25 03:38:21.158970 | orchestrator | 2026-03-25 03:38:21.158986 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] ************************** 2026-03-25 03:38:21.159000 | orchestrator | Wednesday 25 March 2026 03:38:13 +0000 (0:00:01.031) 0:00:51.330 ******* 2026-03-25 03:38:21.159014 | orchestrator | skipping: [testbed-node-0] 2026-03-25 03:38:21.159028 | orchestrator | skipping: [testbed-node-1] 2026-03-25 03:38:21.159043 | orchestrator | skipping: [testbed-node-2] 2026-03-25 03:38:21.159057 | orchestrator | 2026-03-25 03:38:21.159072 | orchestrator | TASK [barbican : Copying over barbican.conf] *********************************** 2026-03-25 03:38:21.159087 | orchestrator | Wednesday 25 March 2026 03:38:14 +0000 (0:00:00.628) 0:00:51.958 ******* 2026-03-25 03:38:21.159226 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-25 03:38:21.159258 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-25 03:38:21.159290 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-25 03:38:21.159320 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-25 03:38:22.171480 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-25 03:38:22.171559 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-25 03:38:22.171567 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-25 03:38:22.171587 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-25 03:38:22.171591 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-25 03:38:22.171595 | orchestrator | 2026-03-25 03:38:22.171600 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2026-03-25 03:38:22.171606 | orchestrator | Wednesday 25 March 2026 03:38:21 +0000 (0:00:06.770) 0:00:58.729 ******* 2026-03-25 03:38:22.171621 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-25 03:38:22.171629 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-25 03:38:22.171634 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-25 03:38:22.171647 | orchestrator | skipping: [testbed-node-0] 2026-03-25 03:38:22.171652 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-25 03:38:22.171656 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-25 03:38:22.171660 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-25 03:38:22.171664 | orchestrator | skipping: [testbed-node-1] 2026-03-25 03:38:22.171673 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 
'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-25 03:38:24.593366 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-25 03:38:24.593499 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-25 03:38:24.593516 | orchestrator | skipping: [testbed-node-2] 2026-03-25 03:38:24.593529 | orchestrator | 2026-03-25 03:38:24.593540 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2026-03-25 03:38:24.593552 | orchestrator | Wednesday 25 March 2026 03:38:22 +0000 (0:00:01.010) 0:00:59.739 ******* 2026-03-25 03:38:24.593563 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-25 03:38:24.593575 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-25 03:38:24.593615 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-25 03:38:24.593627 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-25 03:38:24.593648 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-25 03:38:24.593658 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-25 03:38:24.593668 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-25 03:38:24.593678 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-25 03:38:24.593688 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-25 03:38:24.593697 | orchestrator | 2026-03-25 03:38:24.593708 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-03-25 03:38:24.593732 | orchestrator | Wednesday 25 March 2026 03:38:24 +0000 (0:00:02.421) 0:01:02.161 ******* 2026-03-25 03:39:04.918003 | orchestrator | skipping: [testbed-node-0] 2026-03-25 03:39:04.918336 | orchestrator | skipping: [testbed-node-1] 2026-03-25 
03:39:04.918367 | orchestrator | skipping: [testbed-node-2] 2026-03-25 03:39:04.918389 | orchestrator | 2026-03-25 03:39:04.918410 | orchestrator | TASK [barbican : Creating barbican database] *********************************** 2026-03-25 03:39:04.918433 | orchestrator | Wednesday 25 March 2026 03:38:24 +0000 (0:00:00.344) 0:01:02.505 ******* 2026-03-25 03:39:04.918447 | orchestrator | changed: [testbed-node-0] 2026-03-25 03:39:04.918460 | orchestrator | 2026-03-25 03:39:04.918472 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ****** 2026-03-25 03:39:04.918483 | orchestrator | Wednesday 25 March 2026 03:38:26 +0000 (0:00:01.996) 0:01:04.502 ******* 2026-03-25 03:39:04.918494 | orchestrator | changed: [testbed-node-0] 2026-03-25 03:39:04.918505 | orchestrator | 2026-03-25 03:39:04.918516 | orchestrator | TASK [barbican : Running barbican bootstrap container] ************************* 2026-03-25 03:39:04.918528 | orchestrator | Wednesday 25 March 2026 03:38:29 +0000 (0:00:02.111) 0:01:06.613 ******* 2026-03-25 03:39:04.918539 | orchestrator | changed: [testbed-node-0] 2026-03-25 03:39:04.918550 | orchestrator | 2026-03-25 03:39:04.918561 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-03-25 03:39:04.918572 | orchestrator | Wednesday 25 March 2026 03:38:41 +0000 (0:00:12.171) 0:01:18.785 ******* 2026-03-25 03:39:04.918583 | orchestrator | 2026-03-25 03:39:04.918595 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-03-25 03:39:04.918606 | orchestrator | Wednesday 25 March 2026 03:38:41 +0000 (0:00:00.094) 0:01:18.879 ******* 2026-03-25 03:39:04.918617 | orchestrator | 2026-03-25 03:39:04.918628 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-03-25 03:39:04.918640 | orchestrator | Wednesday 25 March 2026 03:38:41 +0000 (0:00:00.081) 0:01:18.961 ******* 2026-03-25 
03:39:04.918651 | orchestrator | 2026-03-25 03:39:04.918661 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ******************** 2026-03-25 03:39:04.918672 | orchestrator | Wednesday 25 March 2026 03:38:41 +0000 (0:00:00.083) 0:01:19.044 ******* 2026-03-25 03:39:04.918684 | orchestrator | changed: [testbed-node-2] 2026-03-25 03:39:04.918695 | orchestrator | changed: [testbed-node-1] 2026-03-25 03:39:04.918706 | orchestrator | changed: [testbed-node-0] 2026-03-25 03:39:04.918717 | orchestrator | 2026-03-25 03:39:04.918728 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ****** 2026-03-25 03:39:04.918739 | orchestrator | Wednesday 25 March 2026 03:38:49 +0000 (0:00:08.008) 0:01:27.053 ******* 2026-03-25 03:39:04.918749 | orchestrator | changed: [testbed-node-1] 2026-03-25 03:39:04.918758 | orchestrator | changed: [testbed-node-0] 2026-03-25 03:39:04.918768 | orchestrator | changed: [testbed-node-2] 2026-03-25 03:39:04.918778 | orchestrator | 2026-03-25 03:39:04.918788 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] ***************** 2026-03-25 03:39:04.918797 | orchestrator | Wednesday 25 March 2026 03:38:59 +0000 (0:00:09.664) 0:01:36.718 ******* 2026-03-25 03:39:04.918807 | orchestrator | changed: [testbed-node-0] 2026-03-25 03:39:04.918817 | orchestrator | changed: [testbed-node-1] 2026-03-25 03:39:04.918827 | orchestrator | changed: [testbed-node-2] 2026-03-25 03:39:04.918836 | orchestrator | 2026-03-25 03:39:04.918846 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-25 03:39:04.918857 | orchestrator | testbed-node-0 : ok=24  changed=18  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-25 03:39:04.918869 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-25 03:39:04.918879 | orchestrator | testbed-node-2 : ok=14  changed=10  
unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-25 03:39:04.918889 | orchestrator | 2026-03-25 03:39:04.918923 | orchestrator | 2026-03-25 03:39:04.918934 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-25 03:39:04.918953 | orchestrator | Wednesday 25 March 2026 03:39:04 +0000 (0:00:05.258) 0:01:41.977 ******* 2026-03-25 03:39:04.918963 | orchestrator | =============================================================================== 2026-03-25 03:39:04.918973 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 14.91s 2026-03-25 03:39:04.918983 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 12.17s 2026-03-25 03:39:04.918992 | orchestrator | barbican : Restart barbican-keystone-listener container ----------------- 9.66s 2026-03-25 03:39:04.919002 | orchestrator | barbican : Restart barbican-api container ------------------------------- 8.01s 2026-03-25 03:39:04.919012 | orchestrator | barbican : Copying over barbican.conf ----------------------------------- 6.77s 2026-03-25 03:39:04.919021 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 6.06s 2026-03-25 03:39:04.919031 | orchestrator | barbican : Restart barbican-worker container ---------------------------- 5.26s 2026-03-25 03:39:04.919041 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 3.88s 2026-03-25 03:39:04.919050 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 3.69s 2026-03-25 03:39:04.919060 | orchestrator | barbican : Copying over config.json files for services ------------------ 3.46s 2026-03-25 03:39:04.919069 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 3.40s 2026-03-25 03:39:04.919079 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.01s 
2026-03-25 03:39:04.919089 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 2.90s 2026-03-25 03:39:04.919099 | orchestrator | barbican : Check barbican containers ------------------------------------ 2.42s 2026-03-25 03:39:04.919109 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.11s 2026-03-25 03:39:04.919161 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.00s 2026-03-25 03:39:04.919172 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 1.58s 2026-03-25 03:39:04.919196 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 1.57s 2026-03-25 03:39:04.919215 | orchestrator | barbican : Ensuring vassals config directories exist -------------------- 1.20s 2026-03-25 03:39:04.919225 | orchestrator | barbican : Checking whether barbican-api-paste.ini file exists ---------- 1.03s 2026-03-25 03:39:07.894258 | orchestrator | 2026-03-25 03:39:07 | INFO  | Task 7ee9ddbc-8307-4be3-90b2-dc23bfb2c0b2 (designate) was prepared for execution. 2026-03-25 03:39:07.894345 | orchestrator | 2026-03-25 03:39:07 | INFO  | It takes a moment until task 7ee9ddbc-8307-4be3-90b2-dc23bfb2c0b2 (designate) has been started and output is visible here. 
2026-03-25 03:39:39.198295 | orchestrator | 2026-03-25 03:39:39.198390 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-25 03:39:39.198398 | orchestrator | 2026-03-25 03:39:39.198402 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-25 03:39:39.198408 | orchestrator | Wednesday 25 March 2026 03:39:13 +0000 (0:00:00.349) 0:00:00.349 ******* 2026-03-25 03:39:39.198412 | orchestrator | ok: [testbed-node-0] 2026-03-25 03:39:39.198419 | orchestrator | ok: [testbed-node-1] 2026-03-25 03:39:39.198451 | orchestrator | ok: [testbed-node-2] 2026-03-25 03:39:39.198457 | orchestrator | 2026-03-25 03:39:39.198462 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-25 03:39:39.198466 | orchestrator | Wednesday 25 March 2026 03:39:13 +0000 (0:00:00.325) 0:00:00.674 ******* 2026-03-25 03:39:39.198471 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True) 2026-03-25 03:39:39.198477 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True) 2026-03-25 03:39:39.198481 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True) 2026-03-25 03:39:39.198485 | orchestrator | 2026-03-25 03:39:39.198489 | orchestrator | PLAY [Apply role designate] **************************************************** 2026-03-25 03:39:39.198493 | orchestrator | 2026-03-25 03:39:39.198497 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-03-25 03:39:39.198517 | orchestrator | Wednesday 25 March 2026 03:39:13 +0000 (0:00:00.507) 0:00:01.182 ******* 2026-03-25 03:39:39.198522 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-25 03:39:39.198527 | orchestrator | 2026-03-25 03:39:39.198531 | orchestrator | TASK [service-ks-register : designate | Creating services] ********************* 
2026-03-25 03:39:39.198541 | orchestrator | Wednesday 25 March 2026 03:39:14 +0000 (0:00:00.663) 0:00:01.845 ******* 2026-03-25 03:39:39.198545 | orchestrator | changed: [testbed-node-0] => (item=designate (dns)) 2026-03-25 03:39:39.198549 | orchestrator | 2026-03-25 03:39:39.198553 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ******************** 2026-03-25 03:39:39.198557 | orchestrator | Wednesday 25 March 2026 03:39:17 +0000 (0:00:02.995) 0:00:04.840 ******* 2026-03-25 03:39:39.198560 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal) 2026-03-25 03:39:39.198565 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public) 2026-03-25 03:39:39.198569 | orchestrator | 2026-03-25 03:39:39.198572 | orchestrator | TASK [service-ks-register : designate | Creating projects] ********************* 2026-03-25 03:39:39.198576 | orchestrator | Wednesday 25 March 2026 03:39:23 +0000 (0:00:05.886) 0:00:10.727 ******* 2026-03-25 03:39:39.198581 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-25 03:39:39.198585 | orchestrator | 2026-03-25 03:39:39.198589 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************ 2026-03-25 03:39:39.198593 | orchestrator | Wednesday 25 March 2026 03:39:26 +0000 (0:00:03.020) 0:00:13.748 ******* 2026-03-25 03:39:39.198597 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-25 03:39:39.198601 | orchestrator | changed: [testbed-node-0] => (item=designate -> service) 2026-03-25 03:39:39.198604 | orchestrator | 2026-03-25 03:39:39.198608 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************ 2026-03-25 03:39:39.198612 | orchestrator | Wednesday 25 March 2026 03:39:30 +0000 (0:00:03.900) 0:00:17.648 ******* 2026-03-25 03:39:39.198616 | orchestrator | ok: [testbed-node-0] => 
(item=admin) 2026-03-25 03:39:39.198620 | orchestrator | 2026-03-25 03:39:39.198624 | orchestrator | TASK [service-ks-register : designate | Granting user roles] ******************* 2026-03-25 03:39:39.198628 | orchestrator | Wednesday 25 March 2026 03:39:33 +0000 (0:00:03.082) 0:00:20.731 ******* 2026-03-25 03:39:39.198632 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin) 2026-03-25 03:39:39.198635 | orchestrator | 2026-03-25 03:39:39.198639 | orchestrator | TASK [designate : Ensuring config directories exist] *************************** 2026-03-25 03:39:39.198643 | orchestrator | Wednesday 25 March 2026 03:39:37 +0000 (0:00:03.687) 0:00:24.418 ******* 2026-03-25 03:39:39.198661 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-25 03:39:39.198681 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-25 03:39:39.198691 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-25 03:39:39.198696 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-25 03:39:39.198703 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-25 03:39:39.198707 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-25 03:39:39.198714 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-25 03:39:39.198728 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-25 03:39:45.611041 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-25 03:39:45.611149 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-25 03:39:45.611164 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-25 03:39:45.611173 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-25 03:39:45.611198 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-producer 5672'], 'timeout': '30'}}}) 2026-03-25 03:39:45.611208 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-25 03:39:45.611256 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-25 03:39:45.611266 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-25 
03:39:45.611274 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-25 03:39:45.611283 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-25 03:39:45.611292 | orchestrator | 2026-03-25 03:39:45.611302 | orchestrator | TASK [designate : Check if policies shall be overwritten] ********************** 2026-03-25 03:39:45.611312 | orchestrator | Wednesday 25 March 2026 03:39:40 +0000 (0:00:02.873) 0:00:27.292 ******* 2026-03-25 03:39:45.611320 | orchestrator | skipping: [testbed-node-0] 2026-03-25 03:39:45.611329 | orchestrator | 2026-03-25 03:39:45.611338 | orchestrator | TASK [designate : Set designate policy file] *********************************** 2026-03-25 03:39:45.611347 | orchestrator | Wednesday 25 March 2026 03:39:40 +0000 (0:00:00.135) 0:00:27.428 ******* 2026-03-25 03:39:45.611356 | orchestrator | skipping: [testbed-node-0] 2026-03-25 
03:39:45.611365 | orchestrator | skipping: [testbed-node-1] 2026-03-25 03:39:45.611374 | orchestrator | skipping: [testbed-node-2] 2026-03-25 03:39:45.611382 | orchestrator | 2026-03-25 03:39:45.611390 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-03-25 03:39:45.611399 | orchestrator | Wednesday 25 March 2026 03:39:40 +0000 (0:00:00.560) 0:00:27.989 ******* 2026-03-25 03:39:45.611419 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-25 03:39:45.611425 | orchestrator | 2026-03-25 03:39:45.611430 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ****** 2026-03-25 03:39:45.611435 | orchestrator | Wednesday 25 March 2026 03:39:41 +0000 (0:00:00.674) 0:00:28.664 ******* 2026-03-25 03:39:45.611446 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-25 03:39:45.611461 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-25 03:39:47.277172 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-25 03:39:47.277269 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-25 03:39:47.277280 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-25 03:39:47.277326 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-25 03:39:47.277335 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-25 03:39:47.277358 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-25 03:39:47.277365 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-25 03:39:47.277372 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 
'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-25 03:39:47.277380 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-25 03:39:47.277397 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-25 03:39:47.277403 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-25 03:39:47.277410 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-25 03:39:47.277423 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-25 03:39:48.245014 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-25 03:39:48.245106 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-25 03:39:48.245115 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-25 03:39:48.245143 | orchestrator | 2026-03-25 03:39:48.245151 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] *** 2026-03-25 03:39:48.245160 | orchestrator | Wednesday 25 March 2026 03:39:47 +0000 (0:00:05.886) 0:00:34.551 ******* 2026-03-25 03:39:48.245181 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-25 03:39:48.245190 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-25 03:39:48.245211 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-25 03:39:48.245219 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-25 03:39:48.245226 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-25 03:39:48.245240 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 
'timeout': '30'}}})  2026-03-25 03:39:48.245247 | orchestrator | skipping: [testbed-node-0] 2026-03-25 03:39:48.245260 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-25 03:39:48.245267 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-25 03:39:48.245274 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 
'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-25 03:39:48.245286 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-25 03:39:49.151309 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-25 03:39:49.151429 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-25 03:39:49.151440 | orchestrator | skipping: [testbed-node-1] 2026-03-25 03:39:49.151458 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-25 03:39:49.151463 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-25 03:39:49.151468 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-25 03:39:49.151472 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-25 03:39:49.151490 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-25 
03:39:49.151504 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-25 03:39:49.151508 | orchestrator | skipping: [testbed-node-2]
2026-03-25 03:39:49.151513 | orchestrator |
2026-03-25 03:39:49.151517 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] ***
2026-03-25 03:39:49.151522 | orchestrator | Wednesday 25 March 2026 03:39:48 +0000 (0:00:01.082) 0:00:35.633 *******
2026-03-25 03:39:49.151530 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-25 03:39:49.151534 | orchestrator | skipping: [testbed-node-0] => (item={'key':
'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-25 03:39:49.151538 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-25 03:39:49.151545 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-25 03:39:49.542288 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-25 03:39:49.542372 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-25 03:39:49.542387 | orchestrator | skipping: [testbed-node-0] 2026-03-25 03:39:49.542411 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-25 03:39:49.542421 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-25 03:39:49.542429 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-25 03:39:49.542437 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-25 03:39:49.542480 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-25 03:39:49.542488 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-25 03:39:49.542494 | orchestrator | skipping: [testbed-node-1] 2026-03-25 03:39:49.542506 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-25 03:39:49.542514 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-25 03:39:49.542521 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-25 03:39:49.542528 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-25 03:39:49.542548 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-25 03:39:53.775295 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-25 03:39:53.775399 | orchestrator | skipping: [testbed-node-2]
2026-03-25 03:39:53.775411 | orchestrator |
2026-03-25 03:39:53.775418 | orchestrator | TASK [designate : Copying over config.json files for services] *****************
2026-03-25
03:39:53.775427 | orchestrator | Wednesday 25 March 2026 03:39:49 +0000 (0:00:01.183) 0:00:36.817 ******* 2026-03-25 03:39:53.775451 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-25 03:39:53.775461 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-25 03:39:53.775466 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-25 03:39:53.775511 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-25 03:39:53.775520 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-25 03:39:53.775532 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-25 03:39:53.775539 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-25 03:39:53.775545 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-25 03:39:53.775556 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-25 03:39:53.775563 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-25 03:39:53.775577 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-25 03:40:05.739957 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-25 03:40:05.740120 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-25 03:40:05.740148 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-25 03:40:05.740163 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-25 03:40:05.740206 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-25 03:40:05.740222 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-25 03:40:05.740259 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-25 03:40:05.740275 | orchestrator |
2026-03-25 03:40:05.740292 | orchestrator | TASK [designate : Copying over designate.conf] *********************************
2026-03-25 03:40:05.740307 | orchestrator | Wednesday 25 March 2026 03:39:55 +0000 (0:00:05.927) 0:00:42.745 *******
2026-03-25 03:40:05.740329 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-25 03:40:05.740345 | orchestrator | changed: [testbed-node-1] =>
(item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-25 03:40:05.740369 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-25 03:40:05.740385 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-25 03:40:05.740412 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-25 03:40:15.126471 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-25 03:40:15.126573 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': 
{'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-25 03:40:15.126583 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-25 03:40:15.126605 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-25 03:40:15.126611 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-25 03:40:15.126618 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-25 03:40:15.126638 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-25 03:40:15.126649 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 
'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-25 03:40:15.126655 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-25 03:40:15.126664 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-25 03:40:15.126669 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-25 03:40:15.126674 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-25 03:40:15.126679 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-25 03:40:15.126685 | orchestrator | 2026-03-25 03:40:15.126691 | orchestrator | TASK [designate : Copying over pools.yaml] ************************************* 2026-03-25 03:40:15.126698 | orchestrator | Wednesday 25 March 2026 03:40:10 +0000 (0:00:15.386) 0:00:58.131 ******* 2026-03-25 03:40:15.126707 | orchestrator | changed: [testbed-node-2] => 
(item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-03-25 03:40:19.689439 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-03-25 03:40:19.689514 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-03-25 03:40:19.689521 | orchestrator | 2026-03-25 03:40:19.689527 | orchestrator | TASK [designate : Copying over named.conf] ************************************* 2026-03-25 03:40:19.689532 | orchestrator | Wednesday 25 March 2026 03:40:15 +0000 (0:00:04.268) 0:01:02.400 ******* 2026-03-25 03:40:19.689537 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-03-25 03:40:19.689542 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-03-25 03:40:19.689560 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-03-25 03:40:19.689564 | orchestrator | 2026-03-25 03:40:19.689569 | orchestrator | TASK [designate : Copying over rndc.conf] ************************************** 2026-03-25 03:40:19.689589 | orchestrator | Wednesday 25 March 2026 03:40:17 +0000 (0:00:02.670) 0:01:05.071 ******* 2026-03-25 03:40:19.689596 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 
'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-25 03:40:19.689604 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-25 03:40:19.689609 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 
'listen_port': '9001'}}}})  2026-03-25 03:40:19.689626 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-25 03:40:19.689636 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-25 03:40:19.689646 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': 
'30'}}})  2026-03-25 03:40:19.689652 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-25 03:40:19.689657 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-25 03:40:19.689662 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 
'timeout': '30'}}})  2026-03-25 03:40:19.689667 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-25 03:40:19.689677 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-25 03:40:22.488913 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': 
'30'}}}) 2026-03-25 03:40:22.489022 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-25 03:40:22.489034 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-25 03:40:22.489041 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-25 03:40:22.489048 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-25 03:40:22.489055 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-25 03:40:22.489076 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-25 03:40:22.489090 | orchestrator | 2026-03-25 03:40:22.489097 | orchestrator | TASK [designate : Copying over rndc.key] 
*************************************** 2026-03-25 03:40:22.489109 | orchestrator | Wednesday 25 March 2026 03:40:20 +0000 (0:00:02.915) 0:01:07.987 ******* 2026-03-25 03:40:22.489117 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-25 03:40:22.489125 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-25 
03:40:22.489131 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-25 03:40:22.489138 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-25 03:40:22.489148 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-25 03:40:23.548684 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-25 03:40:23.548775 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-25 03:40:23.548786 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-25 03:40:23.548793 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-25 03:40:23.548800 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-25 03:40:23.548807 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-25 03:40:23.548896 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-25 03:40:23.548907 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-25 03:40:23.548914 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-25 03:40:23.548922 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-25 03:40:23.548928 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-25 03:40:23.548935 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-25 03:40:23.548950 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-25 03:40:23.548955 | orchestrator | 2026-03-25 03:40:23.548962 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-03-25 03:40:23.548975 | orchestrator | Wednesday 25 March 2026 03:40:23 +0000 (0:00:02.831) 0:01:10.818 ******* 2026-03-25 03:40:24.645456 | orchestrator | skipping: [testbed-node-0] 2026-03-25 03:40:24.645567 | orchestrator | skipping: [testbed-node-1] 2026-03-25 03:40:24.645603 | orchestrator | skipping: [testbed-node-2] 2026-03-25 03:40:24.645613 | orchestrator | 2026-03-25 03:40:24.645622 | orchestrator | TASK [designate : Copying over existing policy file] *************************** 2026-03-25 03:40:24.645633 | orchestrator | Wednesday 25 March 2026 03:40:23 +0000 (0:00:00.370) 0:01:11.188 ******* 2026-03-25 03:40:24.645651 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-25 03:40:24.645670 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-25 03:40:24.645685 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-25 03:40:24.645697 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 
'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-25 03:40:24.645733 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-25 03:40:24.645777 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-25 03:40:24.645790 | orchestrator | skipping: [testbed-node-0] 2026-03-25 03:40:24.645801 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 
'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-25 03:40:24.645809 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-25 03:40:24.645816 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-25 03:40:24.645822 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-25 03:40:24.645836 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-25 03:40:24.645917 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-25 03:40:28.030439 | orchestrator | skipping: [testbed-node-1] 2026-03-25 03:40:28.030588 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-25 03:40:28.030623 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-25 03:40:28.030644 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-25 03:40:28.030667 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-25 03:40:28.030725 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-25 03:40:28.030738 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 
'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-25 03:40:28.030750 | orchestrator | skipping: [testbed-node-2] 2026-03-25 03:40:28.030762 | orchestrator | 2026-03-25 03:40:28.030810 | orchestrator | TASK [designate : Check designate containers] ********************************** 2026-03-25 03:40:28.030824 | orchestrator | Wednesday 25 March 2026 03:40:24 +0000 (0:00:00.844) 0:01:12.033 ******* 2026-03-25 03:40:28.030836 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-25 03:40:28.030881 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-25 03:40:28.030903 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-25 03:40:28.030927 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-25 03:40:28.030954 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-25 03:40:29.857723 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-25 03:40:29.857795 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-25 03:40:29.857803 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-25 03:40:29.857824 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-25 03:40:29.857829 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-25 03:40:29.857836 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-25 03:40:29.857922 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-25 03:40:29.857928 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-25 03:40:29.857933 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-25 03:40:29.857937 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-25 03:40:29.857946 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-25 03:40:29.857950 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-25 03:40:29.857954 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-25 03:40:29.857958 | orchestrator | 2026-03-25 03:40:29.857963 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-03-25 03:40:29.857969 | orchestrator | Wednesday 25 March 2026 03:40:29 +0000 (0:00:04.766) 0:01:16.799 ******* 2026-03-25 03:40:29.857973 | orchestrator | skipping: [testbed-node-0] 2026-03-25 03:40:29.857984 | orchestrator | skipping: [testbed-node-1] 2026-03-25 03:41:54.386548 | orchestrator | skipping: [testbed-node-2] 2026-03-25 03:41:54.386654 | orchestrator | 2026-03-25 03:41:54.386665 | orchestrator | TASK [designate : Creating Designate databases] 
******************************** 2026-03-25 03:41:54.386677 | orchestrator | Wednesday 25 March 2026 03:40:29 +0000 (0:00:00.331) 0:01:17.131 ******* 2026-03-25 03:41:54.386686 | orchestrator | changed: [testbed-node-0] => (item=designate) 2026-03-25 03:41:54.386695 | orchestrator | 2026-03-25 03:41:54.386703 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] *** 2026-03-25 03:41:54.386713 | orchestrator | Wednesday 25 March 2026 03:40:31 +0000 (0:00:01.823) 0:01:18.954 ******* 2026-03-25 03:41:54.386722 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-03-25 03:41:54.386730 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}] 2026-03-25 03:41:54.386739 | orchestrator | 2026-03-25 03:41:54.386748 | orchestrator | TASK [designate : Running Designate bootstrap container] *********************** 2026-03-25 03:41:54.386756 | orchestrator | Wednesday 25 March 2026 03:40:33 +0000 (0:00:01.893) 0:01:20.848 ******* 2026-03-25 03:41:54.386765 | orchestrator | changed: [testbed-node-0] 2026-03-25 03:41:54.386773 | orchestrator | 2026-03-25 03:41:54.386781 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-03-25 03:41:54.386790 | orchestrator | Wednesday 25 March 2026 03:40:48 +0000 (0:00:15.241) 0:01:36.090 ******* 2026-03-25 03:41:54.386864 | orchestrator | 2026-03-25 03:41:54.386875 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-03-25 03:41:54.386883 | orchestrator | Wednesday 25 March 2026 03:40:48 +0000 (0:00:00.084) 0:01:36.174 ******* 2026-03-25 03:41:54.386890 | orchestrator | 2026-03-25 03:41:54.386896 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-03-25 03:41:54.386903 | orchestrator | Wednesday 25 March 2026 03:40:48 +0000 (0:00:00.079) 0:01:36.254 ******* 2026-03-25 03:41:54.386909 | orchestrator | 2026-03-25 
03:41:54.386916 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ******** 2026-03-25 03:41:54.386922 | orchestrator | Wednesday 25 March 2026 03:40:49 +0000 (0:00:00.099) 0:01:36.354 ******* 2026-03-25 03:41:54.386929 | orchestrator | changed: [testbed-node-0] 2026-03-25 03:41:54.386935 | orchestrator | changed: [testbed-node-1] 2026-03-25 03:41:54.386941 | orchestrator | changed: [testbed-node-2] 2026-03-25 03:41:54.386947 | orchestrator | 2026-03-25 03:41:54.386956 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ****************** 2026-03-25 03:41:54.386964 | orchestrator | Wednesday 25 March 2026 03:40:56 +0000 (0:00:07.868) 0:01:44.222 ******* 2026-03-25 03:41:54.386973 | orchestrator | changed: [testbed-node-0] 2026-03-25 03:41:54.386982 | orchestrator | changed: [testbed-node-2] 2026-03-25 03:41:54.386991 | orchestrator | changed: [testbed-node-1] 2026-03-25 03:41:54.387000 | orchestrator | 2026-03-25 03:41:54.387009 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] ************** 2026-03-25 03:41:54.387016 | orchestrator | Wednesday 25 March 2026 03:41:07 +0000 (0:00:10.921) 0:01:55.144 ******* 2026-03-25 03:41:54.387022 | orchestrator | changed: [testbed-node-0] 2026-03-25 03:41:54.387029 | orchestrator | changed: [testbed-node-1] 2026-03-25 03:41:54.387038 | orchestrator | changed: [testbed-node-2] 2026-03-25 03:41:54.387046 | orchestrator | 2026-03-25 03:41:54.387055 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] ************* 2026-03-25 03:41:54.387064 | orchestrator | Wednesday 25 March 2026 03:41:18 +0000 (0:00:10.658) 0:02:05.803 ******* 2026-03-25 03:41:54.387073 | orchestrator | changed: [testbed-node-1] 2026-03-25 03:41:54.387081 | orchestrator | changed: [testbed-node-0] 2026-03-25 03:41:54.387090 | orchestrator | changed: [testbed-node-2] 2026-03-25 03:41:54.387099 | orchestrator | 2026-03-25 03:41:54.387109 
| orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] ***************** 2026-03-25 03:41:54.387118 | orchestrator | Wednesday 25 March 2026 03:41:29 +0000 (0:00:11.124) 0:02:16.928 ******* 2026-03-25 03:41:54.387127 | orchestrator | changed: [testbed-node-0] 2026-03-25 03:41:54.387136 | orchestrator | changed: [testbed-node-2] 2026-03-25 03:41:54.387146 | orchestrator | changed: [testbed-node-1] 2026-03-25 03:41:54.387155 | orchestrator | 2026-03-25 03:41:54.387164 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] *************** 2026-03-25 03:41:54.387173 | orchestrator | Wednesday 25 March 2026 03:41:40 +0000 (0:00:10.974) 0:02:27.902 ******* 2026-03-25 03:41:54.387182 | orchestrator | changed: [testbed-node-0] 2026-03-25 03:41:54.387191 | orchestrator | changed: [testbed-node-2] 2026-03-25 03:41:54.387201 | orchestrator | changed: [testbed-node-1] 2026-03-25 03:41:54.387210 | orchestrator | 2026-03-25 03:41:54.387219 | orchestrator | TASK [designate : Non-destructive DNS pools update] **************************** 2026-03-25 03:41:54.387229 | orchestrator | Wednesday 25 March 2026 03:41:46 +0000 (0:00:06.172) 0:02:34.075 ******* 2026-03-25 03:41:54.387238 | orchestrator | changed: [testbed-node-0] 2026-03-25 03:41:54.387247 | orchestrator | 2026-03-25 03:41:54.387256 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-25 03:41:54.387267 | orchestrator | testbed-node-0 : ok=29  changed=23  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-25 03:41:54.387278 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-25 03:41:54.387288 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-25 03:41:54.387307 | orchestrator | 2026-03-25 03:41:54.387316 | orchestrator | 2026-03-25 03:41:54.387326 | orchestrator | TASKS RECAP 
******************************************************************** 2026-03-25 03:41:54.387334 | orchestrator | Wednesday 25 March 2026 03:41:53 +0000 (0:00:07.068) 0:02:41.144 ******* 2026-03-25 03:41:54.387343 | orchestrator | =============================================================================== 2026-03-25 03:41:54.387353 | orchestrator | designate : Copying over designate.conf -------------------------------- 15.39s 2026-03-25 03:41:54.387362 | orchestrator | designate : Running Designate bootstrap container ---------------------- 15.24s 2026-03-25 03:41:54.387405 | orchestrator | designate : Restart designate-producer container ----------------------- 11.12s 2026-03-25 03:41:54.387415 | orchestrator | designate : Restart designate-mdns container --------------------------- 10.97s 2026-03-25 03:41:54.387423 | orchestrator | designate : Restart designate-api container ---------------------------- 10.92s 2026-03-25 03:41:54.387433 | orchestrator | designate : Restart designate-central container ------------------------ 10.66s 2026-03-25 03:41:54.387442 | orchestrator | designate : Restart designate-backend-bind9 container ------------------- 7.87s 2026-03-25 03:41:54.387451 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 7.07s 2026-03-25 03:41:54.387460 | orchestrator | designate : Restart designate-worker container -------------------------- 6.17s 2026-03-25 03:41:54.387469 | orchestrator | designate : Copying over config.json files for services ----------------- 5.93s 2026-03-25 03:41:54.387478 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 5.89s 2026-03-25 03:41:54.387486 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 5.89s 2026-03-25 03:41:54.387495 | orchestrator | designate : Check designate containers ---------------------------------- 4.77s 2026-03-25 03:41:54.387504 | orchestrator | designate : Copying over 
pools.yaml ------------------------------------- 4.27s 2026-03-25 03:41:54.387512 | orchestrator | service-ks-register : designate | Creating users ------------------------ 3.90s 2026-03-25 03:41:54.387521 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 3.69s 2026-03-25 03:41:54.387529 | orchestrator | service-ks-register : designate | Creating roles ------------------------ 3.08s 2026-03-25 03:41:54.387538 | orchestrator | service-ks-register : designate | Creating projects --------------------- 3.02s 2026-03-25 03:41:54.387546 | orchestrator | service-ks-register : designate | Creating services --------------------- 3.00s 2026-03-25 03:41:54.387554 | orchestrator | designate : Copying over rndc.conf -------------------------------------- 2.92s 2026-03-25 03:41:57.235728 | orchestrator | 2026-03-25 03:41:57 | INFO  | Task 961a04d5-3c95-4920-8c64-f4559e5a6a62 (octavia) was prepared for execution. 2026-03-25 03:41:57.235934 | orchestrator | 2026-03-25 03:41:57 | INFO  | It takes a moment until task 961a04d5-3c95-4920-8c64-f4559e5a6a62 (octavia) has been started and output is visible here. 
2026-03-25 03:43:57.471411 | orchestrator | 2026-03-25 03:43:57.471588 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-25 03:43:57.471618 | orchestrator | 2026-03-25 03:43:57.471635 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-25 03:43:57.471652 | orchestrator | Wednesday 25 March 2026 03:42:02 +0000 (0:00:00.314) 0:00:00.314 ******* 2026-03-25 03:43:57.471669 | orchestrator | ok: [testbed-node-0] 2026-03-25 03:43:57.471686 | orchestrator | ok: [testbed-node-1] 2026-03-25 03:43:57.471702 | orchestrator | ok: [testbed-node-2] 2026-03-25 03:43:57.471717 | orchestrator | 2026-03-25 03:43:57.471734 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-25 03:43:57.471786 | orchestrator | Wednesday 25 March 2026 03:42:02 +0000 (0:00:00.356) 0:00:00.670 ******* 2026-03-25 03:43:57.471808 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True) 2026-03-25 03:43:57.471829 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True) 2026-03-25 03:43:57.471849 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True) 2026-03-25 03:43:57.471911 | orchestrator | 2026-03-25 03:43:57.471932 | orchestrator | PLAY [Apply role octavia] ****************************************************** 2026-03-25 03:43:57.471950 | orchestrator | 2026-03-25 03:43:57.471968 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-03-25 03:43:57.471987 | orchestrator | Wednesday 25 March 2026 03:42:03 +0000 (0:00:00.495) 0:00:01.166 ******* 2026-03-25 03:43:57.472008 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-25 03:43:57.472027 | orchestrator | 2026-03-25 03:43:57.472046 | orchestrator | TASK [service-ks-register : octavia | Creating services] *********************** 
2026-03-25 03:43:57.472065 | orchestrator | Wednesday 25 March 2026 03:42:03 +0000 (0:00:00.632) 0:00:01.798 ******* 2026-03-25 03:43:57.472084 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer)) 2026-03-25 03:43:57.472105 | orchestrator | 2026-03-25 03:43:57.472125 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] ********************** 2026-03-25 03:43:57.472145 | orchestrator | Wednesday 25 March 2026 03:42:06 +0000 (0:00:03.201) 0:00:05.000 ******* 2026-03-25 03:43:57.472165 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal) 2026-03-25 03:43:57.472186 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public) 2026-03-25 03:43:57.472206 | orchestrator | 2026-03-25 03:43:57.472224 | orchestrator | TASK [service-ks-register : octavia | Creating projects] *********************** 2026-03-25 03:43:57.472242 | orchestrator | Wednesday 25 March 2026 03:42:12 +0000 (0:00:06.057) 0:00:11.057 ******* 2026-03-25 03:43:57.472262 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-25 03:43:57.472283 | orchestrator | 2026-03-25 03:43:57.472303 | orchestrator | TASK [service-ks-register : octavia | Creating users] ************************** 2026-03-25 03:43:57.472323 | orchestrator | Wednesday 25 March 2026 03:42:16 +0000 (0:00:03.038) 0:00:14.096 ******* 2026-03-25 03:43:57.472343 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-25 03:43:57.472361 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2026-03-25 03:43:57.472381 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2026-03-25 03:43:57.472399 | orchestrator | 2026-03-25 03:43:57.472418 | orchestrator | TASK [service-ks-register : octavia | Creating roles] ************************** 2026-03-25 03:43:57.472461 | orchestrator | Wednesday 25 March 2026 03:42:23 +0000 
(0:00:07.880) 0:00:21.976 ******* 2026-03-25 03:43:57.472483 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-25 03:43:57.472501 | orchestrator | 2026-03-25 03:43:57.472519 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] ********************* 2026-03-25 03:43:57.472536 | orchestrator | Wednesday 25 March 2026 03:42:26 +0000 (0:00:02.998) 0:00:24.974 ******* 2026-03-25 03:43:57.472554 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin) 2026-03-25 03:43:57.472572 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin) 2026-03-25 03:43:57.472591 | orchestrator | 2026-03-25 03:43:57.472609 | orchestrator | TASK [octavia : Adding octavia related roles] ********************************** 2026-03-25 03:43:57.472628 | orchestrator | Wednesday 25 March 2026 03:42:33 +0000 (0:00:06.718) 0:00:31.693 ******* 2026-03-25 03:43:57.472648 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer) 2026-03-25 03:43:57.472668 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer) 2026-03-25 03:43:57.472688 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member) 2026-03-25 03:43:57.472708 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin) 2026-03-25 03:43:57.472726 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin) 2026-03-25 03:43:57.472745 | orchestrator | 2026-03-25 03:43:57.472795 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-03-25 03:43:57.472816 | orchestrator | Wednesday 25 March 2026 03:42:48 +0000 (0:00:14.582) 0:00:46.275 ******* 2026-03-25 03:43:57.472854 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-25 03:43:57.472876 | orchestrator | 2026-03-25 03:43:57.472896 | orchestrator | TASK [octavia : Create amphora flavor] 
***************************************** 2026-03-25 03:43:57.472916 | orchestrator | Wednesday 25 March 2026 03:42:49 +0000 (0:00:00.877) 0:00:47.153 ******* 2026-03-25 03:43:57.472935 | orchestrator | changed: [testbed-node-0] 2026-03-25 03:43:57.472954 | orchestrator | 2026-03-25 03:43:57.472974 | orchestrator | TASK [octavia : Create nova keypair for amphora] ******************************* 2026-03-25 03:43:57.472993 | orchestrator | Wednesday 25 March 2026 03:42:53 +0000 (0:00:04.779) 0:00:51.932 ******* 2026-03-25 03:43:57.473011 | orchestrator | changed: [testbed-node-0] 2026-03-25 03:43:57.473030 | orchestrator | 2026-03-25 03:43:57.473047 | orchestrator | TASK [octavia : Get service project id] **************************************** 2026-03-25 03:43:57.473098 | orchestrator | Wednesday 25 March 2026 03:42:58 +0000 (0:00:04.486) 0:00:56.419 ******* 2026-03-25 03:43:57.473117 | orchestrator | ok: [testbed-node-0] 2026-03-25 03:43:57.473129 | orchestrator | 2026-03-25 03:43:57.473139 | orchestrator | TASK [octavia : Create security groups for octavia] **************************** 2026-03-25 03:43:57.473150 | orchestrator | Wednesday 25 March 2026 03:43:01 +0000 (0:00:03.003) 0:00:59.423 ******* 2026-03-25 03:43:57.473160 | orchestrator | changed: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2026-03-25 03:43:57.473171 | orchestrator | changed: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2026-03-25 03:43:57.473182 | orchestrator | 2026-03-25 03:43:57.473193 | orchestrator | TASK [octavia : Add rules for security groups] ********************************* 2026-03-25 03:43:57.473203 | orchestrator | Wednesday 25 March 2026 03:43:11 +0000 (0:00:09.827) 0:01:09.250 ******* 2026-03-25 03:43:57.473214 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'icmp'}]) 2026-03-25 03:43:57.473225 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 
'tcp', 'src_port': 22, 'dst_port': 22}]) 2026-03-25 03:43:57.473239 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': '9443', 'dst_port': '9443'}]) 2026-03-25 03:43:57.473251 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-health-mgr-sec-grp', 'enabled': True}, {'protocol': 'udp', 'src_port': '5555', 'dst_port': '5555'}]) 2026-03-25 03:43:57.473262 | orchestrator | 2026-03-25 03:43:57.473293 | orchestrator | TASK [octavia : Create loadbalancer management network] ************************ 2026-03-25 03:43:57.473304 | orchestrator | Wednesday 25 March 2026 03:43:25 +0000 (0:00:14.813) 0:01:24.064 ******* 2026-03-25 03:43:57.473315 | orchestrator | changed: [testbed-node-0] 2026-03-25 03:43:57.473325 | orchestrator | 2026-03-25 03:43:57.473336 | orchestrator | TASK [octavia : Create loadbalancer management subnet] ************************* 2026-03-25 03:43:57.473346 | orchestrator | Wednesday 25 March 2026 03:43:30 +0000 (0:00:04.180) 0:01:28.244 ******* 2026-03-25 03:43:57.473357 | orchestrator | changed: [testbed-node-0] 2026-03-25 03:43:57.473367 | orchestrator | 2026-03-25 03:43:57.473378 | orchestrator | TASK [octavia : Create loadbalancer management router for IPv6] **************** 2026-03-25 03:43:57.473389 | orchestrator | Wednesday 25 March 2026 03:43:34 +0000 (0:00:04.794) 0:01:33.038 ******* 2026-03-25 03:43:57.473399 | orchestrator | skipping: [testbed-node-0] 2026-03-25 03:43:57.473409 | orchestrator | 2026-03-25 03:43:57.473420 | orchestrator | TASK [octavia : Update loadbalancer management subnet] ************************* 2026-03-25 03:43:57.473431 | orchestrator | Wednesday 25 March 2026 03:43:35 +0000 (0:00:00.235) 0:01:33.274 ******* 2026-03-25 03:43:57.473441 | orchestrator | ok: [testbed-node-0] 2026-03-25 03:43:57.473452 | orchestrator | 2026-03-25 03:43:57.473463 | orchestrator | TASK [octavia : include_tasks] 
************************************************* 2026-03-25 03:43:57.473473 | orchestrator | Wednesday 25 March 2026 03:43:39 +0000 (0:00:04.015) 0:01:37.289 ******* 2026-03-25 03:43:57.473484 | orchestrator | included: /ansible/roles/octavia/tasks/hm-interface.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-25 03:43:57.473505 | orchestrator | 2026-03-25 03:43:57.473516 | orchestrator | TASK [octavia : Create ports for Octavia health-manager nodes] ***************** 2026-03-25 03:43:57.473535 | orchestrator | Wednesday 25 March 2026 03:43:40 +0000 (0:00:01.323) 0:01:38.613 ******* 2026-03-25 03:43:57.473564 | orchestrator | changed: [testbed-node-0] 2026-03-25 03:43:57.473582 | orchestrator | changed: [testbed-node-1] 2026-03-25 03:43:57.473600 | orchestrator | changed: [testbed-node-2] 2026-03-25 03:43:57.473619 | orchestrator | 2026-03-25 03:43:57.473636 | orchestrator | TASK [octavia : Update Octavia health manager port host_id] ******************** 2026-03-25 03:43:57.473655 | orchestrator | Wednesday 25 March 2026 03:43:45 +0000 (0:00:05.179) 0:01:43.793 ******* 2026-03-25 03:43:57.473673 | orchestrator | changed: [testbed-node-0] 2026-03-25 03:43:57.473693 | orchestrator | changed: [testbed-node-1] 2026-03-25 03:43:57.473712 | orchestrator | changed: [testbed-node-2] 2026-03-25 03:43:57.473731 | orchestrator | 2026-03-25 03:43:57.473745 | orchestrator | TASK [octavia : Add Octavia port to openvswitch br-int] ************************ 2026-03-25 03:43:57.473793 | orchestrator | Wednesday 25 March 2026 03:43:49 +0000 (0:00:04.219) 0:01:48.012 ******* 2026-03-25 03:43:57.473812 | orchestrator | changed: [testbed-node-0] 2026-03-25 03:43:57.473831 | orchestrator | changed: [testbed-node-1] 2026-03-25 03:43:57.473848 | orchestrator | changed: [testbed-node-2] 2026-03-25 03:43:57.473861 | orchestrator | 2026-03-25 03:43:57.473874 | orchestrator | TASK [octavia : Install isc-dhcp-client package] ******************************* 2026-03-25 
03:43:57.473886 | orchestrator | Wednesday 25 March 2026 03:43:51 +0000 (0:00:01.083) 0:01:49.096 ******* 2026-03-25 03:43:57.473899 | orchestrator | ok: [testbed-node-1] 2026-03-25 03:43:57.473912 | orchestrator | ok: [testbed-node-2] 2026-03-25 03:43:57.473923 | orchestrator | ok: [testbed-node-0] 2026-03-25 03:43:57.473934 | orchestrator | 2026-03-25 03:43:57.473944 | orchestrator | TASK [octavia : Create octavia dhclient conf] ********************************** 2026-03-25 03:43:57.473955 | orchestrator | Wednesday 25 March 2026 03:43:52 +0000 (0:00:01.723) 0:01:50.820 ******* 2026-03-25 03:43:57.473965 | orchestrator | changed: [testbed-node-0] 2026-03-25 03:43:57.473975 | orchestrator | changed: [testbed-node-1] 2026-03-25 03:43:57.473986 | orchestrator | changed: [testbed-node-2] 2026-03-25 03:43:57.473996 | orchestrator | 2026-03-25 03:43:57.474006 | orchestrator | TASK [octavia : Create octavia-interface service] ****************************** 2026-03-25 03:43:57.474092 | orchestrator | Wednesday 25 March 2026 03:43:53 +0000 (0:00:01.214) 0:01:52.034 ******* 2026-03-25 03:43:57.474108 | orchestrator | changed: [testbed-node-0] 2026-03-25 03:43:57.474118 | orchestrator | changed: [testbed-node-1] 2026-03-25 03:43:57.474129 | orchestrator | changed: [testbed-node-2] 2026-03-25 03:43:57.474139 | orchestrator | 2026-03-25 03:43:57.474150 | orchestrator | TASK [octavia : Restart octavia-interface.service if required] ***************** 2026-03-25 03:43:57.474160 | orchestrator | Wednesday 25 March 2026 03:43:55 +0000 (0:00:01.194) 0:01:53.229 ******* 2026-03-25 03:43:57.474171 | orchestrator | changed: [testbed-node-0] 2026-03-25 03:43:57.474181 | orchestrator | changed: [testbed-node-2] 2026-03-25 03:43:57.474192 | orchestrator | changed: [testbed-node-1] 2026-03-25 03:43:57.474202 | orchestrator | 2026-03-25 03:43:57.474226 | orchestrator | TASK [octavia : Enable and start octavia-interface.service] ******************** 2026-03-25 03:44:23.215096 | orchestrator 
| Wednesday 25 March 2026 03:43:57 +0000 (0:00:02.292) 0:01:55.521 ******* 2026-03-25 03:44:23.215200 | orchestrator | changed: [testbed-node-1] 2026-03-25 03:44:23.215209 | orchestrator | changed: [testbed-node-0] 2026-03-25 03:44:23.215215 | orchestrator | changed: [testbed-node-2] 2026-03-25 03:44:23.215221 | orchestrator | 2026-03-25 03:44:23.215228 | orchestrator | TASK [octavia : Wait for interface ohm0 ip appear] ***************************** 2026-03-25 03:44:23.215235 | orchestrator | Wednesday 25 March 2026 03:43:59 +0000 (0:00:01.631) 0:01:57.153 ******* 2026-03-25 03:44:23.215242 | orchestrator | ok: [testbed-node-0] 2026-03-25 03:44:23.215250 | orchestrator | ok: [testbed-node-1] 2026-03-25 03:44:23.215256 | orchestrator | ok: [testbed-node-2] 2026-03-25 03:44:23.215262 | orchestrator | 2026-03-25 03:44:23.215269 | orchestrator | TASK [octavia : Gather facts] ************************************************** 2026-03-25 03:44:23.215298 | orchestrator | Wednesday 25 March 2026 03:43:59 +0000 (0:00:00.677) 0:01:57.831 ******* 2026-03-25 03:44:23.215304 | orchestrator | ok: [testbed-node-2] 2026-03-25 03:44:23.215310 | orchestrator | ok: [testbed-node-1] 2026-03-25 03:44:23.215316 | orchestrator | ok: [testbed-node-0] 2026-03-25 03:44:23.215321 | orchestrator | 2026-03-25 03:44:23.215327 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-03-25 03:44:23.215333 | orchestrator | Wednesday 25 March 2026 03:44:03 +0000 (0:00:03.864) 0:02:01.696 ******* 2026-03-25 03:44:23.215340 | orchestrator | included: /ansible/roles/octavia/tasks/get_resources_info.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-25 03:44:23.215347 | orchestrator | 2026-03-25 03:44:23.215353 | orchestrator | TASK [octavia : Get amphora flavor info] *************************************** 2026-03-25 03:44:23.215360 | orchestrator | Wednesday 25 March 2026 03:44:04 +0000 (0:00:00.643) 0:02:02.339 ******* 2026-03-25 
03:44:23.215367 | orchestrator | ok: [testbed-node-0] 2026-03-25 03:44:23.215373 | orchestrator | 2026-03-25 03:44:23.215379 | orchestrator | TASK [octavia : Get service project id] **************************************** 2026-03-25 03:44:23.215386 | orchestrator | Wednesday 25 March 2026 03:44:07 +0000 (0:00:03.696) 0:02:06.036 ******* 2026-03-25 03:44:23.215393 | orchestrator | ok: [testbed-node-0] 2026-03-25 03:44:23.215399 | orchestrator | 2026-03-25 03:44:23.215406 | orchestrator | TASK [octavia : Get security groups for octavia] ******************************* 2026-03-25 03:44:23.215413 | orchestrator | Wednesday 25 March 2026 03:44:11 +0000 (0:00:03.049) 0:02:09.085 ******* 2026-03-25 03:44:23.215420 | orchestrator | ok: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2026-03-25 03:44:23.215427 | orchestrator | ok: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2026-03-25 03:44:23.215435 | orchestrator | 2026-03-25 03:44:23.215441 | orchestrator | TASK [octavia : Get loadbalancer management network] *************************** 2026-03-25 03:44:23.215448 | orchestrator | Wednesday 25 March 2026 03:44:17 +0000 (0:00:06.300) 0:02:15.386 ******* 2026-03-25 03:44:23.215455 | orchestrator | ok: [testbed-node-0] 2026-03-25 03:44:23.215462 | orchestrator | 2026-03-25 03:44:23.215468 | orchestrator | TASK [octavia : Set octavia resources facts] *********************************** 2026-03-25 03:44:23.215476 | orchestrator | Wednesday 25 March 2026 03:44:20 +0000 (0:00:03.312) 0:02:18.699 ******* 2026-03-25 03:44:23.215483 | orchestrator | ok: [testbed-node-0] 2026-03-25 03:44:23.215489 | orchestrator | ok: [testbed-node-1] 2026-03-25 03:44:23.215497 | orchestrator | ok: [testbed-node-2] 2026-03-25 03:44:23.215503 | orchestrator | 2026-03-25 03:44:23.215510 | orchestrator | TASK [octavia : Ensuring config directories exist] ***************************** 2026-03-25 03:44:23.215517 | orchestrator | Wednesday 25 March 2026 03:44:21 +0000 (0:00:00.594) 0:02:19.293 ******* 
2026-03-25 03:44:23.215541 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-25 03:44:23.215567 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-25 03:44:23.215583 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-25 03:44:23.215590 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-25 03:44:23.215597 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-25 03:44:23.215607 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-25 03:44:23.215614 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-25 03:44:23.215621 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-25 03:44:23.215640 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-25 03:44:24.908350 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-25 03:44:24.908457 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-25 03:44:24.908478 
| orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-25 03:44:24.908484 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-25 03:44:24.908489 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-25 03:44:24.908512 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 
'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-25 03:44:24.908516 | orchestrator | 2026-03-25 03:44:24.908521 | orchestrator | TASK [octavia : Check if policies shall be overwritten] ************************ 2026-03-25 03:44:24.908526 | orchestrator | Wednesday 25 March 2026 03:44:23 +0000 (0:00:02.413) 0:02:21.706 ******* 2026-03-25 03:44:24.908530 | orchestrator | skipping: [testbed-node-0] 2026-03-25 03:44:24.908535 | orchestrator | 2026-03-25 03:44:24.908539 | orchestrator | TASK [octavia : Set octavia policy file] *************************************** 2026-03-25 03:44:24.908543 | orchestrator | Wednesday 25 March 2026 03:44:23 +0000 (0:00:00.142) 0:02:21.849 ******* 2026-03-25 03:44:24.908546 | orchestrator | skipping: [testbed-node-0] 2026-03-25 03:44:24.908561 | orchestrator | skipping: [testbed-node-1] 2026-03-25 03:44:24.908566 | orchestrator | skipping: [testbed-node-2] 2026-03-25 03:44:24.908569 | orchestrator | 2026-03-25 03:44:24.908573 | orchestrator | TASK [octavia : Copying over existing policy file] ***************************** 2026-03-25 03:44:24.908577 | orchestrator | Wednesday 25 March 2026 03:44:24 +0000 (0:00:00.360) 0:02:22.210 ******* 2026-03-25 03:44:24.908582 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-25 03:44:24.908587 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-25 03:44:24.908596 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-25 03:44:24.908604 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': 
{'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-25 03:44:24.908608 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-25 03:44:24.908612 | orchestrator | skipping: [testbed-node-0] 2026-03-25 03:44:24.908621 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 
'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-25 03:44:29.835197 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-25 03:44:29.835308 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-25 03:44:29.835336 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-25 03:44:29.835368 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-25 03:44:29.835377 | orchestrator | skipping: [testbed-node-1] 2026-03-25 03:44:29.835386 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-25 03:44:29.835394 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 
'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-25 03:44:29.835417 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-25 03:44:29.835425 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-25 03:44:29.835436 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 
'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-25 03:44:29.835448 | orchestrator | skipping: [testbed-node-2] 2026-03-25 03:44:29.835455 | orchestrator | 2026-03-25 03:44:29.835463 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-03-25 03:44:29.835471 | orchestrator | Wednesday 25 March 2026 03:44:25 +0000 (0:00:00.859) 0:02:23.069 ******* 2026-03-25 03:44:29.835479 | orchestrator | included: /ansible/roles/octavia/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-25 03:44:29.835486 | orchestrator | 2026-03-25 03:44:29.835494 | orchestrator | TASK [service-cert-copy : octavia | Copying over extra CA certificates] ******** 2026-03-25 03:44:29.835500 | orchestrator | Wednesday 25 March 2026 03:44:25 +0000 (0:00:00.907) 0:02:23.977 ******* 2026-03-25 03:44:29.835508 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-25 03:44:29.835517 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-25 03:44:29.835529 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-25 03:44:31.374295 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-25 03:44:31.374433 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-25 03:44:31.374445 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-25 03:44:31.374454 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-25 03:44:31.374462 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-25 03:44:31.374468 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-25 03:44:31.374491 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-25 03:44:31.374509 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-25 03:44:31.374517 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-25 03:44:31.374524 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-25 03:44:31.374597 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-25 03:44:31.374608 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-25 03:44:31.374615 | orchestrator | 2026-03-25 03:44:31.374624 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS certificate] *** 2026-03-25 03:44:31.374633 | orchestrator | Wednesday 25 March 2026 03:44:30 +0000 (0:00:04.802) 0:02:28.779 ******* 2026-03-25 03:44:31.374651 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-25 03:44:31.497491 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-25 03:44:31.497582 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-25 03:44:31.497594 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-25 03:44:31.497601 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-25 03:44:31.497610 | orchestrator | skipping: [testbed-node-0] 2026-03-25 03:44:31.497619 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-25 03:44:31.497626 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-25 03:44:31.497667 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-25 03:44:31.497679 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-25 03:44:31.497686 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-25 03:44:31.497693 | orchestrator | skipping: [testbed-node-1] 2026-03-25 03:44:31.497699 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-25 03:44:31.497706 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-25 03:44:31.497713 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-25 03:44:31.497731 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 
3306'], 'timeout': '30'}}})  2026-03-25 03:44:32.516320 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-25 03:44:32.516408 | orchestrator | skipping: [testbed-node-2] 2026-03-25 03:44:32.516416 | orchestrator | 2026-03-25 03:44:32.516422 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS key] ***** 2026-03-25 03:44:32.516427 | orchestrator | Wednesday 25 March 2026 03:44:31 +0000 (0:00:00.776) 0:02:29.555 ******* 2026-03-25 03:44:32.516433 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  
2026-03-25 03:44:32.516439 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-25 03:44:32.516444 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-25 03:44:32.516468 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-25 03:44:32.516492 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 
'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-25 03:44:32.516497 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-25 03:44:32.516501 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-25 03:44:32.516505 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-25 03:44:32.516509 | orchestrator | skipping: [testbed-node-0] 2026-03-25 03:44:32.516513 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-25 03:44:32.516520 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-25 03:44:32.516524 | orchestrator | skipping: [testbed-node-1] 2026-03-25 03:44:32.516535 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-25 03:44:36.954639 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-25 03:44:36.954797 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-25 03:44:36.954812 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-25 03:44:36.954821 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-25 03:44:36.954852 | orchestrator | skipping: [testbed-node-2] 2026-03-25 03:44:36.954861 | orchestrator | 2026-03-25 03:44:36.954870 | orchestrator | TASK [octavia : Copying over config.json files for services] ******************* 2026-03-25 
03:44:36.954884 | orchestrator | Wednesday 25 March 2026 03:44:33 +0000 (0:00:01.580) 0:02:31.136 ******* 2026-03-25 03:44:36.954897 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-25 03:44:36.954948 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-25 03:44:36.954961 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-25 03:44:36.954987 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-25 03:44:36.955018 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-25 03:44:36.955031 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-25 03:44:36.955044 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-25 03:44:36.955073 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-25 03:44:54.135056 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-25 03:44:54.135158 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-25 03:44:54.135171 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-25 03:44:54.135200 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-25 03:44:54.135208 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-25 03:44:54.135216 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 
2026-03-25 03:44:54.135271 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-25 03:44:54.135281 | orchestrator | 2026-03-25 03:44:54.135289 | orchestrator | TASK [octavia : Copying over octavia-wsgi.conf] ******************************** 2026-03-25 03:44:54.135298 | orchestrator | Wednesday 25 March 2026 03:44:37 +0000 (0:00:04.764) 0:02:35.901 ******* 2026-03-25 03:44:54.135305 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-03-25 03:44:54.135314 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-03-25 03:44:54.135321 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-03-25 03:44:54.135328 | orchestrator | 2026-03-25 03:44:54.135335 | orchestrator | TASK [octavia : Copying over octavia.conf] ************************************* 2026-03-25 03:44:54.135342 | orchestrator | Wednesday 25 March 2026 03:44:39 +0000 (0:00:01.643) 0:02:37.544 ******* 2026-03-25 03:44:54.135349 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-25 03:44:54.135364 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-25 03:44:54.135371 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-25 03:44:54.135387 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-25 03:45:09.777515 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-25 03:45:09.777668 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': 
['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-25 03:45:09.777762 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-25 03:45:09.777786 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-25 03:45:09.777805 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-25 03:45:09.777825 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-25 03:45:09.777889 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-25 03:45:09.777911 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-25 03:45:09.777944 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-25 03:45:09.777965 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-25 03:45:09.777984 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-25 03:45:09.778005 | orchestrator | 2026-03-25 03:45:09.778104 | orchestrator | TASK [octavia : Copying over Octavia SSH key] ********************************** 2026-03-25 03:45:09.778129 | orchestrator | Wednesday 25 March 2026 03:44:57 +0000 (0:00:18.278) 0:02:55.823 ******* 2026-03-25 03:45:09.778151 | orchestrator | changed: [testbed-node-0] 2026-03-25 03:45:09.778173 | orchestrator | changed: [testbed-node-1] 2026-03-25 03:45:09.778193 | orchestrator | changed: [testbed-node-2] 2026-03-25 03:45:09.778214 | orchestrator | 2026-03-25 03:45:09.778234 | orchestrator | TASK [octavia : Copying certificate files for octavia-worker] ****************** 2026-03-25 03:45:09.778254 | orchestrator | Wednesday 25 March 2026 03:44:59 +0000 (0:00:01.768) 0:02:57.592 ******* 2026-03-25 03:45:09.778275 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-03-25 03:45:09.778297 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-03-25 03:45:09.778318 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-03-25 03:45:09.778337 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-03-25 03:45:09.778357 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-03-25 03:45:09.778376 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-03-25 03:45:09.778395 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-03-25 03:45:09.778415 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-03-25 03:45:09.778434 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-03-25 03:45:09.778454 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-03-25 03:45:09.778484 | orchestrator 
| changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-03-25 03:45:09.778504 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-03-25 03:45:09.778523 | orchestrator | 2026-03-25 03:45:09.778542 | orchestrator | TASK [octavia : Copying certificate files for octavia-housekeeping] ************ 2026-03-25 03:45:09.778858 | orchestrator | Wednesday 25 March 2026 03:45:04 +0000 (0:00:05.037) 0:03:02.629 ******* 2026-03-25 03:45:09.778910 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-03-25 03:45:09.778929 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-03-25 03:45:09.778969 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-03-25 03:45:18.002252 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-03-25 03:45:18.002330 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-03-25 03:45:18.002336 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-03-25 03:45:18.002341 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-03-25 03:45:18.002345 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-03-25 03:45:18.002349 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-03-25 03:45:18.002353 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-03-25 03:45:18.002357 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-03-25 03:45:18.002361 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-03-25 03:45:18.002365 | orchestrator | 2026-03-25 03:45:18.002370 | orchestrator | TASK [octavia : Copying certificate files for octavia-health-manager] ********** 2026-03-25 03:45:18.002375 | orchestrator | Wednesday 25 March 2026 03:45:09 +0000 (0:00:05.202) 0:03:07.831 ******* 2026-03-25 03:45:18.002379 | orchestrator | changed: [testbed-node-0] => 
(item=client.cert-and-key.pem) 2026-03-25 03:45:18.002383 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-03-25 03:45:18.002387 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-03-25 03:45:18.002391 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-03-25 03:45:18.002395 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-03-25 03:45:18.002399 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-03-25 03:45:18.002402 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-03-25 03:45:18.002407 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-03-25 03:45:18.002410 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-03-25 03:45:18.002415 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-03-25 03:45:18.002422 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-03-25 03:45:18.002431 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-03-25 03:45:18.002438 | orchestrator | 2026-03-25 03:45:18.002445 | orchestrator | TASK [octavia : Check octavia containers] ************************************** 2026-03-25 03:45:18.002451 | orchestrator | Wednesday 25 March 2026 03:45:14 +0000 (0:00:05.093) 0:03:12.925 ******* 2026-03-25 03:45:18.002459 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-25 03:45:18.002469 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-25 03:45:18.002535 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-25 03:45:18.002544 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-25 03:45:18.002551 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-25 03:45:18.002558 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 
'dimensions': {}}}) 2026-03-25 03:45:18.002565 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-25 03:45:18.002572 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-25 03:45:18.002589 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-25 03:45:18.002601 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-25 03:46:48.452618 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-25 03:46:48.452745 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-25 03:46:48.452757 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-25 03:46:48.452764 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-25 03:46:48.452790 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-25 03:46:48.452796 | orchestrator | 2026-03-25 
03:46:48.452802 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-03-25 03:46:48.452821 | orchestrator | Wednesday 25 March 2026 03:45:18 +0000 (0:00:04.000) 0:03:16.926 ******* 2026-03-25 03:46:48.452827 | orchestrator | skipping: [testbed-node-0] 2026-03-25 03:46:48.452833 | orchestrator | skipping: [testbed-node-1] 2026-03-25 03:46:48.452839 | orchestrator | skipping: [testbed-node-2] 2026-03-25 03:46:48.452844 | orchestrator | 2026-03-25 03:46:48.452849 | orchestrator | TASK [octavia : Creating Octavia database] ************************************* 2026-03-25 03:46:48.452855 | orchestrator | Wednesday 25 March 2026 03:45:19 +0000 (0:00:00.594) 0:03:17.520 ******* 2026-03-25 03:46:48.452860 | orchestrator | changed: [testbed-node-0] 2026-03-25 03:46:48.452865 | orchestrator | 2026-03-25 03:46:48.452870 | orchestrator | TASK [octavia : Creating Octavia persistence database] ************************* 2026-03-25 03:46:48.452876 | orchestrator | Wednesday 25 March 2026 03:45:21 +0000 (0:00:02.007) 0:03:19.528 ******* 2026-03-25 03:46:48.452881 | orchestrator | changed: [testbed-node-0] 2026-03-25 03:46:48.452886 | orchestrator | 2026-03-25 03:46:48.452891 | orchestrator | TASK [octavia : Creating Octavia database user and setting permissions] ******** 2026-03-25 03:46:48.452896 | orchestrator | Wednesday 25 March 2026 03:45:23 +0000 (0:00:02.019) 0:03:21.547 ******* 2026-03-25 03:46:48.452901 | orchestrator | changed: [testbed-node-0] 2026-03-25 03:46:48.452907 | orchestrator | 2026-03-25 03:46:48.452912 | orchestrator | TASK [octavia : Creating Octavia persistence database user and setting permissions] *** 2026-03-25 03:46:48.452918 | orchestrator | Wednesday 25 March 2026 03:45:25 +0000 (0:00:02.172) 0:03:23.720 ******* 2026-03-25 03:46:48.452937 | orchestrator | changed: [testbed-node-0] 2026-03-25 03:46:48.452942 | orchestrator | 2026-03-25 03:46:48.452948 | orchestrator | TASK [octavia : Running Octavia 
bootstrap container] *************************** 2026-03-25 03:46:48.452953 | orchestrator | Wednesday 25 March 2026 03:45:27 +0000 (0:00:02.073) 0:03:25.794 ******* 2026-03-25 03:46:48.452958 | orchestrator | changed: [testbed-node-0] 2026-03-25 03:46:48.452963 | orchestrator | 2026-03-25 03:46:48.452968 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-03-25 03:46:48.452973 | orchestrator | Wednesday 25 March 2026 03:45:49 +0000 (0:00:21.573) 0:03:47.367 ******* 2026-03-25 03:46:48.452978 | orchestrator | 2026-03-25 03:46:48.452983 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-03-25 03:46:48.452988 | orchestrator | Wednesday 25 March 2026 03:45:49 +0000 (0:00:00.074) 0:03:47.442 ******* 2026-03-25 03:46:48.452993 | orchestrator | 2026-03-25 03:46:48.452999 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-03-25 03:46:48.453004 | orchestrator | Wednesday 25 March 2026 03:45:49 +0000 (0:00:00.073) 0:03:47.516 ******* 2026-03-25 03:46:48.453009 | orchestrator | 2026-03-25 03:46:48.453014 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-api container] ********************** 2026-03-25 03:46:48.453019 | orchestrator | Wednesday 25 March 2026 03:45:49 +0000 (0:00:00.076) 0:03:47.592 ******* 2026-03-25 03:46:48.453024 | orchestrator | changed: [testbed-node-0] 2026-03-25 03:46:48.453029 | orchestrator | changed: [testbed-node-1] 2026-03-25 03:46:48.453044 | orchestrator | changed: [testbed-node-2] 2026-03-25 03:46:48.453049 | orchestrator | 2026-03-25 03:46:48.453054 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-driver-agent container] ************* 2026-03-25 03:46:48.453059 | orchestrator | Wednesday 25 March 2026 03:46:05 +0000 (0:00:16.277) 0:04:03.870 ******* 2026-03-25 03:46:48.453064 | orchestrator | changed: [testbed-node-0] 2026-03-25 03:46:48.453069 | orchestrator | changed: 
[testbed-node-2] 2026-03-25 03:46:48.453075 | orchestrator | changed: [testbed-node-1] 2026-03-25 03:46:48.453080 | orchestrator | 2026-03-25 03:46:48.453085 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-health-manager container] *********** 2026-03-25 03:46:48.453090 | orchestrator | Wednesday 25 March 2026 03:46:17 +0000 (0:00:11.561) 0:04:15.432 ******* 2026-03-25 03:46:48.453095 | orchestrator | changed: [testbed-node-0] 2026-03-25 03:46:48.453100 | orchestrator | changed: [testbed-node-1] 2026-03-25 03:46:48.453105 | orchestrator | changed: [testbed-node-2] 2026-03-25 03:46:48.453110 | orchestrator | 2026-03-25 03:46:48.453115 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-housekeeping container] ************* 2026-03-25 03:46:48.453120 | orchestrator | Wednesday 25 March 2026 03:46:27 +0000 (0:00:10.061) 0:04:25.493 ******* 2026-03-25 03:46:48.453126 | orchestrator | changed: [testbed-node-0] 2026-03-25 03:46:48.453131 | orchestrator | changed: [testbed-node-2] 2026-03-25 03:46:48.453136 | orchestrator | changed: [testbed-node-1] 2026-03-25 03:46:48.453141 | orchestrator | 2026-03-25 03:46:48.453146 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-worker container] ******************* 2026-03-25 03:46:48.453151 | orchestrator | Wednesday 25 March 2026 03:46:37 +0000 (0:00:10.092) 0:04:35.586 ******* 2026-03-25 03:46:48.453156 | orchestrator | changed: [testbed-node-1] 2026-03-25 03:46:48.453161 | orchestrator | changed: [testbed-node-0] 2026-03-25 03:46:48.453166 | orchestrator | changed: [testbed-node-2] 2026-03-25 03:46:48.453171 | orchestrator | 2026-03-25 03:46:48.453176 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-25 03:46:48.453183 | orchestrator | testbed-node-0 : ok=57  changed=38  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-25 03:46:48.453190 | orchestrator | testbed-node-1 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  
rescued=0 ignored=0 2026-03-25 03:46:48.453195 | orchestrator | testbed-node-2 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-25 03:46:48.453200 | orchestrator | 2026-03-25 03:46:48.453206 | orchestrator | 2026-03-25 03:46:48.453211 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-25 03:46:48.453216 | orchestrator | Wednesday 25 March 2026 03:46:48 +0000 (0:00:10.896) 0:04:46.483 ******* 2026-03-25 03:46:48.453221 | orchestrator | =============================================================================== 2026-03-25 03:46:48.453226 | orchestrator | octavia : Running Octavia bootstrap container -------------------------- 21.57s 2026-03-25 03:46:48.453231 | orchestrator | octavia : Copying over octavia.conf ------------------------------------ 18.28s 2026-03-25 03:46:48.453236 | orchestrator | octavia : Restart octavia-api container -------------------------------- 16.28s 2026-03-25 03:46:48.453245 | orchestrator | octavia : Add rules for security groups -------------------------------- 14.81s 2026-03-25 03:46:48.453250 | orchestrator | octavia : Adding octavia related roles --------------------------------- 14.58s 2026-03-25 03:46:48.453255 | orchestrator | octavia : Restart octavia-driver-agent container ----------------------- 11.56s 2026-03-25 03:46:48.453260 | orchestrator | octavia : Restart octavia-worker container ----------------------------- 10.90s 2026-03-25 03:46:48.453265 | orchestrator | octavia : Restart octavia-housekeeping container ----------------------- 10.09s 2026-03-25 03:46:48.453270 | orchestrator | octavia : Restart octavia-health-manager container --------------------- 10.06s 2026-03-25 03:46:48.453275 | orchestrator | octavia : Create security groups for octavia ---------------------------- 9.83s 2026-03-25 03:46:48.453280 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 7.88s 2026-03-25 03:46:48.453290 
| orchestrator | service-ks-register : octavia | Granting user roles --------------------- 6.72s 2026-03-25 03:46:48.453295 | orchestrator | octavia : Get security groups for octavia ------------------------------- 6.30s 2026-03-25 03:46:48.453300 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 6.06s 2026-03-25 03:46:48.453308 | orchestrator | octavia : Copying certificate files for octavia-housekeeping ------------ 5.20s 2026-03-25 03:46:48.897621 | orchestrator | octavia : Create ports for Octavia health-manager nodes ----------------- 5.18s 2026-03-25 03:46:48.897766 | orchestrator | octavia : Copying certificate files for octavia-health-manager ---------- 5.09s 2026-03-25 03:46:48.897787 | orchestrator | octavia : Copying certificate files for octavia-worker ------------------ 5.04s 2026-03-25 03:46:48.897797 | orchestrator | service-cert-copy : octavia | Copying over extra CA certificates -------- 4.80s 2026-03-25 03:46:48.897806 | orchestrator | octavia : Create loadbalancer management subnet ------------------------- 4.79s 2026-03-25 03:46:51.793337 | orchestrator | 2026-03-25 03:46:51 | INFO  | Task cb97f8f8-4175-40c7-a0b4-22bd4a3db040 (ceilometer) was prepared for execution. 2026-03-25 03:46:51.793466 | orchestrator | 2026-03-25 03:46:51 | INFO  | It takes a moment until task cb97f8f8-4175-40c7-a0b4-22bd4a3db040 (ceilometer) has been started and output is visible here. 
2026-03-25 03:47:15.867441 | orchestrator | 2026-03-25 03:47:15.867549 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-25 03:47:15.867563 | orchestrator | 2026-03-25 03:47:15.867574 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-25 03:47:15.867584 | orchestrator | Wednesday 25 March 2026 03:46:56 +0000 (0:00:00.299) 0:00:00.299 ******* 2026-03-25 03:47:15.867595 | orchestrator | ok: [testbed-node-0] 2026-03-25 03:47:15.867606 | orchestrator | ok: [testbed-node-1] 2026-03-25 03:47:15.867616 | orchestrator | ok: [testbed-node-2] 2026-03-25 03:47:15.867625 | orchestrator | ok: [testbed-node-3] 2026-03-25 03:47:15.867635 | orchestrator | ok: [testbed-node-4] 2026-03-25 03:47:15.867644 | orchestrator | ok: [testbed-node-5] 2026-03-25 03:47:15.867654 | orchestrator | 2026-03-25 03:47:15.867663 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-25 03:47:15.867734 | orchestrator | Wednesday 25 March 2026 03:46:57 +0000 (0:00:00.905) 0:00:01.204 ******* 2026-03-25 03:47:15.867745 | orchestrator | ok: [testbed-node-0] => (item=enable_ceilometer_True) 2026-03-25 03:47:15.867754 | orchestrator | ok: [testbed-node-1] => (item=enable_ceilometer_True) 2026-03-25 03:47:15.867764 | orchestrator | ok: [testbed-node-2] => (item=enable_ceilometer_True) 2026-03-25 03:47:15.867773 | orchestrator | ok: [testbed-node-3] => (item=enable_ceilometer_True) 2026-03-25 03:47:15.867783 | orchestrator | ok: [testbed-node-4] => (item=enable_ceilometer_True) 2026-03-25 03:47:15.867792 | orchestrator | ok: [testbed-node-5] => (item=enable_ceilometer_True) 2026-03-25 03:47:15.867801 | orchestrator | 2026-03-25 03:47:15.867811 | orchestrator | PLAY [Apply role ceilometer] *************************************************** 2026-03-25 03:47:15.867820 | orchestrator | 2026-03-25 03:47:15.867830 | orchestrator | TASK [ceilometer : 
include_tasks] ********************************************** 2026-03-25 03:47:15.867839 | orchestrator | Wednesday 25 March 2026 03:46:58 +0000 (0:00:00.699) 0:00:01.904 ******* 2026-03-25 03:47:15.867850 | orchestrator | included: /ansible/roles/ceilometer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-25 03:47:15.867862 | orchestrator | 2026-03-25 03:47:15.867871 | orchestrator | TASK [service-ks-register : ceilometer | Creating services] ******************** 2026-03-25 03:47:15.867881 | orchestrator | Wednesday 25 March 2026 03:46:59 +0000 (0:00:01.369) 0:00:03.274 ******* 2026-03-25 03:47:15.867890 | orchestrator | skipping: [testbed-node-0] 2026-03-25 03:47:15.867900 | orchestrator | 2026-03-25 03:47:15.867910 | orchestrator | TASK [service-ks-register : ceilometer | Creating endpoints] ******************* 2026-03-25 03:47:15.867919 | orchestrator | Wednesday 25 March 2026 03:46:59 +0000 (0:00:00.119) 0:00:03.393 ******* 2026-03-25 03:47:15.867960 | orchestrator | skipping: [testbed-node-0] 2026-03-25 03:47:15.867978 | orchestrator | 2026-03-25 03:47:15.867994 | orchestrator | TASK [service-ks-register : ceilometer | Creating projects] ******************** 2026-03-25 03:47:15.868010 | orchestrator | Wednesday 25 March 2026 03:47:00 +0000 (0:00:00.143) 0:00:03.537 ******* 2026-03-25 03:47:15.868026 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-25 03:47:15.868042 | orchestrator | 2026-03-25 03:47:15.868058 | orchestrator | TASK [service-ks-register : ceilometer | Creating users] *********************** 2026-03-25 03:47:15.868073 | orchestrator | Wednesday 25 March 2026 03:47:03 +0000 (0:00:03.580) 0:00:07.117 ******* 2026-03-25 03:47:15.868090 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-25 03:47:15.868107 | orchestrator | changed: [testbed-node-0] => (item=ceilometer -> service) 2026-03-25 03:47:15.868123 | orchestrator | 
2026-03-25 03:47:15.868140 | orchestrator | TASK [service-ks-register : ceilometer | Creating roles] *********************** 2026-03-25 03:47:15.868176 | orchestrator | Wednesday 25 March 2026 03:47:07 +0000 (0:00:03.696) 0:00:10.814 ******* 2026-03-25 03:47:15.868193 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-25 03:47:15.868210 | orchestrator | 2026-03-25 03:47:15.868262 | orchestrator | TASK [service-ks-register : ceilometer | Granting user roles] ****************** 2026-03-25 03:47:15.868281 | orchestrator | Wednesday 25 March 2026 03:47:10 +0000 (0:00:03.021) 0:00:13.835 ******* 2026-03-25 03:47:15.868298 | orchestrator | changed: [testbed-node-0] => (item=ceilometer -> service -> admin) 2026-03-25 03:47:15.868315 | orchestrator | 2026-03-25 03:47:15.868332 | orchestrator | TASK [ceilometer : Associate the ResellerAdmin role and ceilometer user] ******* 2026-03-25 03:47:15.868347 | orchestrator | Wednesday 25 March 2026 03:47:14 +0000 (0:00:03.888) 0:00:17.724 ******* 2026-03-25 03:47:15.868365 | orchestrator | skipping: [testbed-node-0] 2026-03-25 03:47:15.868382 | orchestrator | 2026-03-25 03:47:15.868399 | orchestrator | TASK [ceilometer : Ensuring config directories exist] ************************** 2026-03-25 03:47:15.868417 | orchestrator | Wednesday 25 March 2026 03:47:14 +0000 (0:00:00.133) 0:00:17.857 ******* 2026-03-25 03:47:15.868438 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-03-25 03:47:15.868487 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-03-25 03:47:15.868500 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-03-25 03:47:15.868526 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 
'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-03-25 03:47:15.868538 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-03-25 03:47:15.868556 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-03-25 03:47:15.868567 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 
'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-03-25 03:47:15.868586 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-03-25 03:47:21.075045 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-03-25 03:47:21.075211 | orchestrator | 2026-03-25 03:47:21.075223 | orchestrator | TASK [ceilometer : Check if the folder for custom meter definitions exist] ***** 2026-03-25 03:47:21.075233 | orchestrator | Wednesday 25 March 2026 03:47:15 +0000 (0:00:01.521) 0:00:19.379 ******* 2026-03-25 03:47:21.075239 | orchestrator | ok: [testbed-node-0 -> 
localhost] 2026-03-25 03:47:21.075247 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-25 03:47:21.075253 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-03-25 03:47:21.075259 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-03-25 03:47:21.075265 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-03-25 03:47:21.075271 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-03-25 03:47:21.075277 | orchestrator | 2026-03-25 03:47:21.075284 | orchestrator | TASK [ceilometer : Set variable that indicates if we have a folder for custom meter YAML files] *** 2026-03-25 03:47:21.075291 | orchestrator | Wednesday 25 March 2026 03:47:17 +0000 (0:00:01.718) 0:00:21.097 ******* 2026-03-25 03:47:21.075297 | orchestrator | ok: [testbed-node-0] 2026-03-25 03:47:21.075304 | orchestrator | ok: [testbed-node-1] 2026-03-25 03:47:21.075310 | orchestrator | ok: [testbed-node-2] 2026-03-25 03:47:21.075316 | orchestrator | ok: [testbed-node-3] 2026-03-25 03:47:21.075322 | orchestrator | ok: [testbed-node-4] 2026-03-25 03:47:21.075327 | orchestrator | ok: [testbed-node-5] 2026-03-25 03:47:21.075333 | orchestrator | 2026-03-25 03:47:21.075339 | orchestrator | TASK [ceilometer : Find all *.yaml files in custom meter definitions folder (if the folder exist)] *** 2026-03-25 03:47:21.075345 | orchestrator | Wednesday 25 March 2026 03:47:18 +0000 (0:00:00.658) 0:00:21.755 ******* 2026-03-25 03:47:21.075351 | orchestrator | skipping: [testbed-node-0] 2026-03-25 03:47:21.075358 | orchestrator | skipping: [testbed-node-1] 2026-03-25 03:47:21.075364 | orchestrator | skipping: [testbed-node-2] 2026-03-25 03:47:21.075370 | orchestrator | skipping: [testbed-node-3] 2026-03-25 03:47:21.075375 | orchestrator | skipping: [testbed-node-4] 2026-03-25 03:47:21.075381 | orchestrator | skipping: [testbed-node-5] 2026-03-25 03:47:21.075387 | orchestrator | 2026-03-25 03:47:21.075393 | orchestrator | TASK [ceilometer : Set the variable that control the copy of custom meter 
definitions] *** 2026-03-25 03:47:21.075401 | orchestrator | Wednesday 25 March 2026 03:47:19 +0000 (0:00:00.895) 0:00:22.651 ******* 2026-03-25 03:47:21.075406 | orchestrator | ok: [testbed-node-0] 2026-03-25 03:47:21.075412 | orchestrator | ok: [testbed-node-1] 2026-03-25 03:47:21.075418 | orchestrator | ok: [testbed-node-2] 2026-03-25 03:47:21.075425 | orchestrator | ok: [testbed-node-3] 2026-03-25 03:47:21.075434 | orchestrator | ok: [testbed-node-4] 2026-03-25 03:47:21.075444 | orchestrator | ok: [testbed-node-5] 2026-03-25 03:47:21.075453 | orchestrator | 2026-03-25 03:47:21.075516 | orchestrator | TASK [ceilometer : Create default folder for custom meter definitions] ********* 2026-03-25 03:47:21.075531 | orchestrator | Wednesday 25 March 2026 03:47:19 +0000 (0:00:00.698) 0:00:23.349 ******* 2026-03-25 03:47:21.075541 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-03-25 03:47:21.075553 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-25 03:47:21.075575 | orchestrator | skipping: [testbed-node-0] 2026-03-25 03:47:21.075609 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-03-25 03:47:21.075620 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-25 03:47:21.075639 | orchestrator | skipping: [testbed-node-1] 2026-03-25 03:47:21.075650 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': 
['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-03-25 03:47:21.075661 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-25 03:47:21.075698 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-03-25 03:47:21.075707 | orchestrator | skipping: [testbed-node-2] 2026-03-25 03:47:21.075717 | orchestrator | skipping: [testbed-node-3] 2026-03-25 03:47:21.075726 | orchestrator | skipping: [testbed-node-4] => (item={'key': 
'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-03-25 03:47:21.075743 | orchestrator | skipping: [testbed-node-4] 2026-03-25 03:47:21.075762 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-03-25 03:47:26.192815 | orchestrator | skipping: [testbed-node-5] 2026-03-25 03:47:26.192932 | orchestrator | 2026-03-25 03:47:26.192945 | orchestrator | TASK [ceilometer : Copying custom meter definitions to Ceilometer] ************* 2026-03-25 03:47:26.192956 | orchestrator | Wednesday 25 March 2026 03:47:21 +0000 (0:00:01.237) 0:00:24.586 ******* 2026-03-25 03:47:26.192967 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': 
{'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-03-25 03:47:26.192977 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-25 03:47:26.192984 | orchestrator | skipping: [testbed-node-0] 2026-03-25 03:47:26.193011 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-03-25 03:47:26.193018 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-25 03:47:26.193054 | orchestrator | skipping: [testbed-node-1] 2026-03-25 03:47:26.193061 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-03-25 03:47:26.193068 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': 
'30'}}})  2026-03-25 03:47:26.193075 | orchestrator | skipping: [testbed-node-2] 2026-03-25 03:47:26.193099 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-03-25 03:47:26.193104 | orchestrator | skipping: [testbed-node-3] 2026-03-25 03:47:26.193108 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-03-25 03:47:26.193113 | orchestrator | skipping: [testbed-node-4] 2026-03-25 03:47:26.193121 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 
'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-03-25 03:47:26.193131 | orchestrator | skipping: [testbed-node-5] 2026-03-25 03:47:26.193135 | orchestrator | 2026-03-25 03:47:26.193140 | orchestrator | TASK [ceilometer : Check if the folder ["/opt/configuration/environments/kolla/files/overlays/ceilometer/pollsters.d"] for dynamic pollsters definitions exist] *** 2026-03-25 03:47:26.193147 | orchestrator | Wednesday 25 March 2026 03:47:21 +0000 (0:00:00.916) 0:00:25.503 ******* 2026-03-25 03:47:26.193151 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-25 03:47:26.193155 | orchestrator | 2026-03-25 03:47:26.193159 | orchestrator | TASK [ceilometer : Set the variable that control the copy of dynamic pollsters definitions] *** 2026-03-25 03:47:26.193164 | orchestrator | Wednesday 25 March 2026 03:47:22 +0000 (0:00:00.754) 0:00:26.257 ******* 2026-03-25 03:47:26.193168 | orchestrator | ok: [testbed-node-0] 2026-03-25 03:47:26.193176 | orchestrator | ok: [testbed-node-1] 2026-03-25 03:47:26.193182 | orchestrator | ok: [testbed-node-2] 2026-03-25 03:47:26.193189 | orchestrator | ok: [testbed-node-3] 2026-03-25 03:47:26.193197 | orchestrator | ok: [testbed-node-4] 2026-03-25 03:47:26.193203 | orchestrator | ok: [testbed-node-5] 2026-03-25 03:47:26.193209 | orchestrator | 2026-03-25 03:47:26.193215 | orchestrator | TASK [ceilometer : Clean default folder for dynamic pollsters definitions] ***** 2026-03-25 03:47:26.193221 | orchestrator | Wednesday 25 March 2026 03:47:23 +0000 (0:00:00.903) 
0:00:27.161 ******* 2026-03-25 03:47:26.193227 | orchestrator | ok: [testbed-node-0] 2026-03-25 03:47:26.193232 | orchestrator | ok: [testbed-node-1] 2026-03-25 03:47:26.193237 | orchestrator | ok: [testbed-node-2] 2026-03-25 03:47:26.193243 | orchestrator | ok: [testbed-node-3] 2026-03-25 03:47:26.193249 | orchestrator | ok: [testbed-node-4] 2026-03-25 03:47:26.193255 | orchestrator | ok: [testbed-node-5] 2026-03-25 03:47:26.193261 | orchestrator | 2026-03-25 03:47:26.193267 | orchestrator | TASK [ceilometer : Create default folder for dynamic pollsters definitions] **** 2026-03-25 03:47:26.193273 | orchestrator | Wednesday 25 March 2026 03:47:24 +0000 (0:00:00.989) 0:00:28.151 ******* 2026-03-25 03:47:26.193287 | orchestrator | skipping: [testbed-node-0] 2026-03-25 03:47:26.193291 | orchestrator | skipping: [testbed-node-1] 2026-03-25 03:47:26.193300 | orchestrator | skipping: [testbed-node-2] 2026-03-25 03:47:26.193304 | orchestrator | skipping: [testbed-node-3] 2026-03-25 03:47:26.193308 | orchestrator | skipping: [testbed-node-4] 2026-03-25 03:47:26.193311 | orchestrator | skipping: [testbed-node-5] 2026-03-25 03:47:26.193315 | orchestrator | 2026-03-25 03:47:26.193319 | orchestrator | TASK [ceilometer : Copying dynamic pollsters definitions] ********************** 2026-03-25 03:47:26.193323 | orchestrator | Wednesday 25 March 2026 03:47:25 +0000 (0:00:00.905) 0:00:29.056 ******* 2026-03-25 03:47:26.193327 | orchestrator | skipping: [testbed-node-0] 2026-03-25 03:47:26.193331 | orchestrator | skipping: [testbed-node-1] 2026-03-25 03:47:26.193334 | orchestrator | skipping: [testbed-node-2] 2026-03-25 03:47:26.193338 | orchestrator | skipping: [testbed-node-3] 2026-03-25 03:47:26.193342 | orchestrator | skipping: [testbed-node-4] 2026-03-25 03:47:26.193345 | orchestrator | skipping: [testbed-node-5] 2026-03-25 03:47:26.193349 | orchestrator | 2026-03-25 03:47:31.711489 | orchestrator | TASK [ceilometer : Check if custom polling.yaml exists] 
************************ 2026-03-25 03:47:31.711575 | orchestrator | Wednesday 25 March 2026 03:47:26 +0000 (0:00:00.654) 0:00:29.710 ******* 2026-03-25 03:47:31.711581 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-25 03:47:31.711586 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-03-25 03:47:31.711590 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-03-25 03:47:31.711595 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-25 03:47:31.711599 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-03-25 03:47:31.711603 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-03-25 03:47:31.711607 | orchestrator | 2026-03-25 03:47:31.711611 | orchestrator | TASK [ceilometer : Copying over polling.yaml] ********************************** 2026-03-25 03:47:31.711615 | orchestrator | Wednesday 25 March 2026 03:47:27 +0000 (0:00:01.691) 0:00:31.402 ******* 2026-03-25 03:47:31.711621 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-03-25 03:47:31.711727 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-25 03:47:31.711737 | orchestrator | skipping: [testbed-node-0] 2026-03-25 03:47:31.711744 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-03-25 03:47:31.711750 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-25 03:47:31.711756 | orchestrator | skipping: [testbed-node-1] 2026-03-25 03:47:31.711762 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-03-25 03:47:31.711786 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-25 03:47:31.711792 | orchestrator | skipping: [testbed-node-2] 2026-03-25 03:47:31.711807 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-03-25 03:47:31.711815 | orchestrator | skipping: [testbed-node-3] 
2026-03-25 03:47:31.711826 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-03-25 03:47:31.711832 | orchestrator | skipping: [testbed-node-4]
2026-03-25 03:47:31.711838 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-03-25 03:47:31.711844 | orchestrator | skipping: [testbed-node-5]
2026-03-25 03:47:31.711851 | orchestrator |
2026-03-25 03:47:31.711857 | orchestrator | TASK [ceilometer : Set ceilometer polling file's path] *************************
2026-03-25 03:47:31.711864 | orchestrator | Wednesday 25 March 2026 03:47:28 +0000 (0:00:00.862) 0:00:32.265 *******
2026-03-25 03:47:31.711870 | orchestrator | skipping: [testbed-node-0]
2026-03-25 03:47:31.711876 | orchestrator | skipping: [testbed-node-1]
2026-03-25 03:47:31.711882 | orchestrator | skipping: [testbed-node-2]
2026-03-25 03:47:31.711889 | orchestrator | skipping: [testbed-node-3]
2026-03-25 03:47:31.711895 | orchestrator | skipping: [testbed-node-4]
2026-03-25 03:47:31.711901 | orchestrator | skipping: [testbed-node-5]
2026-03-25 03:47:31.711906 | orchestrator |
2026-03-25 03:47:31.711912 | orchestrator | TASK [ceilometer : Check custom gnocchi_resources.yaml exists] *****************
2026-03-25 03:47:31.711919 | orchestrator | Wednesday 25 March 2026 03:47:29 +0000 (0:00:00.878) 0:00:33.143 *******
2026-03-25 03:47:31.711925 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-25 03:47:31.711931 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-03-25 03:47:31.711937 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-03-25 03:47:31.711944 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-03-25 03:47:31.711951 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-03-25 03:47:31.711957 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-03-25 03:47:31.711963 | orchestrator |
2026-03-25 03:47:31.711969 | orchestrator | TASK [ceilometer : Copying over gnocchi_resources.yaml] ************************
2026-03-25 03:47:31.711976 | orchestrator | Wednesday 25 March 2026 03:47:31 +0000 (0:00:01.601) 0:00:34.744 *******
2026-03-25 03:47:31.711994 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-03-25 03:47:37.826323 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-03-25 03:47:37.826443 | orchestrator | skipping: [testbed-node-0]
2026-03-25 03:47:37.826463 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-03-25 03:47:37.826494 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-03-25 03:47:37.826509 | orchestrator | skipping: [testbed-node-1]
2026-03-25 03:47:37.826522 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-03-25 03:47:37.826535 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-03-25 03:47:37.826573 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-03-25 03:47:37.826586 | orchestrator | skipping: [testbed-node-2]
2026-03-25 03:47:37.826599 | orchestrator | skipping: [testbed-node-3]
2026-03-25 03:47:37.826633 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-03-25 03:47:37.826647 | orchestrator | skipping: [testbed-node-4]
2026-03-25 03:47:37.826689 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-03-25 03:47:37.826697 | orchestrator | skipping: [testbed-node-5]
2026-03-25 03:47:37.826705 | orchestrator |
2026-03-25 03:47:37.826713 | orchestrator | TASK [ceilometer : Set ceilometer gnocchi_resources file's path] ***************
2026-03-25 03:47:37.826721 | orchestrator | Wednesday 25 March 2026 03:47:32 +0000 (0:00:01.184) 0:00:35.929 *******
2026-03-25 03:47:37.826734 | orchestrator | skipping: [testbed-node-0]
2026-03-25 03:47:37.826742 | orchestrator | skipping: [testbed-node-1]
2026-03-25 03:47:37.826749 | orchestrator | skipping: [testbed-node-2]
2026-03-25 03:47:37.826756 | orchestrator | skipping: [testbed-node-3]
2026-03-25 03:47:37.826763 | orchestrator | skipping: [testbed-node-4]
2026-03-25 03:47:37.826770 | orchestrator | skipping: [testbed-node-5]
2026-03-25 03:47:37.826779 | orchestrator |
2026-03-25 03:47:37.826791 | orchestrator | TASK [ceilometer : Check if policies shall be overwritten] *********************
2026-03-25 03:47:37.826803 | orchestrator | Wednesday 25 March 2026 03:47:33 +0000 (0:00:00.143) 0:00:36.819 *******
2026-03-25 03:47:37.826814 | orchestrator | skipping: [testbed-node-0]
2026-03-25 03:47:37.826825 | orchestrator |
2026-03-25 03:47:37.826838 | orchestrator | TASK [ceilometer : Set ceilometer policy file] *********************************
2026-03-25 03:47:37.826852 | orchestrator | Wednesday 25 March 2026 03:47:33 +0000 (0:00:00.143) 0:00:36.963 *******
2026-03-25 03:47:37.826865 | orchestrator | skipping: [testbed-node-0]
2026-03-25 03:47:37.826877 | orchestrator | skipping: [testbed-node-1]
2026-03-25 03:47:37.826889 | orchestrator | skipping: [testbed-node-2]
2026-03-25 03:47:37.826897 | orchestrator | skipping: [testbed-node-3]
2026-03-25 03:47:37.826905 | orchestrator | skipping: [testbed-node-4]
2026-03-25 03:47:37.826912 | orchestrator | skipping: [testbed-node-5]
2026-03-25 03:47:37.826927 | orchestrator |
2026-03-25 03:47:37.826934 | orchestrator | TASK [ceilometer : include_tasks] **********************************************
2026-03-25 03:47:37.826941 | orchestrator | Wednesday 25 March 2026 03:47:34 +0000 (0:00:00.666) 0:00:37.630 *******
2026-03-25 03:47:37.826950 | orchestrator | included: /ansible/roles/ceilometer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-25 03:47:37.826959 | orchestrator |
2026-03-25 03:47:37.826966 | orchestrator | TASK [service-cert-copy : ceilometer | Copying over extra CA certificates] *****
2026-03-25 03:47:37.826973 | orchestrator | Wednesday 25 March 2026 03:47:35 +0000 (0:00:01.493) 0:00:39.123 *******
2026-03-25 03:47:37.826980 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-03-25 03:47:37.826997 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-03-25 03:47:38.421208 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-03-25 03:47:38.421284 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-03-25 03:47:38.421306 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-03-25 03:47:38.421330 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-03-25 03:47:38.421336 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-03-25 03:47:38.421341 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-03-25 03:47:38.421358 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-03-25 03:47:38.421363 | orchestrator |
2026-03-25 03:47:38.421369 | orchestrator | TASK [service-cert-copy : ceilometer | Copying over backend internal TLS certificate] ***
2026-03-25 03:47:38.421375 | orchestrator | Wednesday 25 March 2026 03:47:37 +0000 (0:00:02.221) 0:00:41.344 *******
2026-03-25 03:47:38.421381 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-03-25 03:47:38.421389 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-03-25 03:47:38.421398 | orchestrator | skipping: [testbed-node-0]
2026-03-25 03:47:38.421404 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-03-25 03:47:38.421408 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-03-25 03:47:38.421413 | orchestrator | skipping: [testbed-node-1] 2026-03-25 03:47:38.421418 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-03-25 03:47:38.421427 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-25 03:47:40.531234 | orchestrator | skipping: [testbed-node-2] 2026-03-25 03:47:40.531337 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 
'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-03-25 03:47:40.531353 | orchestrator | skipping: [testbed-node-3] 2026-03-25 03:47:40.531378 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-03-25 03:47:40.531407 | orchestrator | skipping: [testbed-node-4] 2026-03-25 03:47:40.531415 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-03-25 03:47:40.531422 | orchestrator | skipping: [testbed-node-5] 2026-03-25 
03:47:40.531429 | orchestrator | 2026-03-25 03:47:40.531438 | orchestrator | TASK [service-cert-copy : ceilometer | Copying over backend internal TLS key] *** 2026-03-25 03:47:40.531447 | orchestrator | Wednesday 25 March 2026 03:47:38 +0000 (0:00:00.956) 0:00:42.301 ******* 2026-03-25 03:47:40.531455 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-03-25 03:47:40.531463 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-25 03:47:40.531490 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-03-25 03:47:40.531500 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-25 03:47:40.531519 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-03-25 03:47:40.531525 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-25 03:47:40.531532 | orchestrator | skipping: [testbed-node-0] 2026-03-25 03:47:40.531539 | orchestrator | skipping: [testbed-node-1] 2026-03-25 03:47:40.531546 | orchestrator | skipping: [testbed-node-2] 2026-03-25 03:47:40.531553 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-03-25 03:47:40.531559 | orchestrator | skipping: [testbed-node-3] 2026-03-25 03:47:40.531566 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-03-25 03:47:40.531573 | orchestrator | skipping: [testbed-node-4] 2026-03-25 03:47:40.531588 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-03-25 03:47:48.355554 | orchestrator | skipping: [testbed-node-5] 2026-03-25 03:47:48.355700 | orchestrator | 2026-03-25 03:47:48.355714 | orchestrator | TASK [ceilometer : Copying over config.json files for services] **************** 2026-03-25 03:47:48.355724 | orchestrator | Wednesday 25 March 2026 03:47:40 +0000 (0:00:01.746) 0:00:44.048 ******* 2026-03-25 03:47:48.355749 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-03-25 03:47:48.355759 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-03-25 03:47:48.355767 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-03-25 03:47:48.355774 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-03-25 03:47:48.355781 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-03-25 03:47:48.355803 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-03-25 03:47:48.355841 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 
'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-03-25 03:47:48.355849 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-03-25 03:47:48.355857 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-03-25 03:47:48.355864 | orchestrator |
2026-03-25 03:47:48.355870 | orchestrator | TASK [ceilometer : Copying over ceilometer.conf] *******************************
2026-03-25 03:47:48.355877 | orchestrator | Wednesday 25 March 2026 03:47:43 +0000 (0:00:02.482) 0:00:46.530 *******
2026-03-25 03:47:48.355884 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 
'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-03-25 03:47:48.355891 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-03-25 03:47:48.355910 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-03-25 03:47:58.569905 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 
'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-03-25 03:47:58.570199 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-03-25 03:47:58.570234 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-03-25 03:47:58.570257 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-03-25 03:47:58.570279 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-03-25 03:47:58.570329 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-03-25 03:47:58.570349 | orchestrator |
2026-03-25 03:47:58.570364 | orchestrator | TASK [ceilometer : Check custom event_definitions.yaml exists] *****************
2026-03-25 03:47:58.570399 | orchestrator | Wednesday 25 March 2026 03:47:48 +0000 (0:00:05.346) 0:00:51.876 *******
2026-03-25 03:47:58.570412 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-25 03:47:58.570428 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-03-25 03:47:58.570446 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-03-25 03:47:58.570459 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-03-25 03:47:58.570470 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-03-25 03:47:58.570480 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-03-25 03:47:58.570491 | orchestrator |
2026-03-25 03:47:58.570502 | orchestrator | TASK [ceilometer : Copying over event_definitions.yaml] ************************
2026-03-25 03:47:58.570522 | orchestrator | Wednesday 25 March 2026 03:47:50 +0000 (0:00:01.790) 0:00:53.667 *******
2026-03-25 03:47:58.570533 | orchestrator | skipping: [testbed-node-0]
2026-03-25 03:47:58.570544 | orchestrator | skipping: [testbed-node-1]
2026-03-25 03:47:58.570555 | orchestrator | skipping: [testbed-node-2]
2026-03-25 03:47:58.570565 | orchestrator | skipping: [testbed-node-3]
2026-03-25 03:47:58.570576 | orchestrator | skipping: [testbed-node-4]
2026-03-25 03:47:58.570586 | orchestrator | skipping: [testbed-node-5]
2026-03-25 03:47:58.570597 | orchestrator |
2026-03-25 03:47:58.570608 | orchestrator | TASK [ceilometer : Copying over event_definitions.yaml for notification service] ***
2026-03-25 03:47:58.570619 | orchestrator | Wednesday 25 March 2026 03:47:50 +0000 (0:00:00.684) 0:00:54.352 *******
2026-03-25 03:47:58.570630 | orchestrator | skipping: [testbed-node-3]
2026-03-25 03:47:58.570640 | orchestrator | skipping: [testbed-node-4]
2026-03-25 03:47:58.570678 | orchestrator | changed: [testbed-node-0]
2026-03-25 03:47:58.570691 | orchestrator | skipping: [testbed-node-5]
2026-03-25 03:47:58.570715 | orchestrator | changed: [testbed-node-1]
2026-03-25 03:47:58.570725 | orchestrator | changed: [testbed-node-2]
2026-03-25 03:47:58.570736 | orchestrator |
2026-03-25 03:47:58.570746 | orchestrator | TASK [ceilometer : Copying over event_pipeline.yaml] ***************************
2026-03-25 03:47:58.570757 | orchestrator | Wednesday 25 March 2026 03:47:52 +0000 (0:00:01.742) 0:00:56.094 *******
2026-03-25 03:47:58.570768 | orchestrator | skipping: [testbed-node-3]
2026-03-25 03:47:58.570778 | orchestrator | skipping: [testbed-node-4]
2026-03-25 03:47:58.570789 | orchestrator | skipping: [testbed-node-5]
2026-03-25 03:47:58.570799 | orchestrator | changed: [testbed-node-0]
2026-03-25 03:47:58.570809 | orchestrator | changed: [testbed-node-1]
2026-03-25 03:47:58.570820 | orchestrator | changed: [testbed-node-2]
2026-03-25 03:47:58.570830 | orchestrator |
2026-03-25 03:47:58.570841 | orchestrator | TASK [ceilometer : Check custom pipeline.yaml exists] **************************
2026-03-25 03:47:58.570851 | orchestrator | Wednesday 25 March 2026 03:47:54 +0000 (0:00:01.547) 0:00:57.641 *******
2026-03-25 03:47:58.570862 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-25 03:47:58.570873 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-03-25 03:47:58.570883 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-03-25 03:47:58.570894 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-03-25 03:47:58.570909 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-03-25 03:47:58.570948 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-03-25 03:47:58.570974 | orchestrator |
2026-03-25 03:47:58.570992 | orchestrator | TASK [ceilometer : Copying over custom pipeline.yaml file] *********************
2026-03-25 03:47:58.571008 | orchestrator | Wednesday 25 March 2026 03:47:56 +0000 
(0:00:01.962) 0:00:59.604 *******
2026-03-25 03:47:58.571027 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-03-25 03:47:58.571048 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-03-25 03:47:58.571069 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-03-25 03:47:58.571113 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-03-25 03:47:59.633549 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-03-25 03:47:59.633703 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': 
['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-03-25 03:47:59.633767 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-03-25 03:47:59.633793 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-03-25 03:47:59.633806 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 
'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-03-25 03:47:59.633818 | orchestrator |
2026-03-25 03:47:59.633832 | orchestrator | TASK [ceilometer : Copying over pipeline.yaml file] ****************************
2026-03-25 03:47:59.633845 | orchestrator | Wednesday 25 March 2026 03:47:58 +0000 (0:00:02.480) 0:01:02.085 *******
2026-03-25 03:47:59.633875 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-03-25 03:47:59.633913 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-03-25 03:47:59.633926 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-03-25 03:47:59.633942 | orchestrator | skipping: [testbed-node-0]
2026-03-25 03:47:59.633951 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-03-25 03:47:59.633959 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-03-25 03:47:59.633967 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-03-25 03:47:59.633974 | orchestrator | skipping: [testbed-node-1]
2026-03-25 03:47:59.633986 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-03-25 03:47:59.633994 | orchestrator | skipping: [testbed-node-2]
2026-03-25 03:47:59.634002 | orchestrator | skipping: [testbed-node-3]
2026-03-25 03:47:59.634126 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': 
['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-03-25 03:48:03.437296 | orchestrator | skipping: [testbed-node-4]
2026-03-25 03:48:03.437383 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-03-25 03:48:03.437393 | orchestrator | skipping: [testbed-node-5]
2026-03-25 03:48:03.437399 | orchestrator |
2026-03-25 03:48:03.437406 | orchestrator | TASK [ceilometer : Copying VMware vCenter CA file] *****************************
2026-03-25 03:48:03.437414 | orchestrator | Wednesday 25 March 2026 03:47:59 +0000 (0:00:01.070) 0:01:03.156 *******
2026-03-25 03:48:03.437420 | orchestrator | skipping: [testbed-node-0]
2026-03-25 03:48:03.437426 | orchestrator | skipping: [testbed-node-1]
2026-03-25 03:48:03.437432 | orchestrator | skipping: [testbed-node-2]
2026-03-25 03:48:03.437438 | orchestrator | skipping: [testbed-node-3]
2026-03-25 03:48:03.437443 | orchestrator | skipping: [testbed-node-4]
2026-03-25 
03:48:03.437449 | orchestrator | skipping: [testbed-node-5]
2026-03-25 03:48:03.437455 | orchestrator |
2026-03-25 03:48:03.437462 | orchestrator | TASK [ceilometer : Copying over existing policy file] **************************
2026-03-25 03:48:03.437468 | orchestrator | Wednesday 25 March 2026 03:48:00 +0000 (0:00:00.916) 0:01:04.072 *******
2026-03-25 03:48:03.437475 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-03-25 03:48:03.437482 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-03-25 03:48:03.437489 | orchestrator | skipping: [testbed-node-0]
2026-03-25 03:48:03.437520 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-03-25 03:48:03.437555 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-03-25 03:48:03.437576 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-03-25 03:48:03.437584 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-03-25 03:48:03.437590 | orchestrator | skipping: [testbed-node-1]
2026-03-25 03:48:03.437596 | orchestrator | skipping: [testbed-node-2]
2026-03-25 03:48:03.437602 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-03-25 03:48:03.437608 | orchestrator | skipping: [testbed-node-3]
2026-03-25 03:48:03.437614 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-03-25 03:48:03.437620 | orchestrator | skipping: [testbed-node-4] 2026-03-25 03:48:03.437631 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-03-25 03:48:03.437641 | orchestrator | skipping: [testbed-node-5] 2026-03-25 03:48:03.437692 | orchestrator | 2026-03-25 03:48:03.437700 | orchestrator | TASK [ceilometer : Check ceilometer containers] ******************************** 2026-03-25 03:48:03.437706 | orchestrator | Wednesday 25 March 2026 03:48:01 +0000 (0:00:01.025) 0:01:05.098 ******* 2026-03-25 03:48:03.437717 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-03-25 03:48:35.776852 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-03-25 03:48:35.776936 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-03-25 03:48:35.776945 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 
'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-03-25 03:48:35.776952 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-03-25 03:48:35.776975 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-03-25 03:48:35.776982 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-03-25 03:48:35.777000 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-03-25 03:48:35.777005 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-03-25 03:48:35.777010 | orchestrator | 2026-03-25 03:48:35.777016 | orchestrator | TASK [ceilometer : include_tasks] ********************************************** 2026-03-25 03:48:35.777025 | orchestrator | Wednesday 25 March 2026 03:48:03 +0000 (0:00:01.859) 0:01:06.957 ******* 2026-03-25 03:48:35.777033 | orchestrator | 
skipping: [testbed-node-0] 2026-03-25 03:48:35.777046 | orchestrator | skipping: [testbed-node-1] 2026-03-25 03:48:35.777055 | orchestrator | skipping: [testbed-node-2] 2026-03-25 03:48:35.777062 | orchestrator | skipping: [testbed-node-3] 2026-03-25 03:48:35.777070 | orchestrator | skipping: [testbed-node-4] 2026-03-25 03:48:35.777077 | orchestrator | skipping: [testbed-node-5] 2026-03-25 03:48:35.777086 | orchestrator | 2026-03-25 03:48:35.777093 | orchestrator | TASK [ceilometer : Running Ceilometer bootstrap container] ********************* 2026-03-25 03:48:35.777100 | orchestrator | Wednesday 25 March 2026 03:48:04 +0000 (0:00:00.747) 0:01:07.705 ******* 2026-03-25 03:48:35.777108 | orchestrator | changed: [testbed-node-0] 2026-03-25 03:48:35.777117 | orchestrator | 2026-03-25 03:48:35.777125 | orchestrator | TASK [ceilometer : Flush handlers] ********************************************* 2026-03-25 03:48:35.777132 | orchestrator | Wednesday 25 March 2026 03:48:08 +0000 (0:00:04.542) 0:01:12.247 ******* 2026-03-25 03:48:35.777141 | orchestrator | 2026-03-25 03:48:35.777157 | orchestrator | TASK [ceilometer : Flush handlers] ********************************************* 2026-03-25 03:48:35.777166 | orchestrator | Wednesday 25 March 2026 03:48:08 +0000 (0:00:00.077) 0:01:12.324 ******* 2026-03-25 03:48:35.777172 | orchestrator | 2026-03-25 03:48:35.777176 | orchestrator | TASK [ceilometer : Flush handlers] ********************************************* 2026-03-25 03:48:35.777190 | orchestrator | Wednesday 25 March 2026 03:48:08 +0000 (0:00:00.094) 0:01:12.419 ******* 2026-03-25 03:48:35.777195 | orchestrator | 2026-03-25 03:48:35.777206 | orchestrator | TASK [ceilometer : Flush handlers] ********************************************* 2026-03-25 03:48:35.777211 | orchestrator | Wednesday 25 March 2026 03:48:09 +0000 (0:00:00.319) 0:01:12.739 ******* 2026-03-25 03:48:35.777215 | orchestrator | 2026-03-25 03:48:35.777220 | orchestrator | TASK [ceilometer : Flush 
handlers] ********************************************* 2026-03-25 03:48:35.777224 | orchestrator | Wednesday 25 March 2026 03:48:09 +0000 (0:00:00.083) 0:01:12.823 ******* 2026-03-25 03:48:35.777229 | orchestrator | 2026-03-25 03:48:35.777233 | orchestrator | TASK [ceilometer : Flush handlers] ********************************************* 2026-03-25 03:48:35.777238 | orchestrator | Wednesday 25 March 2026 03:48:09 +0000 (0:00:00.075) 0:01:12.898 ******* 2026-03-25 03:48:35.777242 | orchestrator | 2026-03-25 03:48:35.777247 | orchestrator | RUNNING HANDLER [ceilometer : Restart ceilometer-notification container] ******* 2026-03-25 03:48:35.777251 | orchestrator | Wednesday 25 March 2026 03:48:09 +0000 (0:00:00.080) 0:01:12.978 ******* 2026-03-25 03:48:35.777256 | orchestrator | changed: [testbed-node-0] 2026-03-25 03:48:35.777261 | orchestrator | changed: [testbed-node-1] 2026-03-25 03:48:35.777265 | orchestrator | changed: [testbed-node-2] 2026-03-25 03:48:35.777270 | orchestrator | 2026-03-25 03:48:35.777274 | orchestrator | RUNNING HANDLER [ceilometer : Restart ceilometer-central container] ************ 2026-03-25 03:48:35.777279 | orchestrator | Wednesday 25 March 2026 03:48:19 +0000 (0:00:10.270) 0:01:23.249 ******* 2026-03-25 03:48:35.777284 | orchestrator | changed: [testbed-node-0] 2026-03-25 03:48:35.777288 | orchestrator | changed: [testbed-node-1] 2026-03-25 03:48:35.777293 | orchestrator | changed: [testbed-node-2] 2026-03-25 03:48:35.777297 | orchestrator | 2026-03-25 03:48:35.777302 | orchestrator | RUNNING HANDLER [ceilometer : Restart ceilometer-compute container] ************ 2026-03-25 03:48:35.777307 | orchestrator | Wednesday 25 March 2026 03:48:24 +0000 (0:00:04.696) 0:01:27.946 ******* 2026-03-25 03:48:35.777311 | orchestrator | changed: [testbed-node-4] 2026-03-25 03:48:35.777316 | orchestrator | changed: [testbed-node-5] 2026-03-25 03:48:35.777320 | orchestrator | changed: [testbed-node-3] 2026-03-25 03:48:35.777325 | orchestrator | 
2026-03-25 03:48:35.777329 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-25 03:48:35.777335 | orchestrator | testbed-node-0 : ok=29  changed=13  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2026-03-25 03:48:35.777342 | orchestrator | testbed-node-1 : ok=23  changed=10  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-03-25 03:48:35.777352 | orchestrator | testbed-node-2 : ok=23  changed=10  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-03-25 03:48:36.347995 | orchestrator | testbed-node-3 : ok=20  changed=7  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2026-03-25 03:48:36.348101 | orchestrator | testbed-node-4 : ok=20  changed=7  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2026-03-25 03:48:36.348118 | orchestrator | testbed-node-5 : ok=20  changed=7  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2026-03-25 03:48:36.348130 | orchestrator | 2026-03-25 03:48:36.348142 | orchestrator | 2026-03-25 03:48:36.348154 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-25 03:48:36.348166 | orchestrator | Wednesday 25 March 2026 03:48:35 +0000 (0:00:11.341) 0:01:39.287 ******* 2026-03-25 03:48:36.348204 | orchestrator | =============================================================================== 2026-03-25 03:48:36.348216 | orchestrator | ceilometer : Restart ceilometer-compute container ---------------------- 11.34s 2026-03-25 03:48:36.348227 | orchestrator | ceilometer : Restart ceilometer-notification container ----------------- 10.27s 2026-03-25 03:48:36.348238 | orchestrator | ceilometer : Copying over ceilometer.conf ------------------------------- 5.35s 2026-03-25 03:48:36.348249 | orchestrator | ceilometer : Restart ceilometer-central container ----------------------- 4.70s 2026-03-25 03:48:36.348259 | orchestrator | ceilometer : Running Ceilometer bootstrap container --------------------- 
4.54s 2026-03-25 03:48:36.348270 | orchestrator | service-ks-register : ceilometer | Granting user roles ------------------ 3.89s 2026-03-25 03:48:36.348281 | orchestrator | service-ks-register : ceilometer | Creating users ----------------------- 3.70s 2026-03-25 03:48:36.348292 | orchestrator | service-ks-register : ceilometer | Creating projects -------------------- 3.58s 2026-03-25 03:48:36.348302 | orchestrator | service-ks-register : ceilometer | Creating roles ----------------------- 3.02s 2026-03-25 03:48:36.348313 | orchestrator | ceilometer : Copying over config.json files for services ---------------- 2.48s 2026-03-25 03:48:36.348324 | orchestrator | ceilometer : Copying over custom pipeline.yaml file --------------------- 2.48s 2026-03-25 03:48:36.348334 | orchestrator | service-cert-copy : ceilometer | Copying over extra CA certificates ----- 2.22s 2026-03-25 03:48:36.348345 | orchestrator | ceilometer : Check custom pipeline.yaml exists -------------------------- 1.96s 2026-03-25 03:48:36.348381 | orchestrator | ceilometer : Check ceilometer containers -------------------------------- 1.86s 2026-03-25 03:48:36.348393 | orchestrator | ceilometer : Check custom event_definitions.yaml exists ----------------- 1.79s 2026-03-25 03:48:36.348404 | orchestrator | service-cert-copy : ceilometer | Copying over backend internal TLS key --- 1.75s 2026-03-25 03:48:36.348414 | orchestrator | ceilometer : Copying over event_definitions.yaml for notification service --- 1.74s 2026-03-25 03:48:36.348426 | orchestrator | ceilometer : Check if the folder for custom meter definitions exist ----- 1.72s 2026-03-25 03:48:36.348437 | orchestrator | ceilometer : Check if custom polling.yaml exists ------------------------ 1.69s 2026-03-25 03:48:36.348448 | orchestrator | ceilometer : Check custom gnocchi_resources.yaml exists ----------------- 1.60s 2026-03-25 03:48:39.101507 | orchestrator | 2026-03-25 03:48:39 | INFO  | Task 21d839c8-9e83-4cde-b894-9bfca8b922e4 (aodh) was 
prepared for execution. 2026-03-25 03:48:39.101603 | orchestrator | 2026-03-25 03:48:39 | INFO  | It takes a moment until task 21d839c8-9e83-4cde-b894-9bfca8b922e4 (aodh) has been started and output is visible here. 2026-03-25 03:49:10.493942 | orchestrator | 2026-03-25 03:49:10.494159 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-25 03:49:10.494185 | orchestrator | 2026-03-25 03:49:10.494198 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-25 03:49:10.494213 | orchestrator | Wednesday 25 March 2026 03:48:43 +0000 (0:00:00.320) 0:00:00.320 ******* 2026-03-25 03:49:10.494227 | orchestrator | ok: [testbed-node-0] 2026-03-25 03:49:10.494242 | orchestrator | ok: [testbed-node-1] 2026-03-25 03:49:10.494254 | orchestrator | ok: [testbed-node-2] 2026-03-25 03:49:10.494267 | orchestrator | 2026-03-25 03:49:10.494281 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-25 03:49:10.494293 | orchestrator | Wednesday 25 March 2026 03:48:44 +0000 (0:00:00.380) 0:00:00.700 ******* 2026-03-25 03:49:10.494307 | orchestrator | ok: [testbed-node-0] => (item=enable_aodh_True) 2026-03-25 03:49:10.494321 | orchestrator | ok: [testbed-node-1] => (item=enable_aodh_True) 2026-03-25 03:49:10.494335 | orchestrator | ok: [testbed-node-2] => (item=enable_aodh_True) 2026-03-25 03:49:10.494346 | orchestrator | 2026-03-25 03:49:10.494354 | orchestrator | PLAY [Apply role aodh] ********************************************************* 2026-03-25 03:49:10.494363 | orchestrator | 2026-03-25 03:49:10.494371 | orchestrator | TASK [aodh : include_tasks] **************************************************** 2026-03-25 03:49:10.494379 | orchestrator | Wednesday 25 March 2026 03:48:44 +0000 (0:00:00.481) 0:00:01.182 ******* 2026-03-25 03:49:10.494417 | orchestrator | included: /ansible/roles/aodh/tasks/deploy.yml for testbed-node-0, 
testbed-node-1, testbed-node-2 2026-03-25 03:49:10.494432 | orchestrator | 2026-03-25 03:49:10.494443 | orchestrator | TASK [service-ks-register : aodh | Creating services] ************************** 2026-03-25 03:49:10.494454 | orchestrator | Wednesday 25 March 2026 03:48:45 +0000 (0:00:00.657) 0:00:01.839 ******* 2026-03-25 03:49:10.494465 | orchestrator | changed: [testbed-node-0] => (item=aodh (alarming)) 2026-03-25 03:49:10.494478 | orchestrator | 2026-03-25 03:49:10.494490 | orchestrator | TASK [service-ks-register : aodh | Creating endpoints] ************************* 2026-03-25 03:49:10.494503 | orchestrator | Wednesday 25 March 2026 03:48:48 +0000 (0:00:03.375) 0:00:05.215 ******* 2026-03-25 03:49:10.494516 | orchestrator | changed: [testbed-node-0] => (item=aodh -> https://api-int.testbed.osism.xyz:8042 -> internal) 2026-03-25 03:49:10.494530 | orchestrator | changed: [testbed-node-0] => (item=aodh -> https://api.testbed.osism.xyz:8042 -> public) 2026-03-25 03:49:10.494542 | orchestrator | 2026-03-25 03:49:10.494553 | orchestrator | TASK [service-ks-register : aodh | Creating projects] ************************** 2026-03-25 03:49:10.494564 | orchestrator | Wednesday 25 March 2026 03:48:54 +0000 (0:00:06.090) 0:00:11.305 ******* 2026-03-25 03:49:10.494577 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-25 03:49:10.494590 | orchestrator | 2026-03-25 03:49:10.494603 | orchestrator | TASK [service-ks-register : aodh | Creating users] ***************************** 2026-03-25 03:49:10.494615 | orchestrator | Wednesday 25 March 2026 03:48:58 +0000 (0:00:03.321) 0:00:14.627 ******* 2026-03-25 03:49:10.494656 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-25 03:49:10.494671 | orchestrator | changed: [testbed-node-0] => (item=aodh -> service) 2026-03-25 03:49:10.494685 | orchestrator | 2026-03-25 03:49:10.494700 | orchestrator | TASK [service-ks-register : aodh | Creating roles] ***************************** 
2026-03-25 03:49:10.494713 | orchestrator | Wednesday 25 March 2026 03:49:01 +0000 (0:00:03.726) 0:00:18.353 ******* 2026-03-25 03:49:10.494728 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-25 03:49:10.494741 | orchestrator | 2026-03-25 03:49:10.494753 | orchestrator | TASK [service-ks-register : aodh | Granting user roles] ************************ 2026-03-25 03:49:10.494764 | orchestrator | Wednesday 25 March 2026 03:49:04 +0000 (0:00:03.064) 0:00:21.418 ******* 2026-03-25 03:49:10.494777 | orchestrator | changed: [testbed-node-0] => (item=aodh -> service -> admin) 2026-03-25 03:49:10.494790 | orchestrator | 2026-03-25 03:49:10.494804 | orchestrator | TASK [aodh : Ensuring config directories exist] ******************************** 2026-03-25 03:49:10.494818 | orchestrator | Wednesday 25 March 2026 03:49:08 +0000 (0:00:03.532) 0:00:24.950 ******* 2026-03-25 03:49:10.494836 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-25 03:49:10.494882 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-25 03:49:10.494912 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-25 03:49:10.494928 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-03-25 03:49:10.494944 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-03-25 03:49:10.494958 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-03-25 03:49:10.494973 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-03-25 03:49:10.494992 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-03-25 03:49:11.975038 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-03-25 03:49:11.975169 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-03-25 
03:49:11.975185 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-03-25 03:49:11.975197 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-03-25 03:49:11.975208 | orchestrator | 2026-03-25 03:49:11.975221 | orchestrator | TASK [aodh : Check if policies shall be overwritten] *************************** 2026-03-25 03:49:11.975233 | orchestrator | Wednesday 25 March 2026 03:49:10 +0000 (0:00:01.974) 0:00:26.925 ******* 2026-03-25 03:49:11.975242 | orchestrator | skipping: [testbed-node-0] 2026-03-25 03:49:11.975252 | orchestrator | 2026-03-25 03:49:11.975263 | orchestrator | TASK [aodh : Set aodh policy file] ********************************************* 2026-03-25 03:49:11.975272 | orchestrator | Wednesday 25 March 2026 03:49:10 +0000 (0:00:00.149) 0:00:27.074 ******* 2026-03-25 03:49:11.975281 | orchestrator | skipping: [testbed-node-0] 2026-03-25 03:49:11.975291 | orchestrator | skipping: 
[testbed-node-1] 2026-03-25 03:49:11.975302 | orchestrator | skipping: [testbed-node-2] 2026-03-25 03:49:11.975312 | orchestrator | 2026-03-25 03:49:11.975321 | orchestrator | TASK [aodh : Copying over existing policy file] ******************************** 2026-03-25 03:49:11.975331 | orchestrator | Wednesday 25 March 2026 03:49:11 +0000 (0:00:00.570) 0:00:27.644 ******* 2026-03-25 03:49:11.975342 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-03-25 03:49:11.975401 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-25 03:49:11.975410 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-25 03:49:11.975416 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-25 03:49:11.975423 | orchestrator | skipping: [testbed-node-0] 2026-03-25 03:49:11.975429 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-03-25 03:49:11.975435 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-25 03:49:11.975447 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-25 03:49:11.975459 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 
'timeout': '30'}}})  2026-03-25 03:49:16.915999 | orchestrator | skipping: [testbed-node-1] 2026-03-25 03:49:16.916110 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-03-25 03:49:16.916127 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-25 03:49:16.916139 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-25 03:49:16.916150 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-25 03:49:16.916188 | orchestrator | skipping: [testbed-node-2] 2026-03-25 03:49:16.916200 | orchestrator | 2026-03-25 03:49:16.916211 | orchestrator | TASK [aodh : include_tasks] **************************************************** 2026-03-25 03:49:16.916222 | orchestrator | Wednesday 25 March 2026 03:49:11 +0000 (0:00:00.756) 0:00:28.400 ******* 2026-03-25 03:49:16.916232 | orchestrator | included: /ansible/roles/aodh/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-25 03:49:16.916243 | orchestrator | 2026-03-25 03:49:16.916252 | orchestrator | TASK [service-cert-copy : aodh | Copying over extra CA certificates] *********** 2026-03-25 03:49:16.916261 | orchestrator | Wednesday 25 March 2026 03:49:12 +0000 (0:00:00.827) 0:00:29.228 ******* 2026-03-25 03:49:16.916270 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': 
['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-25 03:49:16.916297 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-25 03:49:16.916308 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-25 03:49:16.916318 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-03-25 03:49:16.916327 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-03-25 03:49:16.916345 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': 
['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-03-25 03:49:16.916354 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-03-25 03:49:16.916371 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-03-25 03:49:17.668309 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-03-25 03:49:17.668395 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-03-25 03:49:17.668404 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-03-25 03:49:17.668429 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-03-25 03:49:17.668435 | orchestrator | 2026-03-25 03:49:17.668441 | orchestrator | TASK [service-cert-copy : aodh | Copying over backend internal TLS certificate] *** 2026-03-25 03:49:17.668448 | orchestrator | Wednesday 25 March 2026 03:49:16 +0000 (0:00:04.119) 0:00:33.348 ******* 2026-03-25 03:49:17.668458 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-03-25 03:49:17.668464 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-25 03:49:17.668482 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 
'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-25 03:49:17.668487 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-25 03:49:17.668492 | orchestrator | skipping: [testbed-node-0] 2026-03-25 03:49:17.668498 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-03-25 03:49:17.668507 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-25 03:49:17.668512 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-25 03:49:17.668517 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  
2026-03-25 03:49:17.668522 | orchestrator | skipping: [testbed-node-1] 2026-03-25 03:49:17.668532 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-03-25 03:49:18.856131 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-25 03:49:18.856280 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-25 03:49:18.856304 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-25 03:49:18.856322 | orchestrator | skipping: [testbed-node-2] 2026-03-25 03:49:18.856339 | orchestrator | 2026-03-25 03:49:18.856355 | orchestrator | TASK [service-cert-copy : aodh | Copying over backend internal TLS key] ******** 2026-03-25 03:49:18.856372 | orchestrator | Wednesday 25 March 2026 03:49:17 +0000 (0:00:00.753) 0:00:34.101 ******* 2026-03-25 03:49:18.856389 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-03-25 03:49:18.856405 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-25 03:49:18.856422 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-25 03:49:18.856461 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 
'timeout': '30'}}})  2026-03-25 03:49:18.856489 | orchestrator | skipping: [testbed-node-0] 2026-03-25 03:49:18.856504 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-03-25 03:49:18.856520 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-25 03:49:18.856530 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-25 03:49:18.856539 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-25 03:49:18.856548 | orchestrator | skipping: [testbed-node-1] 2026-03-25 03:49:18.856565 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-03-25 03:49:22.768528 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': 
{'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-25 03:49:22.768710 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-25 03:49:22.768725 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-25 03:49:22.768733 | orchestrator | skipping: [testbed-node-2] 2026-03-25 03:49:22.768742 | orchestrator | 2026-03-25 03:49:22.768750 | orchestrator | TASK [aodh : Copying over config.json files for services] ********************** 
2026-03-25 03:49:22.768759 | orchestrator | Wednesday 25 March 2026 03:49:18 +0000 (0:00:01.188) 0:00:35.290 ******* 2026-03-25 03:49:22.768766 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-25 03:49:22.768776 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-25 03:49:22.768798 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-25 03:49:22.768819 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-03-25 03:49:22.768832 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-03-25 03:49:22.768844 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-03-25 03:49:22.768858 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-03-25 03:49:22.768870 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-03-25 03:49:22.768882 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-03-25 03:49:22.768903 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-03-25 03:49:32.022408 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-03-25 03:49:32.022517 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-03-25 03:49:32.022543 | orchestrator | 2026-03-25 03:49:32.022556 | orchestrator | TASK [aodh : Copying over aodh.conf] ******************************************* 2026-03-25 03:49:32.022569 | orchestrator | Wednesday 25 March 2026 03:49:22 +0000 (0:00:03.912) 0:00:39.202 ******* 2026-03-25 03:49:32.022582 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-25 03:49:32.022596 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-25 03:49:32.022719 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-25 03:49:32.022755 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-03-25 03:49:32.022767 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-03-25 03:49:32.022773 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-03-25 03:49:32.022780 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-03-25 03:49:32.022786 | orchestrator 
| changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-03-25 03:49:32.022797 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-03-25 03:49:32.022804 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-03-25 03:49:32.022816 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-03-25 03:49:37.103939 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-03-25 03:49:37.104027 | orchestrator | 2026-03-25 03:49:37.104035 | orchestrator | TASK [aodh : Copying over wsgi-aodh files for services] ************************ 2026-03-25 03:49:37.104041 | orchestrator | Wednesday 25 March 2026 03:49:32 +0000 (0:00:09.244) 0:00:48.447 ******* 2026-03-25 03:49:37.104045 | orchestrator | changed: [testbed-node-0] 2026-03-25 03:49:37.104050 | orchestrator | changed: [testbed-node-1] 2026-03-25 03:49:37.104054 | orchestrator | changed: [testbed-node-2] 2026-03-25 03:49:37.104058 | orchestrator | 2026-03-25 03:49:37.104062 | orchestrator | TASK [aodh : Check aodh containers] ******************************************** 2026-03-25 03:49:37.104066 | orchestrator | Wednesday 25 March 2026 03:49:33 +0000 (0:00:01.801) 0:00:50.248 ******* 2026-03-25 03:49:37.104072 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 
'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-25 03:49:37.104119 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-25 03:49:37.104125 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-25 03:49:37.104141 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-03-25 03:49:37.104146 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-03-25 03:49:37.104150 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 
'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-03-25 03:49:37.104154 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-03-25 03:49:37.104163 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-03-25 03:49:37.104167 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': 
['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-03-25 03:49:37.104172 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-03-25 03:49:37.104180 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-03-25 03:50:30.145421 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-03-25 03:50:30.145534 | orchestrator | 2026-03-25 03:50:30.145553 | orchestrator | TASK [aodh : include_tasks] **************************************************** 2026-03-25 03:50:30.145567 | orchestrator | Wednesday 25 March 2026 03:49:37 +0000 (0:00:03.285) 0:00:53.534 ******* 2026-03-25 03:50:30.145580 | orchestrator | skipping: [testbed-node-0] 2026-03-25 03:50:30.145636 | orchestrator | skipping: [testbed-node-1] 2026-03-25 03:50:30.145645 | orchestrator | skipping: [testbed-node-2] 2026-03-25 03:50:30.145652 | orchestrator | 2026-03-25 03:50:30.145660 | orchestrator | TASK [aodh : Creating aodh database] ******************************************* 2026-03-25 03:50:30.145667 | orchestrator | Wednesday 25 March 2026 03:49:37 +0000 (0:00:00.411) 0:00:53.946 ******* 2026-03-25 03:50:30.145674 | orchestrator | changed: [testbed-node-0] 2026-03-25 03:50:30.145681 | orchestrator | 2026-03-25 03:50:30.145688 | orchestrator | TASK [aodh : Creating aodh database user and setting permissions] ************** 2026-03-25 03:50:30.145718 | orchestrator | Wednesday 25 March 2026 03:49:39 +0000 (0:00:02.017) 0:00:55.964 ******* 2026-03-25 03:50:30.145725 | orchestrator | changed: [testbed-node-0] 2026-03-25 03:50:30.145732 | orchestrator | 2026-03-25 03:50:30.145738 | orchestrator | TASK [aodh : Running aodh bootstrap container] ********************************* 2026-03-25 03:50:30.145745 | orchestrator | Wednesday 25 March 2026 03:49:41 +0000 (0:00:02.110) 0:00:58.074 ******* 2026-03-25 03:50:30.145752 | orchestrator | changed: [testbed-node-0] 2026-03-25 03:50:30.145759 | orchestrator | 2026-03-25 03:50:30.145765 | orchestrator | TASK [aodh : Flush handlers] *************************************************** 2026-03-25 03:50:30.145772 | orchestrator | Wednesday 
25 March 2026 03:49:54 +0000 (0:00:12.550) 0:01:10.625 ******* 2026-03-25 03:50:30.145779 | orchestrator | 2026-03-25 03:50:30.145786 | orchestrator | TASK [aodh : Flush handlers] *************************************************** 2026-03-25 03:50:30.145792 | orchestrator | Wednesday 25 March 2026 03:49:54 +0000 (0:00:00.090) 0:01:10.716 ******* 2026-03-25 03:50:30.145799 | orchestrator | 2026-03-25 03:50:30.145806 | orchestrator | TASK [aodh : Flush handlers] *************************************************** 2026-03-25 03:50:30.145812 | orchestrator | Wednesday 25 March 2026 03:49:54 +0000 (0:00:00.083) 0:01:10.800 ******* 2026-03-25 03:50:30.145819 | orchestrator | 2026-03-25 03:50:30.145826 | orchestrator | RUNNING HANDLER [aodh : Restart aodh-api container] **************************** 2026-03-25 03:50:30.145833 | orchestrator | Wednesday 25 March 2026 03:49:54 +0000 (0:00:00.306) 0:01:11.106 ******* 2026-03-25 03:50:30.145841 | orchestrator | changed: [testbed-node-1] 2026-03-25 03:50:30.145848 | orchestrator | changed: [testbed-node-0] 2026-03-25 03:50:30.145854 | orchestrator | changed: [testbed-node-2] 2026-03-25 03:50:30.145861 | orchestrator | 2026-03-25 03:50:30.145868 | orchestrator | RUNNING HANDLER [aodh : Restart aodh-evaluator container] ********************** 2026-03-25 03:50:30.145875 | orchestrator | Wednesday 25 March 2026 03:50:05 +0000 (0:00:10.761) 0:01:21.868 ******* 2026-03-25 03:50:30.145881 | orchestrator | changed: [testbed-node-1] 2026-03-25 03:50:30.145888 | orchestrator | changed: [testbed-node-2] 2026-03-25 03:50:30.145895 | orchestrator | changed: [testbed-node-0] 2026-03-25 03:50:30.145902 | orchestrator | 2026-03-25 03:50:30.145908 | orchestrator | RUNNING HANDLER [aodh : Restart aodh-listener container] *********************** 2026-03-25 03:50:30.145915 | orchestrator | Wednesday 25 March 2026 03:50:13 +0000 (0:00:08.463) 0:01:30.332 ******* 2026-03-25 03:50:30.145922 | orchestrator | changed: [testbed-node-0] 2026-03-25 
03:50:30.145928 | orchestrator | changed: [testbed-node-1] 2026-03-25 03:50:30.145935 | orchestrator | changed: [testbed-node-2] 2026-03-25 03:50:30.145942 | orchestrator | 2026-03-25 03:50:30.145949 | orchestrator | RUNNING HANDLER [aodh : Restart aodh-notifier container] *********************** 2026-03-25 03:50:30.145956 | orchestrator | Wednesday 25 March 2026 03:50:19 +0000 (0:00:05.392) 0:01:35.725 ******* 2026-03-25 03:50:30.145964 | orchestrator | changed: [testbed-node-0] 2026-03-25 03:50:30.145972 | orchestrator | changed: [testbed-node-1] 2026-03-25 03:50:30.145980 | orchestrator | changed: [testbed-node-2] 2026-03-25 03:50:30.145987 | orchestrator | 2026-03-25 03:50:30.145995 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-25 03:50:30.146004 | orchestrator | testbed-node-0 : ok=23  changed=17  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-25 03:50:30.146074 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-25 03:50:30.146090 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-25 03:50:30.146102 | orchestrator | 2026-03-25 03:50:30.146111 | orchestrator | 2026-03-25 03:50:30.146119 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-25 03:50:30.146127 | orchestrator | Wednesday 25 March 2026 03:50:29 +0000 (0:00:10.414) 0:01:46.139 ******* 2026-03-25 03:50:30.146142 | orchestrator | =============================================================================== 2026-03-25 03:50:30.146150 | orchestrator | aodh : Running aodh bootstrap container -------------------------------- 12.55s 2026-03-25 03:50:30.146158 | orchestrator | aodh : Restart aodh-api container -------------------------------------- 10.76s 2026-03-25 03:50:30.146180 | orchestrator | aodh : Restart aodh-notifier container 
--------------------------------- 10.41s
2026-03-25 03:50:30.146188 | orchestrator | aodh : Copying over aodh.conf ------------------------------------------- 9.24s
2026-03-25 03:50:30.146195 | orchestrator | aodh : Restart aodh-evaluator container --------------------------------- 8.46s
2026-03-25 03:50:30.146203 | orchestrator | service-ks-register : aodh | Creating endpoints ------------------------- 6.09s
2026-03-25 03:50:30.146211 | orchestrator | aodh : Restart aodh-listener container ---------------------------------- 5.39s
2026-03-25 03:50:30.146218 | orchestrator | service-cert-copy : aodh | Copying over extra CA certificates ----------- 4.12s
2026-03-25 03:50:30.146226 | orchestrator | aodh : Copying over config.json files for services ---------------------- 3.91s
2026-03-25 03:50:30.146232 | orchestrator | service-ks-register : aodh | Creating users ----------------------------- 3.73s
2026-03-25 03:50:30.146239 | orchestrator | service-ks-register : aodh | Granting user roles ------------------------ 3.53s
2026-03-25 03:50:30.146246 | orchestrator | service-ks-register : aodh | Creating services -------------------------- 3.38s
2026-03-25 03:50:30.146252 | orchestrator | service-ks-register : aodh | Creating projects -------------------------- 3.32s
2026-03-25 03:50:30.146259 | orchestrator | aodh : Check aodh containers -------------------------------------------- 3.29s
2026-03-25 03:50:30.146265 | orchestrator | service-ks-register : aodh | Creating roles ----------------------------- 3.06s
2026-03-25 03:50:30.146272 | orchestrator | aodh : Creating aodh database user and setting permissions -------------- 2.11s
2026-03-25 03:50:30.146279 | orchestrator | aodh : Creating aodh database ------------------------------------------- 2.02s
2026-03-25 03:50:30.146285 | orchestrator | aodh : Ensuring config directories exist -------------------------------- 1.97s
2026-03-25 03:50:30.146292 | orchestrator | aodh : Copying over wsgi-aodh files for services ------------------------ 1.80s
2026-03-25 03:50:30.146299 | orchestrator | service-cert-copy : aodh | Copying over backend internal TLS key -------- 1.19s
2026-03-25 03:50:32.897532 | orchestrator | 2026-03-25 03:50:32 | INFO  | Task ee1abdd8-3f69-4c6e-be54-2414ec82b69d (kolla-ceph-rgw) was prepared for execution.
2026-03-25 03:50:32.897635 | orchestrator | 2026-03-25 03:50:32 | INFO  | It takes a moment until task ee1abdd8-3f69-4c6e-be54-2414ec82b69d (kolla-ceph-rgw) has been started and output is visible here.
2026-03-25 03:51:12.645976 | orchestrator |
2026-03-25 03:51:12.646096 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-25 03:51:12.646104 | orchestrator |
2026-03-25 03:51:12.646110 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-25 03:51:12.646114 | orchestrator | Wednesday 25 March 2026 03:50:37 +0000 (0:00:00.313) 0:00:00.313 *******
2026-03-25 03:51:12.646119 | orchestrator | ok: [testbed-manager]
2026-03-25 03:51:12.646124 | orchestrator | ok: [testbed-node-0]
2026-03-25 03:51:12.646129 | orchestrator | ok: [testbed-node-1]
2026-03-25 03:51:12.646133 | orchestrator | ok: [testbed-node-2]
2026-03-25 03:51:12.646137 | orchestrator | ok: [testbed-node-3]
2026-03-25 03:51:12.646141 | orchestrator | ok: [testbed-node-4]
2026-03-25 03:51:12.646145 | orchestrator | ok: [testbed-node-5]
2026-03-25 03:51:12.646149 | orchestrator |
2026-03-25 03:51:12.646153 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-25 03:51:12.646157 | orchestrator | Wednesday 25 March 2026 03:50:38 +0000 (0:00:01.054) 0:00:01.367 *******
2026-03-25 03:51:12.646162 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True)
2026-03-25 03:51:12.646167 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True)
2026-03-25 03:51:12.646171 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True)
2026-03-25 03:51:12.646175 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True)
2026-03-25 03:51:12.646198 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True)
2026-03-25 03:51:12.646202 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True)
2026-03-25 03:51:12.646206 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True)
2026-03-25 03:51:12.646223 | orchestrator |
2026-03-25 03:51:12.646227 | orchestrator | PLAY [Apply role ceph-rgw] *****************************************************
2026-03-25 03:51:12.646237 | orchestrator |
2026-03-25 03:51:12.646241 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************
2026-03-25 03:51:12.646245 | orchestrator | Wednesday 25 March 2026 03:50:39 +0000 (0:00:00.914) 0:00:02.281 *******
2026-03-25 03:51:12.646249 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-25 03:51:12.646255 | orchestrator |
2026-03-25 03:51:12.646259 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] **********************
2026-03-25 03:51:12.646263 | orchestrator | Wednesday 25 March 2026 03:50:41 +0000 (0:00:01.740) 0:00:04.021 *******
2026-03-25 03:51:12.646267 | orchestrator | changed: [testbed-manager] => (item=swift (object-store))
2026-03-25 03:51:12.646271 | orchestrator |
2026-03-25 03:51:12.646275 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] *********************
2026-03-25 03:51:12.646279 | orchestrator | Wednesday 25 March 2026 03:50:45 +0000 (0:00:04.219) 0:00:08.240 *******
2026-03-25 03:51:12.646284 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal)
2026-03-25 03:51:12.646290 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public)
2026-03-25 03:51:12.646294 | orchestrator |
2026-03-25 03:51:12.646298 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] **********************
2026-03-25 03:51:12.646301 | orchestrator | Wednesday 25 March 2026 03:50:52 +0000 (0:00:06.814) 0:00:15.054 *******
2026-03-25 03:51:12.646306 | orchestrator | ok: [testbed-manager] => (item=service)
2026-03-25 03:51:12.646309 | orchestrator |
2026-03-25 03:51:12.646313 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] *************************
2026-03-25 03:51:12.646317 | orchestrator | Wednesday 25 March 2026 03:50:56 +0000 (0:00:03.458) 0:00:18.513 *******
2026-03-25 03:51:12.646321 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-03-25 03:51:12.646325 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service)
2026-03-25 03:51:12.646329 | orchestrator |
2026-03-25 03:51:12.646333 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] *************************
2026-03-25 03:51:12.646337 | orchestrator | Wednesday 25 March 2026 03:50:59 +0000 (0:00:03.956) 0:00:22.470 *******
2026-03-25 03:51:12.646341 | orchestrator | ok: [testbed-manager] => (item=admin)
2026-03-25 03:51:12.646345 | orchestrator | changed: [testbed-manager] => (item=ResellerAdmin)
2026-03-25 03:51:12.646349 | orchestrator |
2026-03-25 03:51:12.646353 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ********************
2026-03-25 03:51:12.646357 | orchestrator | Wednesday 25 March 2026 03:51:06 +0000 (0:00:06.570) 0:00:29.040 *******
2026-03-25 03:51:12.646361 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service -> admin)
2026-03-25 03:51:12.646364 | orchestrator |
2026-03-25 03:51:12.646368 | orchestrator | PLAY RECAP *********************************************************************
2026-03-25 03:51:12.646373 | orchestrator | testbed-manager : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-25 03:51:12.646377 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-25 03:51:12.646381 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-25 03:51:12.646385 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-25 03:51:12.646394 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-25 03:51:12.646409 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-25 03:51:12.646414 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-25 03:51:12.646418 | orchestrator |
2026-03-25 03:51:12.646422 | orchestrator |
2026-03-25 03:51:12.646426 | orchestrator | TASKS RECAP ********************************************************************
2026-03-25 03:51:12.646440 | orchestrator | Wednesday 25 March 2026 03:51:12 +0000 (0:00:05.499) 0:00:34.539 *******
2026-03-25 03:51:12.646444 | orchestrator | ===============================================================================
2026-03-25 03:51:12.646448 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 6.81s
2026-03-25 03:51:12.646452 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 6.57s
2026-03-25 03:51:12.646456 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 5.50s
2026-03-25 03:51:12.646460 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 4.22s
2026-03-25 03:51:12.646464 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 3.96s
2026-03-25 03:51:12.646468 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 3.46s
2026-03-25 03:51:12.646472 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 1.74s
2026-03-25 03:51:12.646476 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.05s
2026-03-25 03:51:12.646479 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.91s
2026-03-25 03:51:15.526999 | orchestrator | 2026-03-25 03:51:15 | INFO  | Task e963d4a8-a50d-45e6-b305-98fb128f5f1d (gnocchi) was prepared for execution.
2026-03-25 03:51:15.527095 | orchestrator | 2026-03-25 03:51:15 | INFO  | It takes a moment until task e963d4a8-a50d-45e6-b305-98fb128f5f1d (gnocchi) has been started and output is visible here.
2026-03-25 03:51:22.181174 | orchestrator |
2026-03-25 03:51:22.181264 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-25 03:51:22.181273 | orchestrator |
2026-03-25 03:51:22.181277 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-25 03:51:22.181281 | orchestrator | Wednesday 25 March 2026 03:51:20 +0000 (0:00:00.492) 0:00:00.492 *******
2026-03-25 03:51:22.181285 | orchestrator | ok: [testbed-node-0]
2026-03-25 03:51:22.181290 | orchestrator | ok: [testbed-node-1]
2026-03-25 03:51:22.181294 | orchestrator | ok: [testbed-node-2]
2026-03-25 03:51:22.181298 | orchestrator |
2026-03-25 03:51:22.181302 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-25 03:51:22.181306 | orchestrator | Wednesday 25 March 2026 03:51:21 +0000 (0:00:00.454) 0:00:00.947 *******
2026-03-25 03:51:22.181310 | orchestrator | ok: [testbed-node-0] => (item=enable_gnocchi_False)
2026-03-25 03:51:22.181314 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_gnocchi_True
2026-03-25 03:51:22.181319 | orchestrator | ok: [testbed-node-1] => (item=enable_gnocchi_False)
2026-03-25 03:51:22.181323 | orchestrator | ok: [testbed-node-2] => (item=enable_gnocchi_False)
2026-03-25 03:51:22.181326 | orchestrator |
2026-03-25 03:51:22.181330 | orchestrator | PLAY [Apply role gnocchi] ******************************************************
2026-03-25 03:51:22.181334 | orchestrator | skipping: no hosts matched
2026-03-25 03:51:22.181339 | orchestrator |
2026-03-25 03:51:22.181342 | orchestrator | PLAY RECAP *********************************************************************
2026-03-25 03:51:22.181346 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-25 03:51:22.181373 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-25 03:51:22.181377 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-25 03:51:22.181381 | orchestrator |
2026-03-25 03:51:22.181385 | orchestrator |
2026-03-25 03:51:22.181388 | orchestrator | TASKS RECAP ********************************************************************
2026-03-25 03:51:22.181392 | orchestrator | Wednesday 25 March 2026 03:51:21 +0000 (0:00:00.471) 0:00:01.419 *******
2026-03-25 03:51:22.181396 | orchestrator | ===============================================================================
2026-03-25 03:51:22.181399 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.47s
2026-03-25 03:51:22.181403 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.45s
2026-03-25 03:51:25.091772 | orchestrator | 2026-03-25 03:51:25 | INFO  | Task 869603d1-2132-49a2-acb6-a8b307d5b68b (manila) was prepared for execution.
2026-03-25 03:51:25.091901 | orchestrator | 2026-03-25 03:51:25 | INFO  | It takes a moment until task 869603d1-2132-49a2-acb6-a8b307d5b68b (manila) has been started and output is visible here.
2026-03-25 03:52:06.174838 | orchestrator |
2026-03-25 03:52:06.174960 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-25 03:52:06.174973 | orchestrator |
2026-03-25 03:52:06.174980 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-25 03:52:06.174987 | orchestrator | Wednesday 25 March 2026 03:51:30 +0000 (0:00:00.354) 0:00:00.354 *******
2026-03-25 03:52:06.174994 | orchestrator | ok: [testbed-node-0]
2026-03-25 03:52:06.175002 | orchestrator | ok: [testbed-node-1]
2026-03-25 03:52:06.175009 | orchestrator | ok: [testbed-node-2]
2026-03-25 03:52:06.175015 | orchestrator |
2026-03-25 03:52:06.175021 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-25 03:52:06.175028 | orchestrator | Wednesday 25 March 2026 03:51:30 +0000 (0:00:00.422) 0:00:00.777 *******
2026-03-25 03:52:06.175035 | orchestrator | ok: [testbed-node-0] => (item=enable_manila_True)
2026-03-25 03:52:06.175040 | orchestrator | ok: [testbed-node-1] => (item=enable_manila_True)
2026-03-25 03:52:06.175045 | orchestrator | ok: [testbed-node-2] => (item=enable_manila_True)
2026-03-25 03:52:06.175049 | orchestrator |
2026-03-25 03:52:06.175053 | orchestrator | PLAY [Apply role manila] *******************************************************
2026-03-25 03:52:06.175057 | orchestrator |
2026-03-25 03:52:06.175072 | orchestrator | TASK [manila : include_tasks] **************************************************
2026-03-25 03:52:06.175077 | orchestrator | Wednesday 25 March 2026 03:51:31 +0000 (0:00:00.516) 0:00:01.293 *******
2026-03-25 03:52:06.175081 | orchestrator | included: /ansible/roles/manila/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-25 03:52:06.175086 | orchestrator |
2026-03-25 03:52:06.175090 | orchestrator | TASK [manila : include_tasks] **************************************************
2026-03-25 03:52:06.175093 | orchestrator | Wednesday 25 March 2026 03:51:31 +0000 (0:00:00.665) 0:00:01.959 *******
2026-03-25 03:52:06.175097 | orchestrator | skipping: [testbed-node-0]
2026-03-25 03:52:06.175102 | orchestrator | skipping: [testbed-node-1]
2026-03-25 03:52:06.175106 | orchestrator | skipping: [testbed-node-2]
2026-03-25 03:52:06.175118 | orchestrator |
2026-03-25 03:52:06.175122 | orchestrator | TASK [service-ks-register : manila | Creating services] ************************
2026-03-25 03:52:06.175126 | orchestrator | Wednesday 25 March 2026 03:51:32 +0000 (0:00:00.623) 0:00:02.583 *******
2026-03-25 03:52:06.175130 | orchestrator | changed: [testbed-node-0] => (item=manila (share))
2026-03-25 03:52:06.175134 | orchestrator | changed: [testbed-node-0] => (item=manilav2 (sharev2))
2026-03-25 03:52:06.175137 | orchestrator |
2026-03-25 03:52:06.175141 | orchestrator | TASK [service-ks-register : manila | Creating endpoints] ***********************
2026-03-25 03:52:06.175147 | orchestrator | Wednesday 25 March 2026 03:51:38 +0000 (0:00:06.083) 0:00:08.667 *******
2026-03-25 03:52:06.175178 | orchestrator | changed: [testbed-node-0] => (item=manila -> https://api-int.testbed.osism.xyz:8786/v1/%(tenant_id)s -> internal)
2026-03-25 03:52:06.175187 | orchestrator | changed: [testbed-node-0] => (item=manila -> https://api.testbed.osism.xyz:8786/v1/%(tenant_id)s -> public)
2026-03-25 03:52:06.175194 | orchestrator | changed: [testbed-node-0] => (item=manilav2 -> https://api-int.testbed.osism.xyz:8786/v2 -> internal)
2026-03-25 03:52:06.175200 | orchestrator | changed: [testbed-node-0] => (item=manilav2 -> https://api.testbed.osism.xyz:8786/v2 -> public)
2026-03-25 03:52:06.175208 | orchestrator |
2026-03-25 03:52:06.175212 | orchestrator | TASK [service-ks-register : manila | Creating projects] ************************
2026-03-25 03:52:06.175216 | orchestrator | Wednesday 25 March 2026 03:51:50 +0000 (0:00:11.972) 0:00:20.639 *******
2026-03-25 03:52:06.175220 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-03-25 03:52:06.175224 | orchestrator |
2026-03-25 03:52:06.175227 | orchestrator | TASK [service-ks-register : manila | Creating users] ***************************
2026-03-25 03:52:06.175231 | orchestrator | Wednesday 25 March 2026 03:51:53 +0000 (0:00:03.119) 0:00:23.759 *******
2026-03-25 03:52:06.175235 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-03-25 03:52:06.175239 | orchestrator | changed: [testbed-node-0] => (item=manila -> service)
2026-03-25 03:52:06.175243 | orchestrator |
2026-03-25 03:52:06.175247 | orchestrator | TASK [service-ks-register : manila | Creating roles] ***************************
2026-03-25 03:52:06.175251 | orchestrator | Wednesday 25 March 2026 03:51:57 +0000 (0:00:03.666) 0:00:27.425 *******
2026-03-25 03:52:06.175255 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-03-25 03:52:06.175259 | orchestrator |
2026-03-25 03:52:06.175262 | orchestrator | TASK [service-ks-register : manila | Granting user roles] **********************
2026-03-25 03:52:06.175266 | orchestrator | Wednesday 25 March 2026 03:52:00 +0000 (0:00:02.988) 0:00:30.413 *******
2026-03-25 03:52:06.175270 | orchestrator | changed: [testbed-node-0] => (item=manila -> service -> admin)
2026-03-25 03:52:06.175275 | orchestrator |
2026-03-25 03:52:06.175281 | orchestrator | TASK [manila : Ensuring config directories exist] ******************************
2026-03-25 03:52:06.175287 | orchestrator | Wednesday 25 March 2026 03:52:03 +0000 (0:00:03.502) 0:00:33.916 *******
2026-03-25 03:52:06.175313 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-03-25 03:52:06.175330 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-03-25 03:52:06.175337 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-03-25 03:52:06.175351 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-03-25 03:52:06.175360 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-03-25 03:52:06.175367 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-03-25 03:52:06.175377 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-03-25 03:52:16.932733 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-03-25 03:52:16.932831 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-03-25 03:52:16.932837 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-03-25 03:52:16.932842 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-03-25 03:52:16.932846 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-03-25 03:52:16.932850 | orchestrator |
2026-03-25 03:52:16.932855 | orchestrator | TASK [manila : include_tasks] **************************************************
2026-03-25 03:52:16.932861 | orchestrator | Wednesday 25 March 2026 03:52:06 +0000 (0:00:02.314) 0:00:36.230 *******
2026-03-25 03:52:16.932865 | orchestrator | included: /ansible/roles/manila/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-25 03:52:16.932869 | orchestrator |
2026-03-25 03:52:16.932874 | orchestrator | TASK [manila : Ensuring manila service ceph config subdir exists] **************
2026-03-25 03:52:16.932877 | orchestrator | Wednesday 25 March 2026 03:52:06 +0000 (0:00:00.644) 0:00:36.874 *******
2026-03-25 03:52:16.932881 | orchestrator | changed: [testbed-node-0]
2026-03-25 03:52:16.932886 | orchestrator | changed: [testbed-node-1]
2026-03-25 03:52:16.932890 | orchestrator | changed: [testbed-node-2]
2026-03-25 03:52:16.932894 | orchestrator |
2026-03-25 03:52:16.932898 | orchestrator | TASK [manila : Copy over multiple ceph configs for Manila] *********************
2026-03-25 03:52:16.932902 | orchestrator | Wednesday 25 March 2026 03:52:08 +0000 (0:00:01.224) 0:00:38.099 *******
2026-03-25 03:52:16.932906 | orchestrator | changed: [testbed-node-1] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']})
2026-03-25 03:52:16.932921 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})
2026-03-25 03:52:16.932930 | orchestrator | changed: [testbed-node-0] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']})
2026-03-25 03:52:16.932934 | orchestrator | changed: [testbed-node-2] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']})
2026-03-25 03:52:16.932941 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})
2026-03-25 03:52:16.932945 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})
2026-03-25 03:52:16.932949 | orchestrator |
2026-03-25 03:52:16.932952 | orchestrator | TASK [manila : Copy over ceph Manila keyrings] *********************************
2026-03-25 03:52:16.932956 | orchestrator | Wednesday 25 March 2026 03:52:10 +0000 (0:00:01.990) 0:00:40.090 *******
2026-03-25 03:52:16.932960 | orchestrator | changed: [testbed-node-0] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']})
2026-03-25 03:52:16.932964 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})
2026-03-25 03:52:16.932968 | orchestrator | changed: [testbed-node-1] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']})
2026-03-25 03:52:16.932971 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})
2026-03-25 03:52:16.932975 | orchestrator | changed: [testbed-node-2] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']})
2026-03-25 03:52:16.932979 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})
2026-03-25 03:52:16.932983 | orchestrator |
2026-03-25 03:52:16.932986 | orchestrator | TASK [manila : Ensuring config directory has correct owner and permission] *****
2026-03-25 03:52:16.932990 | orchestrator | Wednesday 25 March 2026 03:52:11 +0000 (0:00:00.658) 0:00:41.284 *******
2026-03-25 03:52:16.932995 | orchestrator | ok: [testbed-node-0] => (item=manila-share)
2026-03-25 03:52:16.932999 | orchestrator | ok: [testbed-node-1] => (item=manila-share)
2026-03-25 03:52:16.933003 | orchestrator | ok: [testbed-node-2] => (item=manila-share)
2026-03-25 03:52:16.933006 | orchestrator |
2026-03-25 03:52:16.933010 | orchestrator | TASK [manila : Check if policies shall be overwritten] *************************
2026-03-25 03:52:16.933014 | orchestrator | Wednesday 25 March 2026 03:52:11 +0000 (0:00:00.136) 0:00:41.942 *******
2026-03-25 03:52:16.933018 | orchestrator | skipping: [testbed-node-0]
2026-03-25 03:52:16.933022 | orchestrator |
2026-03-25 03:52:16.933025 | orchestrator | TASK [manila : Set manila policy file] *****************************************
2026-03-25 03:52:16.933029 | orchestrator | Wednesday 25 March 2026 03:52:12 +0000 (0:00:00.451) 0:00:42.079 *******
2026-03-25 03:52:16.933033 | orchestrator | skipping: [testbed-node-0]
2026-03-25 03:52:16.933037 | orchestrator | skipping: [testbed-node-1]
2026-03-25 03:52:16.933040 | orchestrator | skipping: [testbed-node-2]
2026-03-25 03:52:16.933044 | orchestrator |
2026-03-25 03:52:16.933048 | orchestrator | TASK [manila : include_tasks] **************************************************
2026-03-25 03:52:16.933052 | orchestrator | Wednesday 25 March 2026 03:52:12 +0000 (0:00:00.451) 0:00:42.531 *******
2026-03-25 03:52:16.933055 | orchestrator | included: /ansible/roles/manila/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-25 03:52:16.933063 | orchestrator |
2026-03-25 03:52:16.933067 | orchestrator | TASK [service-cert-copy : manila | Copying over extra CA certificates] *********
2026-03-25 03:52:16.933070 | orchestrator | Wednesday 25 March 2026 03:52:13 +0000 (0:00:00.604) 0:00:43.135 *******
2026-03-25 03:52:16.933078 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-03-25 03:52:18.015457 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-03-25 03:52:18.015530 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-03-25 03:52:18.015537 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-03-25 03:52:18.015543 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-03-25 03:52:18.015604 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-03-25 03:52:18.015629 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-03-25 03:52:18.015639 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-03-25 03:52:18.015644 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-03-25 03:52:18.015649 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''],
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-03-25 03:52:18.015653 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-03-25 03:52:18.015661 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-03-25 03:52:18.015665 | orchestrator | 2026-03-25 03:52:18.015670 | orchestrator | TASK [service-cert-copy : manila | Copying over backend internal TLS certificate] *** 2026-03-25 03:52:18.015675 | orchestrator | Wednesday 25 March 2026 03:52:17 +0000 (0:00:03.887) 0:00:47.022 ******* 2026-03-25 03:52:18.015683 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 
'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-03-25 03:52:18.832041 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-25 03:52:18.832134 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-25 03:52:18.832142 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-25 03:52:18.832147 | orchestrator | skipping: [testbed-node-0] 2026-03-25 03:52:18.832153 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-03-25 03:52:18.832178 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-25 03:52:18.832182 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-25 03:52:18.832203 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-25 03:52:18.832208 | orchestrator | skipping: [testbed-node-1] 2026-03-25 03:52:18.832212 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 
'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-03-25 03:52:18.832216 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-25 03:52:18.832224 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-25 03:52:18.832228 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-25 03:52:18.832232 | orchestrator | skipping: [testbed-node-2] 2026-03-25 03:52:18.832236 | orchestrator | 2026-03-25 03:52:18.832241 | orchestrator | TASK [service-cert-copy : manila | Copying over backend internal TLS key] ****** 2026-03-25 03:52:18.832247 | orchestrator | Wednesday 25 March 2026 03:52:18 +0000 (0:00:01.090) 0:00:48.113 ******* 2026-03-25 03:52:18.832258 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-03-25 03:52:23.515476 
| orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-25 03:52:23.515628 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-25 03:52:23.515643 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-25 03:52:23.515671 | 
orchestrator | skipping: [testbed-node-0] 2026-03-25 03:52:23.515680 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-03-25 03:52:23.515690 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-25 03:52:23.515718 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-25 03:52:23.515745 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-25 03:52:23.515754 | orchestrator | skipping: [testbed-node-1] 2026-03-25 03:52:23.515763 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-03-25 03:52:23.515779 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-25 03:52:23.515788 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-25 03:52:23.515796 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-25 03:52:23.515805 | orchestrator | skipping: [testbed-node-2] 2026-03-25 
03:52:23.515815 | orchestrator | 2026-03-25 03:52:23.515824 | orchestrator | TASK [manila : Copying over config.json files for services] ******************** 2026-03-25 03:52:23.515834 | orchestrator | Wednesday 25 March 2026 03:52:19 +0000 (0:00:01.199) 0:00:49.312 ******* 2026-03-25 03:52:23.515856 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-25 03:52:31.120683 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-25 03:52:31.120784 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-25 03:52:31.120792 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-03-25 03:52:31.120798 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-03-25 03:52:31.120812 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-03-25 03:52:31.120828 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-03-25 03:52:31.120834 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-03-25 03:52:31.120843 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-03-25 03:52:31.120848 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-03-25 03:52:31.120852 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-03-25 03:52:31.120856 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-03-25 03:52:31.120860 | orchestrator | 2026-03-25 03:52:31.120865 | orchestrator | TASK [manila : Copying over manila.conf] *************************************** 2026-03-25 03:52:31.120871 | orchestrator | Wednesday 25 March 2026 03:52:23 +0000 (0:00:04.501) 0:00:53.814 ******* 2026-03-25 03:52:31.120882 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-25 03:52:35.930844 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-25 03:52:35.930926 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-25 03:52:35.930934 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-03-25 03:52:35.930941 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-25 03:52:35.930959 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-03-25 03:52:35.930975 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': 
{'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-25 03:52:35.930995 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-03-25 03:52:35.930999 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-25 03:52:35.931003 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-03-25 03:52:35.931007 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-03-25 03:52:35.931011 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-03-25 03:52:35.931015 | orchestrator | 2026-03-25 03:52:35.931023 | orchestrator | TASK [manila : 
Copying over manila-share.conf] ********************************* 2026-03-25 03:52:35.931028 | orchestrator | Wednesday 25 March 2026 03:52:31 +0000 (0:00:07.368) 0:01:01.182 ******* 2026-03-25 03:52:35.931033 | orchestrator | changed: [testbed-node-0] => (item=manila-share) 2026-03-25 03:52:35.931042 | orchestrator | changed: [testbed-node-1] => (item=manila-share) 2026-03-25 03:52:35.931046 | orchestrator | changed: [testbed-node-2] => (item=manila-share) 2026-03-25 03:52:35.931049 | orchestrator | 2026-03-25 03:52:35.931054 | orchestrator | TASK [manila : Copying over existing policy file] ****************************** 2026-03-25 03:52:35.931057 | orchestrator | Wednesday 25 March 2026 03:52:35 +0000 (0:00:04.010) 0:01:05.193 ******* 2026-03-25 03:52:35.931069 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-03-25 03:52:39.326382 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-25 03:52:39.326466 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-25 03:52:39.326474 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-25 03:52:39.326479 | orchestrator | skipping: [testbed-node-0] 2026-03-25 03:52:39.326496 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-03-25 03:52:39.326516 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-25 03:52:39.326520 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-25 03:52:39.326536 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-25 03:52:39.326590 | orchestrator | skipping: [testbed-node-1] 2026-03-25 03:52:39.326597 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-03-25 03:52:39.326603 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-25 03:52:39.326610 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-25 03:52:39.326626 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-25 03:52:39.326630 | orchestrator | skipping: [testbed-node-2] 2026-03-25 03:52:39.326634 | orchestrator | 2026-03-25 03:52:39.326639 | orchestrator | TASK [manila : Check manila containers] **************************************** 2026-03-25 03:52:39.326644 | orchestrator | Wednesday 25 March 2026 03:52:36 +0000 (0:00:00.802) 0:01:05.995 ******* 2026-03-25 03:52:39.326653 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-25 03:53:18.749232 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-25 03:53:18.749291 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 
'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-25 03:53:18.749298 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-03-25 03:53:18.749323 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-03-25 03:53:18.749327 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-03-25 03:53:18.749340 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-03-25 03:53:18.749345 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-03-25 03:53:18.749349 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-03-25 03:53:18.749356 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-03-25 03:53:18.749370 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-03-25 03:53:18.749377 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-03-25 03:53:18.749383 | orchestrator | 2026-03-25 03:53:18.749390 | orchestrator | TASK [manila : Creating Manila database] *************************************** 2026-03-25 03:53:18.749397 | orchestrator | Wednesday 25 March 2026 03:52:39 +0000 (0:00:03.407) 0:01:09.403 ******* 2026-03-25 03:53:18.749403 | orchestrator | changed: [testbed-node-0] 2026-03-25 03:53:18.749410 | orchestrator | 2026-03-25 03:53:18.749415 | orchestrator | TASK [manila : Creating Manila database user and setting permissions] ********** 2026-03-25 03:53:18.749421 | orchestrator | Wednesday 25 March 2026 03:52:41 +0000 (0:00:02.044) 0:01:11.447 ******* 2026-03-25 03:53:18.749427 | orchestrator | changed: [testbed-node-0] 2026-03-25 03:53:18.749433 | orchestrator | 2026-03-25 03:53:18.749440 | orchestrator | TASK [manila : Running Manila bootstrap container] ***************************** 2026-03-25 03:53:18.749445 | orchestrator | Wednesday 25 March 2026 03:52:43 +0000 (0:00:02.181) 0:01:13.629 ******* 2026-03-25 03:53:18.749451 | orchestrator | changed: [testbed-node-0] 2026-03-25 03:53:18.749457 | orchestrator | 2026-03-25 03:53:18.749462 | orchestrator | TASK [manila : Flush handlers] ************************************************* 2026-03-25 03:53:18.749468 | orchestrator | Wednesday 25 March 2026 03:53:18 +0000 (0:00:34.803) 0:01:48.432 ******* 2026-03-25 03:53:18.749475 | 
orchestrator | 2026-03-25 03:53:18.749486 | orchestrator | TASK [manila : Flush handlers] ************************************************* 2026-03-25 03:54:04.483159 | orchestrator | Wednesday 25 March 2026 03:53:18 +0000 (0:00:00.090) 0:01:48.522 ******* 2026-03-25 03:54:04.483248 | orchestrator | 2026-03-25 03:54:04.483254 | orchestrator | TASK [manila : Flush handlers] ************************************************* 2026-03-25 03:54:04.483259 | orchestrator | Wednesday 25 March 2026 03:53:18 +0000 (0:00:00.082) 0:01:48.605 ******* 2026-03-25 03:54:04.483263 | orchestrator | 2026-03-25 03:54:04.483268 | orchestrator | RUNNING HANDLER [manila : Restart manila-api container] ************************ 2026-03-25 03:54:04.483272 | orchestrator | Wednesday 25 March 2026 03:53:18 +0000 (0:00:00.097) 0:01:48.703 ******* 2026-03-25 03:54:04.483276 | orchestrator | changed: [testbed-node-0] 2026-03-25 03:54:04.483281 | orchestrator | changed: [testbed-node-2] 2026-03-25 03:54:04.483285 | orchestrator | changed: [testbed-node-1] 2026-03-25 03:54:04.483288 | orchestrator | 2026-03-25 03:54:04.483292 | orchestrator | RUNNING HANDLER [manila : Restart manila-data container] *********************** 2026-03-25 03:54:04.483297 | orchestrator | Wednesday 25 March 2026 03:53:28 +0000 (0:00:10.197) 0:01:58.901 ******* 2026-03-25 03:54:04.483322 | orchestrator | changed: [testbed-node-2] 2026-03-25 03:54:04.483326 | orchestrator | changed: [testbed-node-0] 2026-03-25 03:54:04.483330 | orchestrator | changed: [testbed-node-1] 2026-03-25 03:54:04.483333 | orchestrator | 2026-03-25 03:54:04.483337 | orchestrator | RUNNING HANDLER [manila : Restart manila-scheduler container] ****************** 2026-03-25 03:54:04.483341 | orchestrator | Wednesday 25 March 2026 03:53:40 +0000 (0:00:11.694) 0:02:10.595 ******* 2026-03-25 03:54:04.483345 | orchestrator | changed: [testbed-node-0] 2026-03-25 03:54:04.483349 | orchestrator | changed: [testbed-node-2] 2026-03-25 03:54:04.483353 | 
orchestrator | changed: [testbed-node-1] 2026-03-25 03:54:04.483357 | orchestrator | 2026-03-25 03:54:04.483361 | orchestrator | RUNNING HANDLER [manila : Restart manila-share container] ********************** 2026-03-25 03:54:04.483365 | orchestrator | Wednesday 25 March 2026 03:53:46 +0000 (0:00:05.512) 0:02:16.107 ******* 2026-03-25 03:54:04.483368 | orchestrator | changed: [testbed-node-0] 2026-03-25 03:54:04.483372 | orchestrator | changed: [testbed-node-1] 2026-03-25 03:54:04.483376 | orchestrator | changed: [testbed-node-2] 2026-03-25 03:54:04.483380 | orchestrator | 2026-03-25 03:54:04.483384 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-25 03:54:04.483389 | orchestrator | testbed-node-0 : ok=28  changed=20  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-25 03:54:04.483395 | orchestrator | testbed-node-1 : ok=19  changed=13  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-25 03:54:04.483398 | orchestrator | testbed-node-2 : ok=19  changed=13  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-25 03:54:04.483402 | orchestrator | 2026-03-25 03:54:04.483406 | orchestrator | 2026-03-25 03:54:04.483410 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-25 03:54:04.483414 | orchestrator | Wednesday 25 March 2026 03:54:03 +0000 (0:00:17.793) 0:02:33.901 ******* 2026-03-25 03:54:04.483417 | orchestrator | =============================================================================== 2026-03-25 03:54:04.483421 | orchestrator | manila : Running Manila bootstrap container ---------------------------- 34.80s 2026-03-25 03:54:04.483425 | orchestrator | manila : Restart manila-share container -------------------------------- 17.79s 2026-03-25 03:54:04.483439 | orchestrator | service-ks-register : manila | Creating endpoints ---------------------- 11.97s 2026-03-25 03:54:04.483443 | orchestrator | manila : Restart 
manila-data container --------------------------------- 11.69s 2026-03-25 03:54:04.483446 | orchestrator | manila : Restart manila-api container ---------------------------------- 10.20s 2026-03-25 03:54:04.483450 | orchestrator | manila : Copying over manila.conf --------------------------------------- 7.37s 2026-03-25 03:54:04.483454 | orchestrator | service-ks-register : manila | Creating services ------------------------ 6.08s 2026-03-25 03:54:04.483457 | orchestrator | manila : Restart manila-scheduler container ----------------------------- 5.51s 2026-03-25 03:54:04.483461 | orchestrator | manila : Copying over config.json files for services -------------------- 4.50s 2026-03-25 03:54:04.483465 | orchestrator | manila : Copying over manila-share.conf --------------------------------- 4.01s 2026-03-25 03:54:04.483469 | orchestrator | service-cert-copy : manila | Copying over extra CA certificates --------- 3.89s 2026-03-25 03:54:04.483472 | orchestrator | service-ks-register : manila | Creating users --------------------------- 3.67s 2026-03-25 03:54:04.483476 | orchestrator | service-ks-register : manila | Granting user roles ---------------------- 3.50s 2026-03-25 03:54:04.483480 | orchestrator | manila : Check manila containers ---------------------------------------- 3.41s 2026-03-25 03:54:04.483484 | orchestrator | service-ks-register : manila | Creating projects ------------------------ 3.12s 2026-03-25 03:54:04.483488 | orchestrator | service-ks-register : manila | Creating roles --------------------------- 2.99s 2026-03-25 03:54:04.483492 | orchestrator | manila : Ensuring config directories exist ------------------------------ 2.31s 2026-03-25 03:54:04.483495 | orchestrator | manila : Creating Manila database user and setting permissions ---------- 2.18s 2026-03-25 03:54:04.483522 | orchestrator | manila : Creating Manila database --------------------------------------- 2.04s 2026-03-25 03:54:04.483527 | orchestrator | manila : Copy over multiple ceph 
configs for Manila --------------------- 1.99s 2026-03-25 03:54:04.906368 | orchestrator | + sh -c /opt/configuration/scripts/deploy/400-monitoring.sh 2026-03-25 03:54:17.427076 | orchestrator | 2026-03-25 03:54:17 | INFO  | Task e33ea4c3-8eb2-4050-bfba-8929f27d36cb (netdata) was prepared for execution. 2026-03-25 03:54:17.427177 | orchestrator | 2026-03-25 03:54:17 | INFO  | It takes a moment until task e33ea4c3-8eb2-4050-bfba-8929f27d36cb (netdata) has been started and output is visible here. 2026-03-25 03:55:57.446876 | orchestrator | 2026-03-25 03:55:57.446988 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-25 03:55:57.447000 | orchestrator | 2026-03-25 03:55:57.447007 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-25 03:55:57.447015 | orchestrator | Wednesday 25 March 2026 03:54:22 +0000 (0:00:00.263) 0:00:00.263 ******* 2026-03-25 03:55:57.447022 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True) 2026-03-25 03:55:57.447030 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True) 2026-03-25 03:55:57.447036 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True) 2026-03-25 03:55:57.447042 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True) 2026-03-25 03:55:57.447048 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True) 2026-03-25 03:55:57.447054 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True) 2026-03-25 03:55:57.447060 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True) 2026-03-25 03:55:57.447067 | orchestrator | 2026-03-25 03:55:57.447073 | orchestrator | PLAY [Apply role netdata] ****************************************************** 2026-03-25 03:55:57.447079 | orchestrator | 2026-03-25 03:55:57.447085 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] **** 
2026-03-25 03:55:57.447091 | orchestrator | Wednesday 25 March 2026 03:54:23 +0000 (0:00:01.000) 0:00:01.263 ******* 2026-03-25 03:55:57.447108 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-25 03:55:57.447115 | orchestrator | 2026-03-25 03:55:57.447121 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] *** 2026-03-25 03:55:57.447127 | orchestrator | Wednesday 25 March 2026 03:54:25 +0000 (0:00:01.647) 0:00:02.911 ******* 2026-03-25 03:55:57.447134 | orchestrator | ok: [testbed-node-1] 2026-03-25 03:55:57.447142 | orchestrator | ok: [testbed-node-0] 2026-03-25 03:55:57.447148 | orchestrator | ok: [testbed-manager] 2026-03-25 03:55:57.447153 | orchestrator | ok: [testbed-node-2] 2026-03-25 03:55:57.447159 | orchestrator | ok: [testbed-node-3] 2026-03-25 03:55:57.447165 | orchestrator | ok: [testbed-node-4] 2026-03-25 03:55:57.447171 | orchestrator | ok: [testbed-node-5] 2026-03-25 03:55:57.447178 | orchestrator | 2026-03-25 03:55:57.447184 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************ 2026-03-25 03:55:57.447191 | orchestrator | Wednesday 25 March 2026 03:54:27 +0000 (0:00:02.014) 0:00:04.926 ******* 2026-03-25 03:55:57.447197 | orchestrator | ok: [testbed-node-0] 2026-03-25 03:55:57.447204 | orchestrator | ok: [testbed-node-1] 2026-03-25 03:55:57.447210 | orchestrator | ok: [testbed-node-2] 2026-03-25 03:55:57.447215 | orchestrator | ok: [testbed-node-3] 2026-03-25 03:55:57.447221 | orchestrator | ok: [testbed-node-4] 2026-03-25 03:55:57.447227 | orchestrator | ok: [testbed-node-5] 2026-03-25 03:55:57.447233 | orchestrator | ok: [testbed-manager] 2026-03-25 03:55:57.447239 | orchestrator | 2026-03-25 03:55:57.447245 | orchestrator | TASK [osism.services.netdata 
: Add repository gpg key] ************************* 2026-03-25 03:55:57.447252 | orchestrator | Wednesday 25 March 2026 03:54:29 +0000 (0:00:02.361) 0:00:07.288 ******* 2026-03-25 03:55:57.447258 | orchestrator | changed: [testbed-node-0] 2026-03-25 03:55:57.447289 | orchestrator | changed: [testbed-manager] 2026-03-25 03:55:57.447296 | orchestrator | changed: [testbed-node-1] 2026-03-25 03:55:57.447302 | orchestrator | changed: [testbed-node-2] 2026-03-25 03:55:57.447308 | orchestrator | changed: [testbed-node-3] 2026-03-25 03:55:57.447328 | orchestrator | changed: [testbed-node-4] 2026-03-25 03:55:57.447335 | orchestrator | changed: [testbed-node-5] 2026-03-25 03:55:57.447340 | orchestrator | 2026-03-25 03:55:57.447434 | orchestrator | TASK [osism.services.netdata : Add repository] ********************************* 2026-03-25 03:55:57.447443 | orchestrator | Wednesday 25 March 2026 03:54:31 +0000 (0:00:01.606) 0:00:08.895 ******* 2026-03-25 03:55:57.447449 | orchestrator | changed: [testbed-node-3] 2026-03-25 03:55:57.447455 | orchestrator | changed: [testbed-node-4] 2026-03-25 03:55:57.447461 | orchestrator | changed: [testbed-node-5] 2026-03-25 03:55:57.447467 | orchestrator | changed: [testbed-manager] 2026-03-25 03:55:57.447474 | orchestrator | changed: [testbed-node-1] 2026-03-25 03:55:57.447481 | orchestrator | changed: [testbed-node-0] 2026-03-25 03:55:57.447486 | orchestrator | changed: [testbed-node-2] 2026-03-25 03:55:57.447493 | orchestrator | 2026-03-25 03:55:57.447500 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************ 2026-03-25 03:55:57.447507 | orchestrator | Wednesday 25 March 2026 03:54:49 +0000 (0:00:17.941) 0:00:26.836 ******* 2026-03-25 03:55:57.447514 | orchestrator | changed: [testbed-node-3] 2026-03-25 03:55:57.447521 | orchestrator | changed: [testbed-node-5] 2026-03-25 03:55:57.447528 | orchestrator | changed: [testbed-node-4] 2026-03-25 03:55:57.447535 | orchestrator | changed: 
[testbed-manager] 2026-03-25 03:55:57.447542 | orchestrator | changed: [testbed-node-1] 2026-03-25 03:55:57.447549 | orchestrator | changed: [testbed-node-0] 2026-03-25 03:55:57.447556 | orchestrator | changed: [testbed-node-2] 2026-03-25 03:55:57.447563 | orchestrator | 2026-03-25 03:55:57.447570 | orchestrator | TASK [osism.services.netdata : Include config tasks] *************************** 2026-03-25 03:55:57.447577 | orchestrator | Wednesday 25 March 2026 03:55:29 +0000 (0:00:40.378) 0:01:07.214 ******* 2026-03-25 03:55:57.447585 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-25 03:55:57.447593 | orchestrator | 2026-03-25 03:55:57.447601 | orchestrator | TASK [osism.services.netdata : Copy configuration files] *********************** 2026-03-25 03:55:57.447608 | orchestrator | Wednesday 25 March 2026 03:55:31 +0000 (0:00:01.783) 0:01:08.997 ******* 2026-03-25 03:55:57.447615 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf) 2026-03-25 03:55:57.447635 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf) 2026-03-25 03:55:57.447642 | orchestrator | changed: [testbed-manager] => (item=netdata.conf) 2026-03-25 03:55:57.447649 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf) 2026-03-25 03:55:57.447678 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf) 2026-03-25 03:55:57.447685 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf) 2026-03-25 03:55:57.447691 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf) 2026-03-25 03:55:57.447698 | orchestrator | changed: [testbed-node-1] => (item=stream.conf) 2026-03-25 03:55:57.447704 | orchestrator | changed: [testbed-node-0] => (item=stream.conf) 2026-03-25 03:55:57.447711 | orchestrator | changed: [testbed-manager] => (item=stream.conf) 
2026-03-25 03:55:57.447718 | orchestrator | changed: [testbed-node-2] => (item=stream.conf) 2026-03-25 03:55:57.447725 | orchestrator | changed: [testbed-node-3] => (item=stream.conf) 2026-03-25 03:55:57.447731 | orchestrator | changed: [testbed-node-4] => (item=stream.conf) 2026-03-25 03:55:57.447737 | orchestrator | changed: [testbed-node-5] => (item=stream.conf) 2026-03-25 03:55:57.447744 | orchestrator | 2026-03-25 03:55:57.447752 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] *** 2026-03-25 03:55:57.447760 | orchestrator | Wednesday 25 March 2026 03:55:35 +0000 (0:00:03.919) 0:01:12.917 ******* 2026-03-25 03:55:57.447779 | orchestrator | ok: [testbed-manager] 2026-03-25 03:55:57.447785 | orchestrator | ok: [testbed-node-0] 2026-03-25 03:55:57.447791 | orchestrator | ok: [testbed-node-1] 2026-03-25 03:55:57.447798 | orchestrator | ok: [testbed-node-2] 2026-03-25 03:55:57.447804 | orchestrator | ok: [testbed-node-3] 2026-03-25 03:55:57.447810 | orchestrator | ok: [testbed-node-4] 2026-03-25 03:55:57.447817 | orchestrator | ok: [testbed-node-5] 2026-03-25 03:55:57.447822 | orchestrator | 2026-03-25 03:55:57.447829 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] ************** 2026-03-25 03:55:57.447835 | orchestrator | Wednesday 25 March 2026 03:55:36 +0000 (0:00:01.387) 0:01:14.304 ******* 2026-03-25 03:55:57.447841 | orchestrator | changed: [testbed-node-0] 2026-03-25 03:55:57.447848 | orchestrator | changed: [testbed-node-1] 2026-03-25 03:55:57.447854 | orchestrator | changed: [testbed-manager] 2026-03-25 03:55:57.447860 | orchestrator | changed: [testbed-node-2] 2026-03-25 03:55:57.447866 | orchestrator | changed: [testbed-node-3] 2026-03-25 03:55:57.447873 | orchestrator | changed: [testbed-node-4] 2026-03-25 03:55:57.447879 | orchestrator | changed: [testbed-node-5] 2026-03-25 03:55:57.447886 | orchestrator | 2026-03-25 03:55:57.447892 | orchestrator | TASK 
[osism.services.netdata : Add netdata user to docker group] *************** 2026-03-25 03:55:57.447899 | orchestrator | Wednesday 25 March 2026 03:55:38 +0000 (0:00:01.432) 0:01:15.737 ******* 2026-03-25 03:55:57.447905 | orchestrator | ok: [testbed-node-0] 2026-03-25 03:55:57.447911 | orchestrator | ok: [testbed-node-1] 2026-03-25 03:55:57.447917 | orchestrator | ok: [testbed-manager] 2026-03-25 03:55:57.447923 | orchestrator | ok: [testbed-node-2] 2026-03-25 03:55:57.447929 | orchestrator | ok: [testbed-node-3] 2026-03-25 03:55:57.447935 | orchestrator | ok: [testbed-node-4] 2026-03-25 03:55:57.447942 | orchestrator | ok: [testbed-node-5] 2026-03-25 03:55:57.447948 | orchestrator | 2026-03-25 03:55:57.447954 | orchestrator | TASK [osism.services.netdata : Manage service netdata] ************************* 2026-03-25 03:55:57.447961 | orchestrator | Wednesday 25 March 2026 03:55:39 +0000 (0:00:01.260) 0:01:16.998 ******* 2026-03-25 03:55:57.447967 | orchestrator | ok: [testbed-node-1] 2026-03-25 03:55:57.447973 | orchestrator | ok: [testbed-node-2] 2026-03-25 03:55:57.447979 | orchestrator | ok: [testbed-node-0] 2026-03-25 03:55:57.447985 | orchestrator | ok: [testbed-manager] 2026-03-25 03:55:57.447992 | orchestrator | ok: [testbed-node-3] 2026-03-25 03:55:57.448008 | orchestrator | ok: [testbed-node-4] 2026-03-25 03:55:57.448015 | orchestrator | ok: [testbed-node-5] 2026-03-25 03:55:57.448021 | orchestrator | 2026-03-25 03:55:57.448028 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] *************** 2026-03-25 03:55:57.448034 | orchestrator | Wednesday 25 March 2026 03:55:42 +0000 (0:00:02.791) 0:01:19.789 ******* 2026-03-25 03:55:57.448049 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager 2026-03-25 03:55:57.448058 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml 
for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-25 03:55:57.448066 | orchestrator | 2026-03-25 03:55:57.448072 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] ********** 2026-03-25 03:55:57.448079 | orchestrator | Wednesday 25 March 2026 03:55:43 +0000 (0:00:01.573) 0:01:21.363 ******* 2026-03-25 03:55:57.448085 | orchestrator | changed: [testbed-manager] 2026-03-25 03:55:57.448091 | orchestrator | 2026-03-25 03:55:57.448097 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] ************* 2026-03-25 03:55:57.448103 | orchestrator | Wednesday 25 March 2026 03:55:46 +0000 (0:00:02.415) 0:01:23.778 ******* 2026-03-25 03:55:57.448110 | orchestrator | changed: [testbed-node-0] 2026-03-25 03:55:57.448116 | orchestrator | changed: [testbed-node-1] 2026-03-25 03:55:57.448122 | orchestrator | changed: [testbed-node-2] 2026-03-25 03:55:57.448128 | orchestrator | changed: [testbed-node-3] 2026-03-25 03:55:57.448135 | orchestrator | changed: [testbed-node-4] 2026-03-25 03:55:57.448148 | orchestrator | changed: [testbed-node-5] 2026-03-25 03:55:57.448154 | orchestrator | changed: [testbed-manager] 2026-03-25 03:55:57.448160 | orchestrator | 2026-03-25 03:55:57.448167 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-25 03:55:57.448173 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-25 03:55:57.448181 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-25 03:55:57.448187 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-25 03:55:57.448194 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-25 03:55:57.448206 | orchestrator | testbed-node-3 : ok=15  
changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-25 03:55:58.035701 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-25 03:55:58.035774 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-25 03:55:58.035780 | orchestrator | 2026-03-25 03:55:58.035784 | orchestrator | 2026-03-25 03:55:58.035789 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-25 03:55:58.035795 | orchestrator | Wednesday 25 March 2026 03:55:57 +0000 (0:00:11.262) 0:01:35.040 ******* 2026-03-25 03:55:58.035798 | orchestrator | =============================================================================== 2026-03-25 03:55:58.035803 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 40.38s 2026-03-25 03:55:58.035807 | orchestrator | osism.services.netdata : Add repository -------------------------------- 17.94s 2026-03-25 03:55:58.035810 | orchestrator | osism.services.netdata : Restart service netdata ----------------------- 11.26s 2026-03-25 03:55:58.035814 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 3.92s 2026-03-25 03:55:58.035818 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 2.79s 2026-03-25 03:55:58.035821 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 2.42s 2026-03-25 03:55:58.035825 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 2.36s 2026-03-25 03:55:58.035830 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 2.01s 2026-03-25 03:55:58.035835 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.78s 2026-03-25 03:55:58.035841 | orchestrator | osism.services.netdata : Include distribution specific 
install tasks ---- 1.65s 2026-03-25 03:55:58.035846 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 1.61s 2026-03-25 03:55:58.035852 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.57s 2026-03-25 03:55:58.035858 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 1.43s 2026-03-25 03:55:58.035864 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.39s 2026-03-25 03:55:58.035870 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 1.26s 2026-03-25 03:55:58.035876 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.00s 2026-03-25 03:56:03.092586 | orchestrator | 2026-03-25 03:56:03 | INFO  | Task 03bbb7f4-c4db-48f6-9367-e9c951d57bed (prometheus) was prepared for execution. 2026-03-25 03:56:03.092673 | orchestrator | 2026-03-25 03:56:03 | INFO  | It takes a moment until task 03bbb7f4-c4db-48f6-9367-e9c951d57bed (prometheus) has been started and output is visible here. 
2026-03-25 03:56:14.339593 | orchestrator | 2026-03-25 03:56:14.339699 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-25 03:56:14.339707 | orchestrator | 2026-03-25 03:56:14.339722 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-25 03:56:14.339726 | orchestrator | Wednesday 25 March 2026 03:56:08 +0000 (0:00:00.344) 0:00:00.344 ******* 2026-03-25 03:56:14.339731 | orchestrator | ok: [testbed-manager] 2026-03-25 03:56:14.339736 | orchestrator | ok: [testbed-node-0] 2026-03-25 03:56:14.339739 | orchestrator | ok: [testbed-node-1] 2026-03-25 03:56:14.339743 | orchestrator | ok: [testbed-node-2] 2026-03-25 03:56:14.339747 | orchestrator | ok: [testbed-node-3] 2026-03-25 03:56:14.339751 | orchestrator | ok: [testbed-node-4] 2026-03-25 03:56:14.339755 | orchestrator | ok: [testbed-node-5] 2026-03-25 03:56:14.339758 | orchestrator | 2026-03-25 03:56:14.339762 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-25 03:56:14.339766 | orchestrator | Wednesday 25 March 2026 03:56:09 +0000 (0:00:01.038) 0:00:01.383 ******* 2026-03-25 03:56:14.339770 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True) 2026-03-25 03:56:14.339775 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True) 2026-03-25 03:56:14.339778 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True) 2026-03-25 03:56:14.339782 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True) 2026-03-25 03:56:14.339786 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True) 2026-03-25 03:56:14.339789 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True) 2026-03-25 03:56:14.339793 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True) 2026-03-25 03:56:14.339797 | orchestrator | 2026-03-25 03:56:14.339800 | orchestrator | PLAY [Apply role 
prometheus] *************************************************** 2026-03-25 03:56:14.339804 | orchestrator | 2026-03-25 03:56:14.339808 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2026-03-25 03:56:14.339812 | orchestrator | Wednesday 25 March 2026 03:56:10 +0000 (0:00:01.156) 0:00:02.540 ******* 2026-03-25 03:56:14.339816 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-25 03:56:14.339821 | orchestrator | 2026-03-25 03:56:14.339825 | orchestrator | TASK [prometheus : Ensuring config directories exist] ************************** 2026-03-25 03:56:14.339829 | orchestrator | Wednesday 25 March 2026 03:56:12 +0000 (0:00:01.649) 0:00:04.190 ******* 2026-03-25 03:56:14.339835 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-25 03:56:14.339842 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-25 03:56:14.339848 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-25 03:56:14.339857 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 03:56:14.339879 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 03:56:14.339883 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 
'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-25 03:56:14.339888 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-25 03:56:14.339892 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-25 03:56:14.339896 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-25 03:56:14.339901 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 03:56:14.339912 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 03:56:14.339922 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-25 03:56:15.232950 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 
'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 03:56:15.233047 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-25 03:56:15.233059 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-25 03:56:15.233067 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-25 03:56:15.233073 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-25 03:56:15.233101 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-25 03:56:15.233111 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 03:56:15.233142 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 
'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-25 03:56:15.233151 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-03-25 03:56:15.233159 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-03-25 03:56:15.233166 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-25 03:56:15.233173 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-25 03:56:15.233186 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 03:56:15.233194 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 03:56:15.233210 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 03:56:21.166706 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-03-25 03:56:21.166827 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 03:56:21.166835 | orchestrator |
2026-03-25 03:56:21.166843 | orchestrator | TASK [prometheus : include_tasks] **********************************************
2026-03-25 03:56:21.166849 | orchestrator | Wednesday 25 March 2026 03:56:15 +0000 (0:00:03.033) 0:00:07.223 *******
2026-03-25 03:56:21.166855 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-25 03:56:21.166861 | orchestrator |
2026-03-25 03:56:21.166865 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] *****
2026-03-25 03:56:21.166868 | orchestrator | Wednesday 25 March 2026 03:56:17 +0000 (0:00:01.962) 0:00:09.186 *******
2026-03-25 03:56:21.166872 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-25 03:56:21.166901 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-25 03:56:21.166906 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-25 03:56:21.166934 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-03-25 03:56:21.166962 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-25 03:56:21.166969 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-25 03:56:21.166976 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-25 03:56:21.166983 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 03:56:21.167002 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 03:56:21.167009 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 03:56:21.167016 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-25 03:56:21.167028 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-25 03:56:21.167040 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-25 03:56:23.345164 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-25 03:56:23.345378 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 03:56:23.345444 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 03:56:23.345455 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 03:56:23.345462 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-25 03:56:23.345470 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-03-25 03:56:23.345495 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-03-25 03:56:23.345526 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-03-25 03:56:23.345533 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-25 03:56:23.345549 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-25 03:56:23.345555 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-25 03:56:23.345564 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-03-25 03:56:23.345573 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 03:56:23.345587 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 03:56:23.345602 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 03:56:25.092733 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 03:56:25.092937 | orchestrator |
2026-03-25 03:56:25.092954 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] ***
2026-03-25 03:56:25.092965 | orchestrator | Wednesday 25 March 2026 03:56:23 +0000 (0:00:06.144) 0:00:15.331 *******
2026-03-25 03:56:25.092976 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-03-25 03:56:25.092986 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-25 03:56:25.092997 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-25 03:56:25.093060 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-03-25 03:56:25.093093 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 03:56:25.093113 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-25 03:56:25.093121 | orchestrator | skipping: [testbed-manager]
2026-03-25 03:56:25.093131 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 03:56:25.093140 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 03:56:25.093149 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-25 03:56:25.093157 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 03:56:25.093168 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-25 03:56:25.093176 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 03:56:25.093189 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 03:56:25.786941 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-25 03:56:25.787070 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-25 03:56:25.787077 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 03:56:25.787082 | orchestrator | skipping: [testbed-node-0]
2026-03-25 03:56:25.787090 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 03:56:25.787095 | orchestrator | skipping: [testbed-node-1]
2026-03-25 03:56:25.787100 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 03:56:25.787123 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-25 03:56:25.787128 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 03:56:25.787155 | orchestrator | skipping: [testbed-node-2]
2026-03-25 03:56:25.787173 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-25 03:56:25.787177 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-25 03:56:25.787181 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-25 03:56:25.787186 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-25 03:56:25.787190 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-03-25 03:56:25.787197 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image':
'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-25 03:56:25.787201 | orchestrator | skipping: [testbed-node-3] 2026-03-25 03:56:25.787205 | orchestrator | skipping: [testbed-node-4] 2026-03-25 03:56:25.787209 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-25 03:56:25.787222 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-25 03:56:26.874266 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-25 03:56:26.874421 | orchestrator | skipping: [testbed-node-5] 2026-03-25 03:56:26.874429 | orchestrator | 2026-03-25 03:56:26.874434 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2026-03-25 03:56:26.874441 | orchestrator | Wednesday 25 March 2026 03:56:25 +0000 (0:00:02.449) 0:00:17.780 ******* 2026-03-25 03:56:26.874445 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-25 03:56:26.874450 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 03:56:26.874455 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 03:56:26.874460 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-25 03:56:26.874501 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 03:56:26.874520 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 
'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-03-25 03:56:26.874527 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-25 03:56:26.874532 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-25 03:56:26.874538 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-03-25 03:56:26.874543 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 03:56:26.874554 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-25 03:56:26.874558 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 03:56:26.874567 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 03:56:28.402876 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-25 03:56:28.403009 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 03:56:28.403023 | orchestrator | skipping: [testbed-node-0] 2026-03-25 03:56:28.403034 | orchestrator | skipping: [testbed-manager] 2026-03-25 03:56:28.403042 | orchestrator | 
skipping: [testbed-node-1] 2026-03-25 03:56:28.403050 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-25 03:56:28.403060 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 03:56:28.403137 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 03:56:28.403146 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-25 03:56:28.403153 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 03:56:28.403160 | orchestrator | skipping: [testbed-node-2] 2026-03-25 03:56:28.403190 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-25 03:56:28.403198 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', 
'/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-25 03:56:28.403205 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-25 03:56:28.403212 | orchestrator | skipping: [testbed-node-3] 2026-03-25 03:56:28.403219 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-25 03:56:28.403233 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-25 03:56:28.403245 | orchestrator | skipping: [testbed-node-4] => (item={'key': 
'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-25 03:56:28.403252 | orchestrator | skipping: [testbed-node-4] 2026-03-25 03:56:28.403259 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-25 03:56:28.403275 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-25 03:56:32.572869 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-25 03:56:32.572989 | orchestrator | skipping: [testbed-node-5] 2026-03-25 03:56:32.573002 | orchestrator | 2026-03-25 03:56:32.573010 | orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2026-03-25 03:56:32.573026 | orchestrator | Wednesday 25 March 2026 03:56:28 +0000 (0:00:02.606) 0:00:20.387 ******* 2026-03-25 03:56:32.573070 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-25 03:56:32.573107 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-25 03:56:32.573119 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-25 03:56:32.573145 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-25 03:56:32.573156 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-25 03:56:32.573187 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-25 03:56:32.573200 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-25 03:56:32.573210 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-25 03:56:32.573220 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 03:56:32.573237 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 03:56:32.573247 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 03:56:32.573262 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-25 03:56:32.573273 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-25 03:56:32.573340 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-25 03:56:34.958634 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-25 03:56:34.958745 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 03:56:34.958790 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 03:56:34.958801 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-25 03:56:34.958823 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 03:56:34.958831 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-25 03:56:34.958841 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-25 03:56:34.958868 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-25 03:56:34.958879 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 
'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-25 03:56:34.958895 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-25 03:56:34.958903 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-25 03:56:34.958916 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 03:56:34.958925 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 03:56:34.958933 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 03:56:34.958950 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 
03:56:39.527482 | orchestrator | 2026-03-25 03:56:39.527560 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] ******************* 2026-03-25 03:56:39.527585 | orchestrator | Wednesday 25 March 2026 03:56:34 +0000 (0:00:06.559) 0:00:26.947 ******* 2026-03-25 03:56:39.527590 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-25 03:56:39.527594 | orchestrator | 2026-03-25 03:56:39.527599 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] *********** 2026-03-25 03:56:39.527603 | orchestrator | Wednesday 25 March 2026 03:56:36 +0000 (0:00:01.099) 0:00:28.046 ******* 2026-03-25 03:56:39.527608 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1084361, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774403793.832457, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-25 03:56:39.527615 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1084361, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774403793.832457, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-25 03:56:39.527629 | orchestrator | skipping: [testbed-node-2] => (item={'path': 
'/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1084361, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774403793.832457, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-25 03:56:39.527633 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1084361, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774403793.832457, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-25 03:56:39.527638 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1084428, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774403793.8492982, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-25 03:56:39.527642 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': 
False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1084361, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774403793.832457, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-25 03:56:39.527664 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1084361, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774403793.832457, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-25 03:56:39.527668 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1084361, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774403793.832457, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-25 03:56:39.527672 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1084428, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 
'mtime': 1764530892.0, 'ctime': 1774403793.8492982, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-25 03:56:39.527677 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1084354, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774403793.832054, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-25 03:56:39.527684 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1084428, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774403793.8492982, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-25 03:56:39.527688 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1084428, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774403793.8492982, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': 
True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-25 03:56:39.527692 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1084354, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774403793.832054, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-25 03:56:39.527704 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1084428, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774403793.8492982, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-25 03:56:41.719490 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1084428, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774403793.8492982, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-25 03:56:41.719597 | orchestrator | skipping: 
[testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1084354, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774403793.832054, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-25 03:56:41.719609 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1084378, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774403793.8470435, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-25 03:56:41.719630 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1084378, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774403793.8470435, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-25 03:56:41.719636 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1084428, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774403793.8492982, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-25 03:56:41.719642 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1084354, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774403793.832054, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-25 03:56:41.719665 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1084378, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774403793.8470435, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-25 03:56:41.719686 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1084354, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 
1764530892.0, 'ctime': 1774403793.832054, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-25 03:56:41.719692 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1084354, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774403793.832054, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-25 03:56:41.719698 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1084349, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774403793.8300433, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-25 03:56:41.719707 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1084349, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774403793.8300433, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': 
False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-25 03:56:41.719712 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1084349, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774403793.8300433, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-25 03:56:41.719717 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1084378, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774403793.8470435, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-25 03:56:41.719727 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1084378, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774403793.8470435, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-25 03:56:41.719738 | orchestrator | skipping: [testbed-node-3] 
=> (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1084378, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774403793.8470435, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-25 03:56:43.588066 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1084362, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774403793.8329155, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-25 03:56:43.588193 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1084362, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774403793.8329155, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-25 03:56:43.588238 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 
'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1084349, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774403793.8300433, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-25 03:56:43.588252 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1084362, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774403793.8329155, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-25 03:56:43.588366 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1084349, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774403793.8300433, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-25 03:56:43.588388 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1084349, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 
'ctime': 1774403793.8300433, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-25 03:56:43.588401 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/node.rules)
2026-03-25 03:56:43.588434 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/haproxy.rules)
2026-03-25 03:56:43.588446 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/node.rules)
2026-03-25 03:56:43.588464 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/node.rules)
2026-03-25 03:56:43.588476 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/ceph.rules)
2026-03-25 03:56:43.588495 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/haproxy.rules)
2026-03-25 03:56:43.588507 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/hardware.rules)
2026-03-25 03:56:43.588518 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/haproxy.rules)
2026-03-25 03:56:43.588537 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/elasticsearch.rules)
2026-03-25 03:56:45.812680 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/hardware.rules)
2026-03-25 03:56:45.812809 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/node.rules)
2026-03-25 03:56:45.812827 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/hardware.rules)
2026-03-25 03:56:45.812863 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/node.rules)
2026-03-25 03:56:45.812874 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/prometheus.rec.rules)
2026-03-25 03:56:45.812883 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/elasticsearch.rules)
2026-03-25 03:56:45.812892 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/node.rules)
2026-03-25 03:56:45.812919 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/elasticsearch.rules)
2026-03-25 03:56:45.812934 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/hardware.rules)
2026-03-25 03:56:45.812943 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/hardware.rules)
2026-03-25 03:56:45.812959 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/alertmanager.rec.rules)
2026-03-25 03:56:45.812968 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/prometheus.rec.rules)
2026-03-25 03:56:45.812977 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/hardware.rules)
2026-03-25 03:56:45.812986 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/prometheus.rec.rules)
2026-03-25 03:56:45.813004 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/elasticsearch.rules)
2026-03-25 03:56:47.980632 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/redfish.rules)
2026-03-25 03:56:47.980853 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/elasticsearch.rules)
2026-03-25 03:56:47.980879 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/openstack.rules)
2026-03-25 03:56:47.980894 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/alertmanager.rec.rules)
2026-03-25 03:56:47.980908 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/elasticsearch.rules)
2026-03-25 03:56:47.980923 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/prometheus.rec.rules)
2026-03-25 03:56:47.980936 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/alertmanager.rec.rules)
2026-03-25 03:56:47.980983 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/prometheus.rec.rules)
2026-03-25 03:56:47.981012 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/redfish.rules)
2026-03-25 03:56:47.981025 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/prometheus-extra.rules)
2026-03-25 03:56:47.981039 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/prometheus.rec.rules)
2026-03-25 03:56:47.981052 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/prometheus-extra.rules)
2026-03-25 03:56:47.981065 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/alertmanager.rec.rules)
2026-03-25 03:56:47.981079 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/redfish.rules)
2026-03-25 03:56:47.981107 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/alertmanager.rec.rules)
2026-03-25 03:56:50.195479 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/cadvisor.rules)
2026-03-25 03:56:50.195582 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/ceph.rec.rules)
2026-03-25 03:56:50.195592 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/alertmanager.rec.rules)
2026-03-25 03:56:50.195598 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/redfish.rules)
2026-03-25 03:56:50.195605 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/ceph.rec.rules)
2026-03-25 03:56:50.195610 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/prometheus-extra.rules)
2026-03-25 03:56:50.195649 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/redfish.rules)
2026-03-25 03:56:50.195667 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/redfish.rules)
2026-03-25 03:56:50.195673 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/prometheus-extra.rules)
2026-03-25 03:56:50.195679 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/alertmanager.rules)
2026-03-25 03:56:50.195684 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/alertmanager.rules)
2026-03-25 03:56:50.195690 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/prometheus-extra.rules)
2026-03-25 03:56:50.195695 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/haproxy.rules)
2026-03-25 03:56:50.195710 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/ceph.rec.rules)
2026-03-25 03:56:50.195743 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/node.rec.rules)
2026-03-25 03:56:52.397572 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/ceph.rec.rules)
2026-03-25 03:56:52.397655 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/prometheus-extra.rules)
2026-03-25 03:56:52.397663 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/node.rec.rules)
2026-03-25 03:56:52.397668 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/mysql.rules)
2026-03-25 03:56:52.397672 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/ceph.rec.rules)
2026-03-25 03:56:52.397708 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/alertmanager.rules)
2026-03-25 03:56:52.397712 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/rabbitmq.rules)
2026-03-25 03:56:52.397728 | orchestrator | skipping: [testbed-node-0]
2026-03-25 03:56:52.397734 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/alertmanager.rules)
2026-03-25 03:56:52.397738 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/ceph.rec.rules)
2026-03-25 03:56:52.397742 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/mysql.rules)
2026-03-25 03:56:52.397746 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/alertmanager.rules)
2026-03-25 03:56:52.397754 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/node.rec.rules)
2026-03-25 03:56:52.397762 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/alertmanager.rules)
2026-03-25 03:56:52.397766 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1084442, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774403793.8546753, 'gr_name': 'root', 'pw_name': 'root', 'wusr':
True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-25 03:56:52.397772 | orchestrator | skipping: [testbed-node-1] 2026-03-25 03:56:59.370468 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1084370, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774403793.8340433, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-25 03:56:59.370575 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1084372, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774403793.8348494, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-25 03:56:59.370585 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1084370, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774403793.8340433, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 
'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-25 03:56:59.370594 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1084367, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774403793.83379, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-25 03:56:59.370621 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1084370, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774403793.8340433, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-25 03:56:59.370642 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1084367, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774403793.83379, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-25 03:56:59.370649 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1084442, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774403793.8546753, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-25 03:56:59.370656 | orchestrator | skipping: [testbed-node-2]
2026-03-25 03:56:59.370679 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1084367, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774403793.83379, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-25 03:56:59.370685 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1084367, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774403793.83379, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-25 03:56:59.370691 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1084442, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774403793.8546753, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-25 03:56:59.370697 | orchestrator | skipping: [testbed-node-5]
2026-03-25 03:56:59.370704 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1084442, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774403793.8546753, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-25 03:56:59.370714 | orchestrator | skipping: [testbed-node-4]
2026-03-25 03:56:59.370721 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1084442, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774403793.8546753, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-25 03:56:59.370726 | orchestrator | skipping: [testbed-node-3]
2026-03-25 03:56:59.370735 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1084364, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774403793.8332374, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-25 03:56:59.370747 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1084358, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774403793.832457, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-25 03:57:10.876645 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1084426, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774403793.8488848, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-25 03:57:10.876762 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1084344, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774403793.829257, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-25 03:57:10.876774 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1084447, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774403793.8553708, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-25 03:57:10.876802 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1084424, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774403793.8480437, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-25 03:57:10.876811 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1084352, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774403793.831102, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-25 03:57:10.876832 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1084346, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774403793.8297682, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-25 03:57:10.876840 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1084370, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774403793.8340433, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-25 03:57:10.876861 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1084367, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774403793.83379, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid':
False})
2026-03-25 03:57:10.876868 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1084442, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774403793.8546753, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-25 03:57:10.876875 | orchestrator |
2026-03-25 03:57:10.876884 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ********************
2026-03-25 03:57:10.876893 | orchestrator | Wednesday 25 March 2026 03:57:07 +0000 (0:00:31.698) 0:00:59.745 *******
2026-03-25 03:57:10.876905 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-25 03:57:10.876914 | orchestrator |
2026-03-25 03:57:10.876920 | orchestrator | TASK [prometheus : Find prometheus host config overrides] **********************
2026-03-25 03:57:10.876926 | orchestrator | Wednesday 25 March 2026 03:57:08 +0000 (0:00:00.863) 0:01:00.608 *******
2026-03-25 03:57:10.876933 | orchestrator | [WARNING]: Skipped
2026-03-25 03:57:10.876941 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-25 03:57:10.876948 | orchestrator | manager/prometheus.yml.d' path due to this access issue:
2026-03-25 03:57:10.876955 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-25 03:57:10.876961 | orchestrator | manager/prometheus.yml.d' is not a directory
2026-03-25 03:57:10.876967 | orchestrator | [WARNING]: Skipped
2026-03-25 03:57:10.876974 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-25 03:57:10.876980 | orchestrator | node-1/prometheus.yml.d' path due to this access issue:
2026-03-25 03:57:10.876987 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-25 03:57:10.876993 | orchestrator | node-1/prometheus.yml.d' is not a directory
2026-03-25 03:57:10.877000 | orchestrator | [WARNING]: Skipped
2026-03-25 03:57:10.877006 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-25 03:57:10.877013 | orchestrator | node-0/prometheus.yml.d' path due to this access issue:
2026-03-25 03:57:10.877019 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-25 03:57:10.877026 | orchestrator | node-0/prometheus.yml.d' is not a directory
2026-03-25 03:57:10.877032 | orchestrator | [WARNING]: Skipped
2026-03-25 03:57:10.877044 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-25 03:57:10.877051 | orchestrator | node-3/prometheus.yml.d' path due to this access issue:
2026-03-25 03:57:10.877057 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-25 03:57:10.877063 | orchestrator | node-3/prometheus.yml.d' is not a directory
2026-03-25 03:57:10.877069 | orchestrator | [WARNING]: Skipped
2026-03-25 03:57:10.877075 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-25 03:57:10.877081 | orchestrator | node-2/prometheus.yml.d' path due to this access issue:
2026-03-25 03:57:10.877087 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-25 03:57:10.877093 | orchestrator | node-2/prometheus.yml.d' is not a directory
2026-03-25 03:57:10.877099 | orchestrator | [WARNING]: Skipped
2026-03-25 03:57:10.877110 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-25 03:57:10.877116 | orchestrator | node-4/prometheus.yml.d' path due to this access issue:
2026-03-25 03:57:10.877122 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-25 03:57:10.877128 | orchestrator | node-4/prometheus.yml.d' is not a directory
2026-03-25 03:57:10.877135 | orchestrator | [WARNING]: Skipped
2026-03-25 03:57:10.877141 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-25 03:57:10.877147 | orchestrator | node-5/prometheus.yml.d' path due to this access issue:
2026-03-25 03:57:10.877153 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-25 03:57:10.877159 | orchestrator | node-5/prometheus.yml.d' is not a directory
2026-03-25 03:57:10.877166 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-25 03:57:10.877172 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-03-25 03:57:10.877178 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-25 03:57:10.877185 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-03-25 03:57:10.877191 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-03-25 03:57:10.877197 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-03-25 03:57:10.877209 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-03-25 03:57:10.877216 | orchestrator |
2026-03-25 03:57:10.877227 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************
2026-03-25 03:57:47.092280 | orchestrator | Wednesday 25 March 2026 03:57:10 +0000 (0:00:02.255) 0:01:02.863 *******
2026-03-25 03:57:47.092359 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-03-25 03:57:47.092367 | orchestrator | skipping: [testbed-node-0]
2026-03-25 03:57:47.092373 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-03-25 03:57:47.092378 | orchestrator | skipping: [testbed-node-1]
2026-03-25 03:57:47.092382 | orchestrator |
skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-03-25 03:57:47.092387 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-03-25 03:57:47.092391 | orchestrator | skipping: [testbed-node-2]
2026-03-25 03:57:47.092395 | orchestrator | skipping: [testbed-node-3]
2026-03-25 03:57:47.092399 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-03-25 03:57:47.092403 | orchestrator | skipping: [testbed-node-4]
2026-03-25 03:57:47.092407 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-03-25 03:57:47.092410 | orchestrator | skipping: [testbed-node-5]
2026-03-25 03:57:47.092414 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-03-25 03:57:47.092418 | orchestrator |
2026-03-25 03:57:47.092423 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ********************
2026-03-25 03:57:47.092426 | orchestrator | Wednesday 25 March 2026 03:57:31 +0000 (0:00:20.187) 0:01:23.051 *******
2026-03-25 03:57:47.092430 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-03-25 03:57:47.092434 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-03-25 03:57:47.092438 | orchestrator | skipping: [testbed-node-0]
2026-03-25 03:57:47.092442 | orchestrator | skipping: [testbed-node-1]
2026-03-25 03:57:47.092446 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-03-25 03:57:47.092450 | orchestrator | skipping: [testbed-node-2]
2026-03-25 03:57:47.092453 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-03-25 03:57:47.092457 | orchestrator | skipping: [testbed-node-4]
2026-03-25 03:57:47.092461 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-03-25 03:57:47.092465 | orchestrator | skipping: [testbed-node-3]
2026-03-25 03:57:47.092469 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-03-25 03:57:47.092472 | orchestrator | skipping: [testbed-node-5]
2026-03-25 03:57:47.092476 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-03-25 03:57:47.092480 | orchestrator |
2026-03-25 03:57:47.092484 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] ***********
2026-03-25 03:57:47.092487 | orchestrator | Wednesday 25 March 2026 03:57:33 +0000 (0:00:02.874) 0:01:25.926 *******
2026-03-25 03:57:47.092492 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-03-25 03:57:47.092498 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-03-25 03:57:47.092502 | orchestrator | skipping: [testbed-node-0]
2026-03-25 03:57:47.092506 | orchestrator | skipping: [testbed-node-1]
2026-03-25 03:57:47.092509 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-03-25 03:57:47.092530 | orchestrator | skipping: [testbed-node-3]
2026-03-25 03:57:47.092535 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-03-25 03:57:47.092538 | orchestrator | skipping: [testbed-node-2]
2026-03-25 03:57:47.092571 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-03-25 03:57:47.092580 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-03-25 03:57:47.092586 | orchestrator | skipping: [testbed-node-4]
2026-03-25 03:57:47.092591 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-03-25 03:57:47.092597 | orchestrator | skipping: [testbed-node-5]
2026-03-25 03:57:47.092603 | orchestrator |
2026-03-25 03:57:47.092609 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ******
2026-03-25 03:57:47.092615 | orchestrator | Wednesday 25 March 2026 03:57:36 +0000 (0:00:02.258) 0:01:28.184 *******
2026-03-25 03:57:47.092621 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-25 03:57:47.092626 | orchestrator |
2026-03-25 03:57:47.092633 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] ***
2026-03-25 03:57:47.092640 | orchestrator | Wednesday 25 March 2026 03:57:37 +0000 (0:00:00.923) 0:01:29.108 *******
2026-03-25 03:57:47.092653 | orchestrator | skipping: [testbed-manager]
2026-03-25 03:57:47.092659 | orchestrator | skipping: [testbed-node-0]
2026-03-25 03:57:47.092664 | orchestrator | skipping: [testbed-node-1]
2026-03-25 03:57:47.092670 | orchestrator | skipping: [testbed-node-2]
2026-03-25 03:57:47.092691 | orchestrator | skipping: [testbed-node-3]
2026-03-25 03:57:47.092697 | orchestrator | skipping: [testbed-node-4]
2026-03-25 03:57:47.092703 | orchestrator | skipping: [testbed-node-5]
2026-03-25 03:57:47.092710 | orchestrator |
2026-03-25 03:57:47.092716 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ********************
2026-03-25 03:57:47.092725 | orchestrator | Wednesday 25 March 2026 03:57:38 +0000
(0:00:00.906) 0:01:30.015 *******
2026-03-25 03:57:47.092733 | orchestrator | skipping: [testbed-manager]
2026-03-25 03:57:47.092739 | orchestrator | skipping: [testbed-node-3]
2026-03-25 03:57:47.092744 | orchestrator | skipping: [testbed-node-4]
2026-03-25 03:57:47.092750 | orchestrator | skipping: [testbed-node-5]
2026-03-25 03:57:47.092756 | orchestrator | changed: [testbed-node-0]
2026-03-25 03:57:47.092762 | orchestrator | changed: [testbed-node-1]
2026-03-25 03:57:47.092768 | orchestrator | changed: [testbed-node-2]
2026-03-25 03:57:47.092773 | orchestrator |
2026-03-25 03:57:47.092780 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] ***********
2026-03-25 03:57:47.092786 | orchestrator | Wednesday 25 March 2026 03:57:40 +0000 (0:00:02.328) 0:01:32.343 *******
2026-03-25 03:57:47.092791 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-03-25 03:57:47.092798 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-03-25 03:57:47.092804 | orchestrator | skipping: [testbed-manager]
2026-03-25 03:57:47.092810 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-03-25 03:57:47.092817 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-03-25 03:57:47.092823 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-03-25 03:57:47.092829 | orchestrator | skipping: [testbed-node-0]
2026-03-25 03:57:47.092835 | orchestrator | skipping: [testbed-node-1]
2026-03-25 03:57:47.092841 | orchestrator | skipping: [testbed-node-2]
2026-03-25 03:57:47.092846 | orchestrator | skipping: [testbed-node-3]
2026-03-25 03:57:47.092852 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-03-25 03:57:47.092867 | orchestrator | skipping: [testbed-node-4]
2026-03-25 03:57:47.092875 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-03-25 03:57:47.092881 | orchestrator | skipping: [testbed-node-5]
2026-03-25 03:57:47.092887 | orchestrator |
2026-03-25 03:57:47.092894 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ******************
2026-03-25 03:57:47.092900 | orchestrator | Wednesday 25 March 2026 03:57:42 +0000 (0:00:01.848) 0:01:34.192 *******
2026-03-25 03:57:47.092907 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-03-25 03:57:47.092914 | orchestrator | skipping: [testbed-node-0]
2026-03-25 03:57:47.092921 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-03-25 03:57:47.092927 | orchestrator | skipping: [testbed-node-1]
2026-03-25 03:57:47.092934 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-03-25 03:57:47.092941 | orchestrator | skipping: [testbed-node-2]
2026-03-25 03:57:47.092947 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-03-25 03:57:47.092953 | orchestrator | skipping: [testbed-node-3]
2026-03-25 03:57:47.092960 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-03-25 03:57:47.092967 | orchestrator | skipping: [testbed-node-4]
2026-03-25 03:57:47.092982 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-03-25 03:57:47.092988 | orchestrator | skipping: [testbed-node-5]
2026-03-25 03:57:47.092994 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-03-25 03:57:47.093001 | orchestrator |
2026-03-25 03:57:47.093007 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ******************
2026-03-25 03:57:47.093014 | orchestrator | Wednesday 25 March 2026 03:57:44 +0000 (0:00:01.823) 0:01:36.015 *******
2026-03-25 03:57:47.093020 | orchestrator | [WARNING]: Skipped
2026-03-25 03:57:47.093036 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path
2026-03-25 03:57:47.093044 | orchestrator | due to this access issue:
2026-03-25 03:57:47.093049 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is
2026-03-25 03:57:47.093053 | orchestrator | not a directory
2026-03-25 03:57:47.093058 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-25 03:57:47.093064 | orchestrator |
2026-03-25 03:57:47.093070 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] ***************
2026-03-25 03:57:47.093076 | orchestrator | Wednesday 25 March 2026 03:57:45 +0000 (0:00:01.398) 0:01:37.413 *******
2026-03-25 03:57:47.093085 | orchestrator | skipping: [testbed-manager]
2026-03-25 03:57:47.093092 | orchestrator | skipping: [testbed-node-0]
2026-03-25 03:57:47.093100 | orchestrator | skipping: [testbed-node-1]
2026-03-25 03:57:47.093105 | orchestrator | skipping: [testbed-node-2]
2026-03-25 03:57:47.093111 | orchestrator | skipping: [testbed-node-3]
2026-03-25 03:57:47.093117 | orchestrator | skipping: [testbed-node-4]
2026-03-25 03:57:47.093123 | orchestrator | skipping: [testbed-node-5]
2026-03-25 03:57:47.093129 | orchestrator |
2026-03-25 03:57:47.093134 | orchestrator | TASK [prometheus : Template extra prometheus server config files] **************
2026-03-25 03:57:47.093140 | orchestrator | Wednesday 25 March 2026 03:57:46 +0000 (0:00:01.109) 0:01:38.523 *******
2026-03-25 03:57:47.093147 | orchestrator | skipping: [testbed-manager]
2026-03-25 03:57:47.093153 | orchestrator | skipping: [testbed-node-0]
2026-03-25 03:57:47.093158 | orchestrator | skipping: [testbed-node-1]
2026-03-25 03:57:47.093173 | orchestrator | skipping: [testbed-node-2]
2026-03-25 03:57:50.328578 | orchestrator | skipping: [testbed-node-3]
2026-03-25 03:57:50.328661 | orchestrator | skipping: [testbed-node-4]
2026-03-25 03:57:50.328686 | orchestrator | skipping: [testbed-node-5]
2026-03-25 03:57:50.328690 | orchestrator |
2026-03-25 03:57:50.328696 | orchestrator | TASK [prometheus : Check prometheus containers] ********************************
2026-03-25 03:57:50.328701 | orchestrator | Wednesday 25 March 2026 03:57:47 +0000 (0:00:01.129) 0:01:39.652 *******
2026-03-25 03:57:50.328708 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-25 03:57:50.328738 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-25 03:57:50.328743 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name':
'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-25 03:57:50.328749 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-25 03:57:50.328765 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-25 03:57:50.328770 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 
'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-25 03:57:50.328785 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 03:57:50.328797 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 03:57:50.328801 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-25 
03:57:50.328805 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 03:57:50.328809 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-25 03:57:50.328813 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-25 03:57:50.328821 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-25 03:57:50.328826 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 03:57:50.328839 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 03:57:54.214715 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 
'dimensions': {}}}) 2026-03-25 03:57:54.214793 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 03:57:54.214800 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-25 03:57:54.214806 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-25 03:57:54.214813 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-25 03:57:54.214837 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-25 03:57:54.214855 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-25 03:57:54.214870 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', 
'/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-25 03:57:54.214875 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-25 03:57:54.214880 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-25 03:57:54.214885 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 03:57:54.214890 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 03:57:54.214896 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 03:57:54.214904 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 03:57:54.214908 | 
orchestrator | 2026-03-25 03:57:54.214913 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] *** 2026-03-25 03:57:54.214919 | orchestrator | Wednesday 25 March 2026 03:57:51 +0000 (0:00:04.267) 0:01:43.919 ******* 2026-03-25 03:57:54.214927 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-03-25 03:59:45.322726 | orchestrator | skipping: [testbed-manager] 2026-03-25 03:59:45.322816 | orchestrator | 2026-03-25 03:59:45.322823 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-25 03:59:45.322829 | orchestrator | Wednesday 25 March 2026 03:57:53 +0000 (0:00:01.416) 0:01:45.336 ******* 2026-03-25 03:59:45.322833 | orchestrator | 2026-03-25 03:59:45.322837 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-25 03:59:45.322842 | orchestrator | Wednesday 25 March 2026 03:57:53 +0000 (0:00:00.327) 0:01:45.664 ******* 2026-03-25 03:59:45.322846 | orchestrator | 2026-03-25 03:59:45.322850 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-25 03:59:45.322854 | orchestrator | Wednesday 25 March 2026 03:57:53 +0000 (0:00:00.086) 0:01:45.750 ******* 2026-03-25 03:59:45.322857 | orchestrator | 2026-03-25 03:59:45.322861 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-25 03:59:45.322865 | orchestrator | Wednesday 25 March 2026 03:57:53 +0000 (0:00:00.078) 0:01:45.829 ******* 2026-03-25 03:59:45.322869 | orchestrator | 2026-03-25 03:59:45.322873 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-25 03:59:45.322876 | orchestrator | Wednesday 25 March 2026 03:57:53 +0000 (0:00:00.088) 0:01:45.917 ******* 2026-03-25 03:59:45.322880 | orchestrator | 2026-03-25 03:59:45.322884 | orchestrator | TASK [prometheus : Flush handlers] 
********************************************* 2026-03-25 03:59:45.322888 | orchestrator | Wednesday 25 March 2026 03:57:53 +0000 (0:00:00.084) 0:01:46.001 ******* 2026-03-25 03:59:45.322892 | orchestrator | 2026-03-25 03:59:45.322899 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-25 03:59:45.322904 | orchestrator | Wednesday 25 March 2026 03:57:54 +0000 (0:00:00.095) 0:01:46.096 ******* 2026-03-25 03:59:45.322910 | orchestrator | 2026-03-25 03:59:45.322916 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] ************* 2026-03-25 03:59:45.322921 | orchestrator | Wednesday 25 March 2026 03:57:54 +0000 (0:00:00.110) 0:01:46.207 ******* 2026-03-25 03:59:45.322927 | orchestrator | changed: [testbed-manager] 2026-03-25 03:59:45.322934 | orchestrator | 2026-03-25 03:59:45.322940 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ****** 2026-03-25 03:59:45.322946 | orchestrator | Wednesday 25 March 2026 03:58:18 +0000 (0:00:24.191) 0:02:10.398 ******* 2026-03-25 03:59:45.322952 | orchestrator | changed: [testbed-manager] 2026-03-25 03:59:45.322958 | orchestrator | changed: [testbed-node-1] 2026-03-25 03:59:45.322964 | orchestrator | changed: [testbed-node-3] 2026-03-25 03:59:45.322970 | orchestrator | changed: [testbed-node-2] 2026-03-25 03:59:45.322977 | orchestrator | changed: [testbed-node-0] 2026-03-25 03:59:45.322983 | orchestrator | changed: [testbed-node-5] 2026-03-25 03:59:45.322989 | orchestrator | changed: [testbed-node-4] 2026-03-25 03:59:45.323066 | orchestrator | 2026-03-25 03:59:45.323072 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] **** 2026-03-25 03:59:45.323076 | orchestrator | Wednesday 25 March 2026 03:58:32 +0000 (0:00:13.950) 0:02:24.349 ******* 2026-03-25 03:59:45.323080 | orchestrator | changed: [testbed-node-0] 2026-03-25 03:59:45.323084 | orchestrator | changed: 
[testbed-node-1] 2026-03-25 03:59:45.323088 | orchestrator | changed: [testbed-node-2] 2026-03-25 03:59:45.323091 | orchestrator | 2026-03-25 03:59:45.323095 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] *** 2026-03-25 03:59:45.323100 | orchestrator | Wednesday 25 March 2026 03:58:38 +0000 (0:00:06.056) 0:02:30.405 ******* 2026-03-25 03:59:45.323104 | orchestrator | changed: [testbed-node-0] 2026-03-25 03:59:45.323108 | orchestrator | changed: [testbed-node-1] 2026-03-25 03:59:45.323111 | orchestrator | changed: [testbed-node-2] 2026-03-25 03:59:45.323115 | orchestrator | 2026-03-25 03:59:45.323119 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] *********** 2026-03-25 03:59:45.323123 | orchestrator | Wednesday 25 March 2026 03:58:49 +0000 (0:00:11.168) 0:02:41.573 ******* 2026-03-25 03:59:45.323126 | orchestrator | changed: [testbed-node-2] 2026-03-25 03:59:45.323130 | orchestrator | changed: [testbed-node-4] 2026-03-25 03:59:45.323134 | orchestrator | changed: [testbed-node-3] 2026-03-25 03:59:45.323137 | orchestrator | changed: [testbed-manager] 2026-03-25 03:59:45.323141 | orchestrator | changed: [testbed-node-0] 2026-03-25 03:59:45.323145 | orchestrator | changed: [testbed-node-5] 2026-03-25 03:59:45.323148 | orchestrator | changed: [testbed-node-1] 2026-03-25 03:59:45.323152 | orchestrator | 2026-03-25 03:59:45.323156 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] ******* 2026-03-25 03:59:45.323171 | orchestrator | Wednesday 25 March 2026 03:59:04 +0000 (0:00:15.369) 0:02:56.943 ******* 2026-03-25 03:59:45.323175 | orchestrator | changed: [testbed-manager] 2026-03-25 03:59:45.323179 | orchestrator | 2026-03-25 03:59:45.323183 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] *** 2026-03-25 03:59:45.323187 | orchestrator | Wednesday 25 March 2026 03:59:17 +0000 (0:00:12.642) 
0:03:09.585 ******* 2026-03-25 03:59:45.323191 | orchestrator | changed: [testbed-node-2] 2026-03-25 03:59:45.323194 | orchestrator | changed: [testbed-node-0] 2026-03-25 03:59:45.323198 | orchestrator | changed: [testbed-node-1] 2026-03-25 03:59:45.323202 | orchestrator | 2026-03-25 03:59:45.323205 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] *** 2026-03-25 03:59:45.323209 | orchestrator | Wednesday 25 March 2026 03:59:28 +0000 (0:00:10.660) 0:03:20.245 ******* 2026-03-25 03:59:45.323213 | orchestrator | changed: [testbed-manager] 2026-03-25 03:59:45.323216 | orchestrator | 2026-03-25 03:59:45.323220 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] *** 2026-03-25 03:59:45.323224 | orchestrator | Wednesday 25 March 2026 03:59:33 +0000 (0:00:05.725) 0:03:25.971 ******* 2026-03-25 03:59:45.323227 | orchestrator | changed: [testbed-node-4] 2026-03-25 03:59:45.323231 | orchestrator | changed: [testbed-node-3] 2026-03-25 03:59:45.323235 | orchestrator | changed: [testbed-node-5] 2026-03-25 03:59:45.323238 | orchestrator | 2026-03-25 03:59:45.323242 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-25 03:59:45.323247 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-03-25 03:59:45.323270 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-03-25 03:59:45.323277 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-03-25 03:59:45.323283 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-03-25 03:59:45.323297 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-03-25 03:59:45.323303 | orchestrator | testbed-node-4 : ok=12  
changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-03-25 03:59:45.323309 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-03-25 03:59:45.323315 | orchestrator | 2026-03-25 03:59:45.323321 | orchestrator | 2026-03-25 03:59:45.323328 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-25 03:59:45.323334 | orchestrator | Wednesday 25 March 2026 03:59:44 +0000 (0:00:10.644) 0:03:36.615 ******* 2026-03-25 03:59:45.323340 | orchestrator | =============================================================================== 2026-03-25 03:59:45.323346 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 31.70s 2026-03-25 03:59:45.323352 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 24.19s 2026-03-25 03:59:45.323358 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 20.19s 2026-03-25 03:59:45.323365 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 15.37s 2026-03-25 03:59:45.323371 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 13.95s 2026-03-25 03:59:45.323376 | orchestrator | prometheus : Restart prometheus-alertmanager container ----------------- 12.64s 2026-03-25 03:59:45.323381 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ----------- 11.17s 2026-03-25 03:59:45.323388 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container ------- 10.66s 2026-03-25 03:59:45.323395 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container ------------- 10.64s 2026-03-25 03:59:45.323401 | orchestrator | prometheus : Copying over config.json files ----------------------------- 6.56s 2026-03-25 03:59:45.323406 | orchestrator | service-cert-copy : prometheus | Copying over extra CA 
certificates ----- 6.14s 2026-03-25 03:59:45.323414 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container --------------- 6.06s 2026-03-25 03:59:45.323418 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------- 5.73s 2026-03-25 03:59:45.323421 | orchestrator | prometheus : Check prometheus containers -------------------------------- 4.27s 2026-03-25 03:59:45.323425 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 3.03s 2026-03-25 03:59:45.323429 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 2.87s 2026-03-25 03:59:45.323432 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS key --- 2.61s 2026-03-25 03:59:45.323436 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS certificate --- 2.45s 2026-03-25 03:59:45.323440 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 2.33s 2026-03-25 03:59:45.323444 | orchestrator | prometheus : Copying over prometheus alertmanager config file ----------- 2.26s 2026-03-25 03:59:49.137294 | orchestrator | 2026-03-25 03:59:49 | INFO  | Task f02ea523-30c6-42c5-8a65-2fd469508f75 (grafana) was prepared for execution. 2026-03-25 03:59:49.137407 | orchestrator | 2026-03-25 03:59:49 | INFO  | It takes a moment until task f02ea523-30c6-42c5-8a65-2fd469508f75 (grafana) has been started and output is visible here. 
2026-03-25 03:59:59.453518 | orchestrator | 2026-03-25 03:59:59.453644 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-25 03:59:59.453661 | orchestrator | 2026-03-25 03:59:59.453673 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-25 03:59:59.453685 | orchestrator | Wednesday 25 March 2026 03:59:54 +0000 (0:00:00.261) 0:00:00.261 ******* 2026-03-25 03:59:59.453696 | orchestrator | ok: [testbed-node-0] 2026-03-25 03:59:59.453707 | orchestrator | ok: [testbed-node-1] 2026-03-25 03:59:59.453717 | orchestrator | ok: [testbed-node-2] 2026-03-25 03:59:59.453748 | orchestrator | 2026-03-25 03:59:59.453759 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-25 03:59:59.453770 | orchestrator | Wednesday 25 March 2026 03:59:54 +0000 (0:00:00.330) 0:00:00.591 ******* 2026-03-25 03:59:59.453781 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True) 2026-03-25 03:59:59.453792 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True) 2026-03-25 03:59:59.453803 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True) 2026-03-25 03:59:59.453813 | orchestrator | 2026-03-25 03:59:59.453824 | orchestrator | PLAY [Apply role grafana] ****************************************************** 2026-03-25 03:59:59.453835 | orchestrator | 2026-03-25 03:59:59.453845 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2026-03-25 03:59:59.453856 | orchestrator | Wednesday 25 March 2026 03:59:54 +0000 (0:00:00.453) 0:00:01.045 ******* 2026-03-25 03:59:59.453866 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-25 03:59:59.453878 | orchestrator | 2026-03-25 03:59:59.453889 | orchestrator | TASK [grafana : Ensuring config directories exist] ***************************** 
2026-03-25 03:59:59.453899 | orchestrator | Wednesday 25 March 2026 03:59:55 +0000 (0:00:00.645) 0:00:01.690 ******* 2026-03-25 03:59:59.453914 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-25 03:59:59.453928 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-25 03:59:59.453939 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': 
{'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-25 03:59:59.453950 | orchestrator | 2026-03-25 03:59:59.453961 | orchestrator | TASK [grafana : Check if extra configuration file exists] ********************** 2026-03-25 03:59:59.453972 | orchestrator | Wednesday 25 March 2026 03:59:56 +0000 (0:00:00.782) 0:00:02.473 ******* 2026-03-25 03:59:59.453983 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access 2026-03-25 03:59:59.453994 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory 2026-03-25 03:59:59.454098 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-25 03:59:59.454112 | orchestrator | 2026-03-25 03:59:59.454124 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2026-03-25 03:59:59.454145 | orchestrator | Wednesday 25 March 2026 03:59:57 +0000 (0:00:00.846) 0:00:03.319 ******* 2026-03-25 03:59:59.454158 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-25 03:59:59.454170 | orchestrator | 2026-03-25 03:59:59.454193 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ******** 2026-03-25 03:59:59.454206 | orchestrator | Wednesday 25 March 2026 03:59:57 +0000 (0:00:00.575) 0:00:03.895 ******* 2026-03-25 03:59:59.454237 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-25 03:59:59.454251 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-25 03:59:59.454265 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-25 03:59:59.454278 | orchestrator | 2026-03-25 03:59:59.454291 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] *** 2026-03-25 03:59:59.454303 | orchestrator | Wednesday 25 March 2026 03:59:58 +0000 
(0:00:01.257) 0:00:05.153 ******* 2026-03-25 03:59:59.454316 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-25 03:59:59.454329 | orchestrator | skipping: [testbed-node-0] 2026-03-25 03:59:59.454342 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-25 03:59:59.454365 | orchestrator | skipping: [testbed-node-1] 2026-03-25 03:59:59.454392 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-25 04:00:05.787454 | orchestrator | skipping: [testbed-node-2] 2026-03-25 04:00:05.787526 | orchestrator | 2026-03-25 04:00:05.787535 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] ***** 2026-03-25 04:00:05.787541 | orchestrator | Wednesday 25 March 2026 03:59:59 +0000 (0:00:00.542) 0:00:05.695 ******* 2026-03-25 04:00:05.787548 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-25 04:00:05.787555 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-25 04:00:05.787561 | orchestrator | skipping: [testbed-node-0] 2026-03-25 04:00:05.787566 | orchestrator | skipping: [testbed-node-1] 2026-03-25 04:00:05.787571 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-25 04:00:05.787576 | orchestrator | skipping: [testbed-node-2] 2026-03-25 04:00:05.787581 | orchestrator | 2026-03-25 04:00:05.787586 | orchestrator | TASK [grafana : Copying over config.json files] ******************************** 2026-03-25 04:00:05.787591 | orchestrator | Wednesday 25 March 2026 04:00:00 +0000 (0:00:00.564) 0:00:06.259 ******* 2026-03-25 04:00:05.787596 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-25 04:00:05.787621 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-25 04:00:05.787638 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-25 04:00:05.787644 | orchestrator | 2026-03-25 04:00:05.787649 | orchestrator | TASK [grafana : Copying over grafana.ini] ************************************** 2026-03-25 04:00:05.787654 | orchestrator | Wednesday 25 March 2026 04:00:01 +0000 (0:00:01.184) 0:00:07.444 ******* 2026-03-25 04:00:05.787659 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-25 04:00:05.787664 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-25 04:00:05.787669 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-25 04:00:05.787678 | 
orchestrator |
2026-03-25 04:00:05.787683 | orchestrator | TASK [grafana : Copying over extra configuration file] *************************
2026-03-25 04:00:05.787688 | orchestrator | Wednesday 25 March 2026 04:00:02 +0000 (0:00:01.468) 0:00:08.913 *******
2026-03-25 04:00:05.787693 | orchestrator | skipping: [testbed-node-0]
2026-03-25 04:00:05.787698 | orchestrator | skipping: [testbed-node-1]
2026-03-25 04:00:05.787703 | orchestrator | skipping: [testbed-node-2]
2026-03-25 04:00:05.787707 | orchestrator |
2026-03-25 04:00:05.787712 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] *************
2026-03-25 04:00:05.787717 | orchestrator | Wednesday 25 March 2026 04:00:02 +0000 (0:00:00.297) 0:00:09.210 *******
2026-03-25 04:00:05.787722 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2026-03-25 04:00:05.787727 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2026-03-25 04:00:05.787732 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2026-03-25 04:00:05.787737 | orchestrator |
2026-03-25 04:00:05.787742 | orchestrator | TASK [grafana : Configuring dashboards provisioning] ***************************
2026-03-25 04:00:05.787747 | orchestrator | Wednesday 25 March 2026 04:00:04 +0000 (0:00:01.203) 0:00:10.414 *******
2026-03-25 04:00:05.787755 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2026-03-25 04:00:05.787760 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2026-03-25 04:00:05.787765 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2026-03-25 04:00:05.787770 | orchestrator |
2026-03-25 04:00:05.787775 | orchestrator | TASK [grafana : Find custom grafana dashboards] ********************************
2026-03-25 04:00:05.787784 | orchestrator | Wednesday 25 March 2026 04:00:05 +0000 (0:00:01.610) 0:00:12.024 *******
2026-03-25 04:00:11.992235 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-25 04:00:11.992333 | orchestrator |
2026-03-25 04:00:11.992348 | orchestrator | TASK [grafana : Find templated grafana dashboards] *****************************
2026-03-25 04:00:11.992360 | orchestrator | Wednesday 25 March 2026 04:00:06 +0000 (0:00:00.766) 0:00:12.790 *******
2026-03-25 04:00:11.992370 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access
2026-03-25 04:00:11.992382 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory
2026-03-25 04:00:11.992392 | orchestrator | ok: [testbed-node-0]
2026-03-25 04:00:11.992403 | orchestrator | ok: [testbed-node-1]
2026-03-25 04:00:11.992412 | orchestrator | ok: [testbed-node-2]
2026-03-25 04:00:11.992422 | orchestrator |
2026-03-25 04:00:11.992434 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] ****************************
2026-03-25 04:00:11.992444 | orchestrator | Wednesday 25 March 2026 04:00:07 +0000 (0:00:00.713) 0:00:13.504 *******
2026-03-25 04:00:11.992453 | orchestrator | skipping: [testbed-node-0]
2026-03-25 04:00:11.992463 | orchestrator | skipping: [testbed-node-1]
2026-03-25 04:00:11.992475 | orchestrator | skipping: [testbed-node-2]
2026-03-25 04:00:11.992484 | orchestrator |
2026-03-25 04:00:11.992494 | orchestrator | TASK [grafana : Copying over custom dashboards] ********************************
2026-03-25 04:00:11.992505 | orchestrator | Wednesday 25 March 2026 04:00:07 +0000 (0:00:00.347) 0:00:13.851 *******
2026-03-25 04:00:11.992519 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644',
'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1083927, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774403793.7582972, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-25 04:00:11.992558 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1083927, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774403793.7582972, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-25 04:00:11.992570 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1083927, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774403793.7582972, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-25 04:00:11.992587 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': 
'/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1084039, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774403793.7750335, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-25 04:00:11.992636 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1084039, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774403793.7750335, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-25 04:00:11.992653 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1084039, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774403793.7750335, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-25 04:00:11.992663 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 
'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1083978, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774403793.7645638, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-25 04:00:11.992684 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1083978, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774403793.7645638, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-25 04:00:11.992694 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1083978, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774403793.7645638, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-25 04:00:11.992706 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1084042, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774403793.7766552, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-25 04:00:11.992722 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1084042, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774403793.7766552, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-25 04:00:11.992743 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1084042, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774403793.7766552, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-25 04:00:15.438693 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1083995, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774403793.7680762, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-25 04:00:15.438810 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1083995, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774403793.7680762, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-25 04:00:15.438819 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1083995, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774403793.7680762, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': 
False}}) 2026-03-25 04:00:15.438825 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1084017, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774403793.7737486, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-25 04:00:15.438843 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1084017, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774403793.7737486, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-25 04:00:15.438847 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1084017, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774403793.7737486, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 
'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-25 04:00:15.438865 | orchestrator | changed: [testbed-node-0] => (item=ceph/README.md: /operations/grafana/dashboards/ceph/README.md, mode 0644, root:root, 84 bytes)
2026-03-25 04:00:15.438876 | orchestrator | changed: [testbed-node-1] => (item=ceph/README.md: /operations/grafana/dashboards/ceph/README.md, mode 0644, root:root, 84 bytes)
2026-03-25 04:00:15.438883 | orchestrator | changed: [testbed-node-2] => (item=ceph/README.md: /operations/grafana/dashboards/ceph/README.md, mode 0644, root:root, 84 bytes)
2026-03-25 04:00:15.438890 | orchestrator | changed: [testbed-node-0] => (item=ceph/ceph-cluster.json: /operations/grafana/dashboards/ceph/ceph-cluster.json, mode 0644, root:root, 34113 bytes)
2026-03-25 04:00:15.438896 | orchestrator | changed: [testbed-node-1] => (item=ceph/ceph-cluster.json: /operations/grafana/dashboards/ceph/ceph-cluster.json, mode 0644, root:root, 34113 bytes)
2026-03-25 04:00:15.438905 | orchestrator | changed: [testbed-node-2] => (item=ceph/ceph-cluster.json: /operations/grafana/dashboards/ceph/ceph-cluster.json, mode 0644, root:root, 34113 bytes)
2026-03-25 04:00:15.438917 | orchestrator | changed: [testbed-node-0] => (item=ceph/cephfs-overview.json: /operations/grafana/dashboards/ceph/cephfs-overview.json, mode 0644, root:root, 9025 bytes)
2026-03-25 04:00:19.087673 | orchestrator | changed: [testbed-node-1] => (item=ceph/cephfs-overview.json: /operations/grafana/dashboards/ceph/cephfs-overview.json, mode 0644, root:root, 9025 bytes)
2026-03-25 04:00:19.087779 | orchestrator | changed: [testbed-node-2] => (item=ceph/cephfs-overview.json: /operations/grafana/dashboards/ceph/cephfs-overview.json, mode 0644, root:root, 9025 bytes)
2026-03-25 04:00:19.087790 | orchestrator | changed: [testbed-node-0] => (item=ceph/pool-detail.json: /operations/grafana/dashboards/ceph/pool-detail.json, mode 0644, root:root, 19609 bytes)
2026-03-25 04:00:19.087798 | orchestrator | changed: [testbed-node-1] => (item=ceph/pool-detail.json: /operations/grafana/dashboards/ceph/pool-detail.json, mode 0644, root:root, 19609 bytes)
2026-03-25 04:00:19.087822 | orchestrator | changed: [testbed-node-2] => (item=ceph/pool-detail.json: /operations/grafana/dashboards/ceph/pool-detail.json, mode 0644, root:root, 19609 bytes)
2026-03-25 04:00:19.087829 | orchestrator | changed: [testbed-node-0] => (item=ceph/rbd-details.json: /operations/grafana/dashboards/ceph/rbd-details.json, mode 0644, root:root, 12997 bytes)
2026-03-25 04:00:19.087872 | orchestrator | changed: [testbed-node-1] => (item=ceph/rbd-details.json: /operations/grafana/dashboards/ceph/rbd-details.json, mode 0644, root:root, 12997 bytes)
2026-03-25 04:00:19.087880 | orchestrator | changed: [testbed-node-2] => (item=ceph/rbd-details.json: /operations/grafana/dashboards/ceph/rbd-details.json, mode 0644, root:root, 12997 bytes)
2026-03-25 04:00:19.087888 | orchestrator | changed: [testbed-node-0] => (item=ceph/ceph_overview.json: /operations/grafana/dashboards/ceph/ceph_overview.json, mode 0644, root:root, 80386 bytes)
2026-03-25 04:00:19.087896 | orchestrator | changed: [testbed-node-1] => (item=ceph/ceph_overview.json: /operations/grafana/dashboards/ceph/ceph_overview.json, mode 0644, root:root, 80386 bytes)
2026-03-25 04:00:19.087903 | orchestrator | changed: [testbed-node-2] => (item=ceph/ceph_overview.json: /operations/grafana/dashboards/ceph/ceph_overview.json, mode 0644, root:root, 80386 bytes)
2026-03-25 04:00:19.087914 | orchestrator | changed: [testbed-node-0] => (item=ceph/radosgw-detail.json: /operations/grafana/dashboards/ceph/radosgw-detail.json, mode 0644, root:root, 19695 bytes)
2026-03-25 04:00:19.087926 | orchestrator | changed: [testbed-node-2] => (item=ceph/radosgw-detail.json: /operations/grafana/dashboards/ceph/radosgw-detail.json, mode 0644, root:root, 19695 bytes)
2026-03-25 04:00:23.070637 | orchestrator | changed: [testbed-node-1] => (item=ceph/radosgw-detail.json: /operations/grafana/dashboards/ceph/radosgw-detail.json, mode 0644, root:root, 19695 bytes)
2026-03-25 04:00:23.070801 | orchestrator | changed: [testbed-node-0] => (item=ceph/osds-overview.json: /operations/grafana/dashboards/ceph/osds-overview.json, mode 0644, root:root, 38432 bytes)
2026-03-25 04:00:23.070817 | orchestrator | changed: [testbed-node-2] => (item=ceph/osds-overview.json: /operations/grafana/dashboards/ceph/osds-overview.json, mode 0644, root:root, 38432 bytes)
2026-03-25 04:00:23.070827 | orchestrator | changed: [testbed-node-1] => (item=ceph/osds-overview.json: /operations/grafana/dashboards/ceph/osds-overview.json, mode 0644, root:root, 38432 bytes)
2026-03-25 04:00:23.070859 | orchestrator | changed: [testbed-node-0] => (item=ceph/multi-cluster-overview.json: /operations/grafana/dashboards/ceph/multi-cluster-overview.json, mode 0644, root:root, 62676 bytes)
2026-03-25 04:00:23.070872 | orchestrator | changed: [testbed-node-2] => (item=ceph/multi-cluster-overview.json: /operations/grafana/dashboards/ceph/multi-cluster-overview.json, mode 0644, root:root, 62676 bytes)
2026-03-25 04:00:23.070929 | orchestrator | changed: [testbed-node-1] => (item=ceph/multi-cluster-overview.json: /operations/grafana/dashboards/ceph/multi-cluster-overview.json, mode 0644, root:root, 62676 bytes)
2026-03-25 04:00:23.070939 | orchestrator | changed: [testbed-node-0] => (item=ceph/hosts-overview.json: /operations/grafana/dashboards/ceph/hosts-overview.json, mode 0644, root:root, 27218 bytes)
2026-03-25 04:00:23.070949 | orchestrator | changed: [testbed-node-2] => (item=ceph/hosts-overview.json: /operations/grafana/dashboards/ceph/hosts-overview.json, mode 0644, root:root, 27218 bytes)
2026-03-25 04:00:23.070958 | orchestrator | changed: [testbed-node-1] => (item=ceph/hosts-overview.json: /operations/grafana/dashboards/ceph/hosts-overview.json, mode 0644, root:root, 27218 bytes)
2026-03-25 04:00:23.071002 | orchestrator | changed: [testbed-node-0] => (item=ceph/pool-overview.json: /operations/grafana/dashboards/ceph/pool-overview.json, mode 0644, root:root, 49139 bytes)
2026-03-25 04:00:23.071019 | orchestrator | changed: [testbed-node-2] => (item=ceph/pool-overview.json: /operations/grafana/dashboards/ceph/pool-overview.json, mode 0644, root:root, 49139 bytes)
2026-03-25 04:00:23.071053 | orchestrator | changed: [testbed-node-1] => (item=ceph/pool-overview.json: /operations/grafana/dashboards/ceph/pool-overview.json, mode 0644, root:root, 49139 bytes)
2026-03-25 04:00:26.812412 | orchestrator | changed: [testbed-node-0] => (item=ceph/host-details.json: /operations/grafana/dashboards/ceph/host-details.json, mode 0644, root:root, 44791 bytes)
2026-03-25 04:00:26.812553 | orchestrator | changed: [testbed-node-2] => (item=ceph/host-details.json: /operations/grafana/dashboards/ceph/host-details.json, mode 0644, root:root, 44791 bytes)
2026-03-25 04:00:26.812565 | orchestrator | changed: [testbed-node-1] => (item=ceph/host-details.json: /operations/grafana/dashboards/ceph/host-details.json, mode 0644, root:root, 44791 bytes)
2026-03-25 04:00:26.812574 | orchestrator | changed: [testbed-node-2] => (item=ceph/radosgw-sync-overview.json: /operations/grafana/dashboards/ceph/radosgw-sync-overview.json, mode 0644, root:root, 16156 bytes)
2026-03-25 04:00:26.812603 | orchestrator | changed: [testbed-node-1] => (item=ceph/radosgw-sync-overview.json: /operations/grafana/dashboards/ceph/radosgw-sync-overview.json, mode 0644, root:root, 16156 bytes)
2026-03-25 04:00:26.812636 | orchestrator | changed: [testbed-node-0] => (item=ceph/radosgw-sync-overview.json: /operations/grafana/dashboards/ceph/radosgw-sync-overview.json, mode 0644, root:root, 16156 bytes)
2026-03-25 04:00:26.812667 | orchestrator | changed: [testbed-node-2] => (item=openstack/openstack.json: /operations/grafana/dashboards/openstack/openstack.json, mode 0644, root:root, 57270 bytes)
2026-03-25 04:00:26.812675 | orchestrator | changed: [testbed-node-1] => (item=openstack/openstack.json: /operations/grafana/dashboards/openstack/openstack.json, mode 0644, root:root, 57270 bytes)
2026-03-25 04:00:26.812682 | orchestrator | changed: [testbed-node-0] => (item=openstack/openstack.json: /operations/grafana/dashboards/openstack/openstack.json, mode 0644, root:root, 57270 bytes)
2026-03-25 04:00:26.812689 | orchestrator | changed: [testbed-node-2] => (item=infrastructure/haproxy.json: /operations/grafana/dashboards/infrastructure/haproxy.json, mode 0644, root:root, 410814 bytes)
2026-03-25 04:00:26.812701 | orchestrator | changed: [testbed-node-1] => (item=infrastructure/haproxy.json: /operations/grafana/dashboards/infrastructure/haproxy.json, mode 0644, root:root, 410814 bytes)
2026-03-25 04:00:26.812715 | orchestrator | changed: [testbed-node-0] => (item=infrastructure/haproxy.json: /operations/grafana/dashboards/infrastructure/haproxy.json, mode 0644, root:root, 410814 bytes)
2026-03-25 04:00:26.812739 | orchestrator | changed: [testbed-node-2] => (item=infrastructure/database.json: /operations/grafana/dashboards/infrastructure/database.json, mode 0644, root:root, 30898 bytes)
2026-03-25 04:00:30.369743 | orchestrator | changed: [testbed-node-1] => (item=infrastructure/database.json: /operations/grafana/dashboards/infrastructure/database.json, mode 0644, root:root, 30898 bytes)
2026-03-25 04:00:30.369855 | orchestrator | changed: [testbed-node-0] => (item=infrastructure/database.json: /operations/grafana/dashboards/infrastructure/database.json, mode 0644, root:root, 30898 bytes)
2026-03-25 04:00:30.369868 | orchestrator | changed: [testbed-node-2] => (item=infrastructure/node-rsrc-use.json: /operations/grafana/dashboards/infrastructure/node-rsrc-use.json, mode 0644, root:root, 15725 bytes)
2026-03-25 04:00:30.369892 | orchestrator | changed: [testbed-node-1] => (item=infrastructure/node-rsrc-use.json: /operations/grafana/dashboards/infrastructure/node-rsrc-use.json, mode 0644, root:root, 15725 bytes)
2026-03-25 04:00:30.370121 | orchestrator | changed: [testbed-node-0] => (item=infrastructure/node-rsrc-use.json: /operations/grafana/dashboards/infrastructure/node-rsrc-use.json, mode 0644, root:root, 15725 bytes)
2026-03-25 04:00:30.370143 | orchestrator | changed: [testbed-node-2] => (item=infrastructure/alertmanager-overview.json: /operations/grafana/dashboards/infrastructure/alertmanager-overview.json, mode 0644, root:root, 9645 bytes)
2026-03-25 04:00:30.370170 | orchestrator | changed: [testbed-node-1] => (item=infrastructure/alertmanager-overview.json: /operations/grafana/dashboards/infrastructure/alertmanager-overview.json, mode 0644, root:root, 9645 bytes)
2026-03-25 04:00:30.370178 | orchestrator | changed: [testbed-node-0] => (item=infrastructure/alertmanager-overview.json: /operations/grafana/dashboards/infrastructure/alertmanager-overview.json, mode 0644, root:root, 9645 bytes)
2026-03-25 04:00:30.370185 | orchestrator | changed: [testbed-node-2] => (item=infrastructure/opensearch.json: /operations/grafana/dashboards/infrastructure/opensearch.json, mode 0644, root:root, 65458 bytes)
2026-03-25 04:00:30.370192 | orchestrator | changed: [testbed-node-1] => (item=infrastructure/opensearch.json: /operations/grafana/dashboards/infrastructure/opensearch.json, mode 0644, root:root, 65458 bytes)
2026-03-25 04:00:30.370216 | orchestrator | changed: [testbed-node-0] => (item=infrastructure/opensearch.json: /operations/grafana/dashboards/infrastructure/opensearch.json, mode 0644, root:root, 65458 bytes)
2026-03-25 04:00:30.370222 | orchestrator | changed: [testbed-node-2] => (item=infrastructure/node_exporter_full.json: /operations/grafana/dashboards/infrastructure/node_exporter_full.json, mode 0644, root:root, 682774 bytes)
2026-03-25 04:00:30.370237 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1084168, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0,
'mtime': 1764530892.0, 'ctime': 1774403793.811043, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-25 04:00:34.147332 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1084168, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774403793.811043, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-25 04:00:34.147433 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1084263, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774403793.8169844, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-25 04:00:34.147445 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 
'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1084263, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774403793.8169844, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-25 04:00:34.147487 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1084263, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774403793.8169844, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-25 04:00:34.147507 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1084326, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774403793.8267417, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-25 04:00:34.147522 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 
'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1084326, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774403793.8267417, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-25 04:00:34.147545 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1084326, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774403793.8267417, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-25 04:00:34.147554 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1084243, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774403793.8141835, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-25 04:00:34.147560 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1084243, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774403793.8141835, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-25 04:00:34.147577 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1084243, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774403793.8141835, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-25 04:00:34.147583 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1084149, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774403793.799879, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-25 04:00:34.147589 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1084149, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774403793.799879, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-25 04:00:34.147602 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1084149, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774403793.799879, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-25 04:00:37.966687 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1084086, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774403793.786557, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-25 04:00:37.966782 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1084086, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774403793.786557, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-25 04:00:37.966831 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1084086, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774403793.786557, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-25 04:00:37.966842 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1084122, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774403793.7959416, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 
'isgid': False}}) 2026-03-25 04:00:37.966851 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1084122, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774403793.7959416, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-25 04:00:37.966860 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1084122, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774403793.7959416, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-25 04:00:37.966887 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1084075, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774403793.7850425, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 
'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-25 04:00:37.966899 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1084075, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774403793.7850425, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-25 04:00:37.966945 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1084075, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774403793.7850425, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-25 04:00:37.967055 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1084153, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774403793.8012736, 'gr_name': 'root', 
'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-25 04:00:37.967072 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1084153, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774403793.8012736, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-25 04:00:37.967089 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1084153, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774403793.8012736, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-25 04:00:37.967116 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1084286, 'dev': 117, 
'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774403793.8249722, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-25 04:00:41.670829 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1084286, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774403793.8249722, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-25 04:00:41.670934 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1084286, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774403793.8249722, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-25 04:00:41.671049 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': 
False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1084273, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774403793.818625, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-25 04:00:41.671065 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1084273, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774403793.818625, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-25 04:00:41.671072 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1084273, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774403793.818625, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-25 04:00:41.671079 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1084063, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774403793.7813728, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-25 04:00:41.671124 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1084063, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774403793.7813728, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-25 04:00:41.671133 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1084063, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774403793.7813728, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-25 04:00:41.671144 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1084070, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774403793.7821562, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-25 04:00:41.671151 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1084070, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774403793.7821562, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-25 04:00:41.671159 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1084070, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774403793.7821562, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-25 
04:00:41.671166 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1084232, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774403793.8130865, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-25 04:00:41.671186 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1084232, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774403793.8130865, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-25 04:02:24.337700 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1084232, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774403793.8130865, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 
'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-25 04:02:24.337858 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1084269, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774403793.8174198, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-25 04:02:24.337868 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1084269, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774403793.8174198, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-25 04:02:24.337873 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1084269, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 
1774403793.8174198, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-25 04:02:24.337877 | orchestrator | 2026-03-25 04:02:24.337883 | orchestrator | TASK [grafana : Check grafana containers] ************************************** 2026-03-25 04:02:24.337888 | orchestrator | Wednesday 25 March 2026 04:00:42 +0000 (0:00:35.281) 0:00:49.132 ******* 2026-03-25 04:02:24.337892 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-25 04:02:24.337931 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-25 04:02:24.337937 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-25 04:02:24.337941 | orchestrator | 2026-03-25 04:02:24.337945 | orchestrator | TASK [grafana : Creating grafana database] ************************************* 2026-03-25 04:02:24.337954 | orchestrator | Wednesday 25 March 2026 04:00:43 +0000 (0:00:01.033) 0:00:50.166 ******* 2026-03-25 04:02:24.337962 | orchestrator | changed: [testbed-node-0] 2026-03-25 04:02:24.337968 | orchestrator | 2026-03-25 04:02:24.337971 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ******** 2026-03-25 04:02:24.337975 | orchestrator | Wednesday 25 March 2026 04:00:46 +0000 (0:00:02.261) 0:00:52.428 ******* 2026-03-25 04:02:24.337986 | orchestrator | changed: [testbed-node-0] 2026-03-25 04:02:24.337990 | orchestrator | 2026-03-25 04:02:24.337993 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-03-25 04:02:24.337997 | orchestrator | Wednesday 25 March 2026 04:00:48 +0000 (0:00:02.260) 0:00:54.688 ******* 2026-03-25 04:02:24.338001 | orchestrator | 2026-03-25 04:02:24.338005 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-03-25 04:02:24.338008 | orchestrator | Wednesday 25 March 2026 04:00:48 +0000 (0:00:00.085) 0:00:54.774 ******* 2026-03-25 04:02:24.338050 | orchestrator | 
2026-03-25 04:02:24.338054 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-03-25 04:02:24.338058 | orchestrator | Wednesday 25 March 2026 04:00:48 +0000 (0:00:00.091) 0:00:54.866 ******* 2026-03-25 04:02:24.338062 | orchestrator | 2026-03-25 04:02:24.338066 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ******************** 2026-03-25 04:02:24.338070 | orchestrator | Wednesday 25 March 2026 04:00:48 +0000 (0:00:00.091) 0:00:54.957 ******* 2026-03-25 04:02:24.338073 | orchestrator | skipping: [testbed-node-1] 2026-03-25 04:02:24.338077 | orchestrator | skipping: [testbed-node-2] 2026-03-25 04:02:24.338081 | orchestrator | changed: [testbed-node-0] 2026-03-25 04:02:24.338085 | orchestrator | 2026-03-25 04:02:24.338088 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] ********* 2026-03-25 04:02:24.338092 | orchestrator | Wednesday 25 March 2026 04:00:56 +0000 (0:00:07.336) 0:01:02.294 ******* 2026-03-25 04:02:24.338096 | orchestrator | skipping: [testbed-node-1] 2026-03-25 04:02:24.338099 | orchestrator | skipping: [testbed-node-2] 2026-03-25 04:02:24.338108 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left). 2026-03-25 04:02:24.338114 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left). 2026-03-25 04:02:24.338117 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (10 retries left). 2026-03-25 04:02:24.338121 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (9 retries left). 
2026-03-25 04:02:24.338125 | orchestrator | ok: [testbed-node-0] 2026-03-25 04:02:24.338130 | orchestrator | 2026-03-25 04:02:24.338134 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] *************** 2026-03-25 04:02:24.338138 | orchestrator | Wednesday 25 March 2026 04:01:46 +0000 (0:00:50.254) 0:01:52.548 ******* 2026-03-25 04:02:24.338142 | orchestrator | skipping: [testbed-node-0] 2026-03-25 04:02:24.338146 | orchestrator | changed: [testbed-node-1] 2026-03-25 04:02:24.338149 | orchestrator | changed: [testbed-node-2] 2026-03-25 04:02:24.338153 | orchestrator | 2026-03-25 04:02:24.338157 | orchestrator | TASK [grafana : Wait for grafana application ready] **************************** 2026-03-25 04:02:24.338160 | orchestrator | Wednesday 25 March 2026 04:02:19 +0000 (0:00:32.983) 0:02:25.532 ******* 2026-03-25 04:02:24.338164 | orchestrator | ok: [testbed-node-0] 2026-03-25 04:02:24.338168 | orchestrator | 2026-03-25 04:02:24.338172 | orchestrator | TASK [grafana : Remove old grafana docker volume] ****************************** 2026-03-25 04:02:24.338175 | orchestrator | Wednesday 25 March 2026 04:02:21 +0000 (0:00:02.158) 0:02:27.691 ******* 2026-03-25 04:02:24.338179 | orchestrator | skipping: [testbed-node-0] 2026-03-25 04:02:24.338183 | orchestrator | skipping: [testbed-node-1] 2026-03-25 04:02:24.338187 | orchestrator | skipping: [testbed-node-2] 2026-03-25 04:02:24.338190 | orchestrator | 2026-03-25 04:02:24.338194 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************ 2026-03-25 04:02:24.338198 | orchestrator | Wednesday 25 March 2026 04:02:21 +0000 (0:00:00.321) 0:02:28.013 ******* 2026-03-25 04:02:24.338203 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': 
False}}})  2026-03-25 04:02:24.338212 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}}) 2026-03-25 04:02:25.098583 | orchestrator | 2026-03-25 04:02:25.098697 | orchestrator | TASK [grafana : Disable Getting Started panel] ********************************* 2026-03-25 04:02:25.098711 | orchestrator | Wednesday 25 March 2026 04:02:24 +0000 (0:00:02.557) 0:02:30.570 ******* 2026-03-25 04:02:25.098722 | orchestrator | skipping: [testbed-node-0] 2026-03-25 04:02:25.098743 | orchestrator | 2026-03-25 04:02:25.098752 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-25 04:02:25.098762 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-25 04:02:25.098772 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-25 04:02:25.098781 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-25 04:02:25.098788 | orchestrator | 2026-03-25 04:02:25.098796 | orchestrator | 2026-03-25 04:02:25.098867 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-25 04:02:25.098880 | orchestrator | Wednesday 25 March 2026 04:02:24 +0000 (0:00:00.325) 0:02:30.896 ******* 2026-03-25 04:02:25.098912 | orchestrator | =============================================================================== 2026-03-25 04:02:25.098920 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 50.25s 2026-03-25 04:02:25.098928 | orchestrator | grafana : Copying over custom 
dashboards ------------------------------- 35.28s 2026-03-25 04:02:25.098936 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 32.98s 2026-03-25 04:02:25.098944 | orchestrator | grafana : Restart first grafana container ------------------------------- 7.34s 2026-03-25 04:02:25.098953 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.56s 2026-03-25 04:02:25.098961 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.26s 2026-03-25 04:02:25.098968 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.26s 2026-03-25 04:02:25.098978 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.16s 2026-03-25 04:02:25.098986 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.61s 2026-03-25 04:02:25.098993 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.47s 2026-03-25 04:02:25.099000 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.26s 2026-03-25 04:02:25.099008 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.20s 2026-03-25 04:02:25.099016 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.18s 2026-03-25 04:02:25.099023 | orchestrator | grafana : Check grafana containers -------------------------------------- 1.03s 2026-03-25 04:02:25.099031 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 0.85s 2026-03-25 04:02:25.099040 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 0.78s 2026-03-25 04:02:25.099047 | orchestrator | grafana : Find custom grafana dashboards -------------------------------- 0.77s 2026-03-25 04:02:25.099054 | orchestrator | grafana : Find templated grafana dashboards 
----------------------------- 0.71s 2026-03-25 04:02:25.099062 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.65s 2026-03-25 04:02:25.099069 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.58s 2026-03-25 04:02:25.607437 | orchestrator | + sh -c /opt/configuration/scripts/deploy/510-clusterapi.sh 2026-03-25 04:02:25.621017 | orchestrator | + set -e 2026-03-25 04:02:25.621093 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-03-25 04:02:25.621101 | orchestrator | ++ export INTERACTIVE=false 2026-03-25 04:02:25.621109 | orchestrator | ++ INTERACTIVE=false 2026-03-25 04:02:25.621118 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-25 04:02:25.621132 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-03-25 04:02:25.621141 | orchestrator | + source /opt/manager-vars.sh 2026-03-25 04:02:25.621150 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-03-25 04:02:25.621159 | orchestrator | ++ NUMBER_OF_NODES=6 2026-03-25 04:02:25.621167 | orchestrator | ++ export CEPH_VERSION=reef 2026-03-25 04:02:25.621176 | orchestrator | ++ CEPH_VERSION=reef 2026-03-25 04:02:25.621184 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-03-25 04:02:25.621194 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-03-25 04:02:25.621204 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-03-25 04:02:25.621213 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-03-25 04:02:25.621223 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-03-25 04:02:25.621232 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-03-25 04:02:25.621240 | orchestrator | ++ export ARA=false 2026-03-25 04:02:25.621248 | orchestrator | ++ ARA=false 2026-03-25 04:02:25.621256 | orchestrator | ++ export DEPLOY_MODE=manager 2026-03-25 04:02:25.621267 | orchestrator | ++ DEPLOY_MODE=manager 2026-03-25 04:02:25.621278 | orchestrator | ++ export TEMPEST=false 2026-03-25 04:02:25.621287 | orchestrator | ++ TEMPEST=false 
2026-03-25 04:02:25.621295 | orchestrator | ++ export IS_ZUUL=true 2026-03-25 04:02:25.621303 | orchestrator | ++ IS_ZUUL=true 2026-03-25 04:02:25.621312 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.44 2026-03-25 04:02:25.621320 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.44 2026-03-25 04:02:25.621328 | orchestrator | ++ export EXTERNAL_API=false 2026-03-25 04:02:25.621337 | orchestrator | ++ EXTERNAL_API=false 2026-03-25 04:02:25.621344 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-03-25 04:02:25.621351 | orchestrator | ++ IMAGE_USER=ubuntu 2026-03-25 04:02:25.621388 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-03-25 04:02:25.621396 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-03-25 04:02:25.621404 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-03-25 04:02:25.621411 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-03-25 04:02:25.621419 | orchestrator | ++ semver 9.5.0 8.0.0 2026-03-25 04:02:25.686785 | orchestrator | + [[ 1 -ge 0 ]] 2026-03-25 04:02:25.686912 | orchestrator | + osism apply clusterapi 2026-03-25 04:02:28.138339 | orchestrator | 2026-03-25 04:02:28 | INFO  | Task e0fd24c2-d5a4-47c8-af39-2dde8d931143 (clusterapi) was prepared for execution. 2026-03-25 04:02:28.138460 | orchestrator | 2026-03-25 04:02:28 | INFO  | It takes a moment until task e0fd24c2-d5a4-47c8-af39-2dde8d931143 (clusterapi) has been started and output is visible here. 
2026-03-25 04:03:28.571181 | orchestrator | 2026-03-25 04:03:28.571274 | orchestrator | PLAY [Apply cert_manager role] ************************************************* 2026-03-25 04:03:28.571281 | orchestrator | 2026-03-25 04:03:28.571286 | orchestrator | TASK [Include cert_manager role] *********************************************** 2026-03-25 04:03:28.571291 | orchestrator | Wednesday 25 March 2026 04:02:33 +0000 (0:00:00.231) 0:00:00.231 ******* 2026-03-25 04:03:28.571296 | orchestrator | included: cert_manager for testbed-manager 2026-03-25 04:03:28.571300 | orchestrator | 2026-03-25 04:03:28.571304 | orchestrator | TASK [cert_manager : Deploy cert-manager crds] ********************************* 2026-03-25 04:03:28.571308 | orchestrator | Wednesday 25 March 2026 04:02:33 +0000 (0:00:00.284) 0:00:00.516 ******* 2026-03-25 04:03:28.571312 | orchestrator | changed: [testbed-manager] 2026-03-25 04:03:28.571320 | orchestrator | 2026-03-25 04:03:28.571326 | orchestrator | TASK [cert_manager : Deploy cert-manager] ************************************** 2026-03-25 04:03:28.571333 | orchestrator | Wednesday 25 March 2026 04:02:39 +0000 (0:00:05.677) 0:00:06.193 ******* 2026-03-25 04:03:28.571340 | orchestrator | changed: [testbed-manager] 2026-03-25 04:03:28.571347 | orchestrator | 2026-03-25 04:03:28.571353 | orchestrator | PLAY [Initialize or upgrade the CAPI management cluster] *********************** 2026-03-25 04:03:28.571359 | orchestrator | 2026-03-25 04:03:28.571379 | orchestrator | TASK [Get capi-system namespace phase] ***************************************** 2026-03-25 04:03:28.571409 | orchestrator | Wednesday 25 March 2026 04:03:07 +0000 (0:00:28.031) 0:00:34.225 ******* 2026-03-25 04:03:28.571417 | orchestrator | ok: [testbed-manager] 2026-03-25 04:03:28.571423 | orchestrator | 2026-03-25 04:03:28.571429 | orchestrator | TASK [Set capi-system-phase fact] ********************************************** 2026-03-25 04:03:28.571434 | orchestrator | Wednesday 
25 March 2026 04:03:08 +0000 (0:00:01.228) 0:00:35.454 ******* 2026-03-25 04:03:28.571437 | orchestrator | ok: [testbed-manager] 2026-03-25 04:03:28.571441 | orchestrator | 2026-03-25 04:03:28.571446 | orchestrator | TASK [Initialize the CAPI management cluster] ********************************** 2026-03-25 04:03:28.571450 | orchestrator | Wednesday 25 March 2026 04:03:08 +0000 (0:00:00.163) 0:00:35.617 ******* 2026-03-25 04:03:28.571453 | orchestrator | ok: [testbed-manager] 2026-03-25 04:03:28.571457 | orchestrator | 2026-03-25 04:03:28.571461 | orchestrator | TASK [Upgrade the CAPI management cluster] ************************************* 2026-03-25 04:03:28.571464 | orchestrator | Wednesday 25 March 2026 04:03:25 +0000 (0:00:16.808) 0:00:52.426 ******* 2026-03-25 04:03:28.571468 | orchestrator | skipping: [testbed-manager] 2026-03-25 04:03:28.571472 | orchestrator | 2026-03-25 04:03:28.571476 | orchestrator | TASK [Install openstack-resource-controller] *********************************** 2026-03-25 04:03:28.571479 | orchestrator | Wednesday 25 March 2026 04:03:25 +0000 (0:00:00.161) 0:00:52.588 ******* 2026-03-25 04:03:28.571483 | orchestrator | changed: [testbed-manager] 2026-03-25 04:03:28.571488 | orchestrator | 2026-03-25 04:03:28.571494 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-25 04:03:28.571502 | orchestrator | testbed-manager : ok=7  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-25 04:03:28.571508 | orchestrator | 2026-03-25 04:03:28.571514 | orchestrator | 2026-03-25 04:03:28.571521 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-25 04:03:28.571550 | orchestrator | Wednesday 25 March 2026 04:03:28 +0000 (0:00:02.239) 0:00:54.828 ******* 2026-03-25 04:03:28.571556 | orchestrator | =============================================================================== 2026-03-25 04:03:28.571561 | orchestrator | 
cert_manager : Deploy cert-manager ------------------------------------- 28.03s 2026-03-25 04:03:28.571568 | orchestrator | Initialize the CAPI management cluster --------------------------------- 16.81s 2026-03-25 04:03:28.571574 | orchestrator | cert_manager : Deploy cert-manager crds --------------------------------- 5.68s 2026-03-25 04:03:28.571581 | orchestrator | Install openstack-resource-controller ----------------------------------- 2.24s 2026-03-25 04:03:28.571588 | orchestrator | Get capi-system namespace phase ----------------------------------------- 1.23s 2026-03-25 04:03:28.571594 | orchestrator | Include cert_manager role ----------------------------------------------- 0.28s 2026-03-25 04:03:28.571600 | orchestrator | Set capi-system-phase fact ---------------------------------------------- 0.16s 2026-03-25 04:03:28.571604 | orchestrator | Upgrade the CAPI management cluster ------------------------------------- 0.16s 2026-03-25 04:03:29.024988 | orchestrator | + osism apply magnum 2026-03-25 04:03:31.638111 | orchestrator | 2026-03-25 04:03:31 | INFO  | Task 8f9ecf6b-7a80-448a-b689-47bbfaf9a2ef (magnum) was prepared for execution. 2026-03-25 04:03:31.638187 | orchestrator | 2026-03-25 04:03:31 | INFO  | It takes a moment until task 8f9ecf6b-7a80-448a-b689-47bbfaf9a2ef (magnum) has been started and output is visible here. 
2026-03-25 04:04:14.731598 | orchestrator | 2026-03-25 04:04:14.731786 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-25 04:04:14.731808 | orchestrator | 2026-03-25 04:04:14.731814 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-25 04:04:14.731820 | orchestrator | Wednesday 25 March 2026 04:03:36 +0000 (0:00:00.330) 0:00:00.330 ******* 2026-03-25 04:04:14.731825 | orchestrator | ok: [testbed-node-0] 2026-03-25 04:04:14.731832 | orchestrator | ok: [testbed-node-1] 2026-03-25 04:04:14.731836 | orchestrator | ok: [testbed-node-2] 2026-03-25 04:04:14.731842 | orchestrator | 2026-03-25 04:04:14.731847 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-25 04:04:14.731852 | orchestrator | Wednesday 25 March 2026 04:03:37 +0000 (0:00:00.413) 0:00:00.744 ******* 2026-03-25 04:04:14.731857 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True) 2026-03-25 04:04:14.731862 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True) 2026-03-25 04:04:14.731867 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True) 2026-03-25 04:04:14.731872 | orchestrator | 2026-03-25 04:04:14.731877 | orchestrator | PLAY [Apply role magnum] ******************************************************* 2026-03-25 04:04:14.731882 | orchestrator | 2026-03-25 04:04:14.731887 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-03-25 04:04:14.731892 | orchestrator | Wednesday 25 March 2026 04:03:37 +0000 (0:00:00.510) 0:00:01.255 ******* 2026-03-25 04:04:14.731897 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-25 04:04:14.731902 | orchestrator | 2026-03-25 04:04:14.731907 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************ 2026-03-25 
04:04:14.731912 | orchestrator | Wednesday 25 March 2026 04:03:38 +0000 (0:00:00.663) 0:00:01.918 ******* 2026-03-25 04:04:14.731918 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra)) 2026-03-25 04:04:14.731923 | orchestrator | 2026-03-25 04:04:14.731928 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] *********************** 2026-03-25 04:04:14.731932 | orchestrator | Wednesday 25 March 2026 04:03:41 +0000 (0:00:03.589) 0:00:05.509 ******* 2026-03-25 04:04:14.731938 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal) 2026-03-25 04:04:14.731943 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public) 2026-03-25 04:04:14.731948 | orchestrator | 2026-03-25 04:04:14.731953 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************ 2026-03-25 04:04:14.731991 | orchestrator | Wednesday 25 March 2026 04:03:48 +0000 (0:00:06.165) 0:00:11.674 ******* 2026-03-25 04:04:14.732006 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-25 04:04:14.732011 | orchestrator | 2026-03-25 04:04:14.732016 | orchestrator | TASK [service-ks-register : magnum | Creating users] *************************** 2026-03-25 04:04:14.732021 | orchestrator | Wednesday 25 March 2026 04:03:51 +0000 (0:00:03.478) 0:00:15.152 ******* 2026-03-25 04:04:14.732026 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-25 04:04:14.732031 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service) 2026-03-25 04:04:14.732036 | orchestrator | 2026-03-25 04:04:14.732041 | orchestrator | TASK [service-ks-register : magnum | Creating roles] *************************** 2026-03-25 04:04:14.732046 | orchestrator | Wednesday 25 March 2026 04:03:55 +0000 (0:00:03.839) 0:00:18.991 ******* 2026-03-25 04:04:14.732051 | orchestrator | ok: [testbed-node-0] => (item=admin) 
2026-03-25 04:04:14.732056 | orchestrator | 2026-03-25 04:04:14.732061 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] ********************** 2026-03-25 04:04:14.732066 | orchestrator | Wednesday 25 March 2026 04:03:58 +0000 (0:00:03.187) 0:00:22.179 ******* 2026-03-25 04:04:14.732070 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin) 2026-03-25 04:04:14.732075 | orchestrator | 2026-03-25 04:04:14.732080 | orchestrator | TASK [magnum : Creating Magnum trustee domain] ********************************* 2026-03-25 04:04:14.732085 | orchestrator | Wednesday 25 March 2026 04:04:02 +0000 (0:00:03.737) 0:00:25.917 ******* 2026-03-25 04:04:14.732090 | orchestrator | changed: [testbed-node-0] 2026-03-25 04:04:14.732094 | orchestrator | 2026-03-25 04:04:14.732099 | orchestrator | TASK [magnum : Creating Magnum trustee user] *********************************** 2026-03-25 04:04:14.732104 | orchestrator | Wednesday 25 March 2026 04:04:05 +0000 (0:00:03.159) 0:00:29.077 ******* 2026-03-25 04:04:14.732109 | orchestrator | changed: [testbed-node-0] 2026-03-25 04:04:14.732114 | orchestrator | 2026-03-25 04:04:14.732119 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ****************************** 2026-03-25 04:04:14.732124 | orchestrator | Wednesday 25 March 2026 04:04:09 +0000 (0:00:03.915) 0:00:32.992 ******* 2026-03-25 04:04:14.732129 | orchestrator | changed: [testbed-node-0] 2026-03-25 04:04:14.732133 | orchestrator | 2026-03-25 04:04:14.732139 | orchestrator | TASK [magnum : Ensuring config directories exist] ****************************** 2026-03-25 04:04:14.732144 | orchestrator | Wednesday 25 March 2026 04:04:12 +0000 (0:00:03.348) 0:00:36.340 ******* 2026-03-25 04:04:14.732169 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-25 04:04:14.732178 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-25 04:04:14.732193 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': 
{'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-25 04:04:14.732199 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-25 04:04:14.732206 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-25 04:04:14.732217 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-25 04:04:22.479842 | orchestrator | 2026-03-25 04:04:22.479926 | orchestrator | TASK [magnum : Check if policies shall be overwritten] ************************* 2026-03-25 04:04:22.479936 | orchestrator | Wednesday 25 March 2026 04:04:14 +0000 (0:00:01.902) 0:00:38.243 ******* 2026-03-25 04:04:22.479943 | orchestrator | skipping: [testbed-node-0] 2026-03-25 04:04:22.479951 | orchestrator | 2026-03-25 04:04:22.479958 | orchestrator | TASK [magnum : Set magnum policy file] ***************************************** 2026-03-25 04:04:22.479986 | orchestrator | Wednesday 25 March 2026 04:04:14 +0000 (0:00:00.146) 0:00:38.389 ******* 2026-03-25 04:04:22.479992 | orchestrator | skipping: [testbed-node-0] 2026-03-25 04:04:22.479996 | orchestrator | skipping: [testbed-node-1] 2026-03-25 04:04:22.480000 | orchestrator | skipping: [testbed-node-2] 2026-03-25 04:04:22.480004 | orchestrator | 2026-03-25 04:04:22.480009 | orchestrator | 
TASK [magnum : Check if kubeconfig file is supplied] *************************** 2026-03-25 04:04:22.480013 | orchestrator | Wednesday 25 March 2026 04:04:15 +0000 (0:00:00.375) 0:00:38.765 ******* 2026-03-25 04:04:22.480017 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-25 04:04:22.480021 | orchestrator | 2026-03-25 04:04:22.480025 | orchestrator | TASK [magnum : Copying over kubeconfig file] *********************************** 2026-03-25 04:04:22.480030 | orchestrator | Wednesday 25 March 2026 04:04:16 +0000 (0:00:00.955) 0:00:39.721 ******* 2026-03-25 04:04:22.480047 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-25 04:04:22.480054 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-25 04:04:22.480059 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-25 04:04:22.480076 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-25 04:04:22.480088 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-25 04:04:22.480096 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-25 04:04:22.480100 | orchestrator | 2026-03-25 04:04:22.480105 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ****************************** 2026-03-25 04:04:22.480109 
| orchestrator | Wednesday 25 March 2026 04:04:18 +0000 (0:00:02.440) 0:00:42.161 ******* 2026-03-25 04:04:22.480113 | orchestrator | ok: [testbed-node-0] 2026-03-25 04:04:22.480119 | orchestrator | ok: [testbed-node-1] 2026-03-25 04:04:22.480133 | orchestrator | ok: [testbed-node-2] 2026-03-25 04:04:22.480137 | orchestrator | 2026-03-25 04:04:22.480147 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-03-25 04:04:22.480151 | orchestrator | Wednesday 25 March 2026 04:04:19 +0000 (0:00:00.627) 0:00:42.788 ******* 2026-03-25 04:04:22.480156 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-25 04:04:22.480161 | orchestrator | 2026-03-25 04:04:22.480165 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] ********* 2026-03-25 04:04:22.480169 | orchestrator | Wednesday 25 March 2026 04:04:19 +0000 (0:00:00.630) 0:00:43.419 ******* 2026-03-25 04:04:22.480173 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-25 04:04:22.480183 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-25 04:04:23.513527 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-25 04:04:23.513642 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 
'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-25 04:04:23.513654 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-25 04:04:23.513687 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', 
'', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-25 04:04:23.513716 | orchestrator | 2026-03-25 04:04:23.513734 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2026-03-25 04:04:23.513743 | orchestrator | Wednesday 25 March 2026 04:04:22 +0000 (0:00:02.577) 0:00:45.997 ******* 2026-03-25 04:04:23.513770 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-25 04:04:23.513779 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-25 04:04:23.513787 | orchestrator | skipping: [testbed-node-0] 2026-03-25 04:04:23.513801 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-25 04:04:23.513809 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-25 04:04:23.513817 | orchestrator | skipping: [testbed-node-1] 2026-03-25 04:04:23.513824 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-25 04:04:23.513846 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-25 04:04:27.265990 | orchestrator | skipping: [testbed-node-2] 2026-03-25 04:04:27.266106 | orchestrator | 2026-03-25 
04:04:27.266115 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2026-03-25 04:04:27.266121 | orchestrator | Wednesday 25 March 2026 04:04:23 +0000 (0:00:01.025) 0:00:47.022 ******* 2026-03-25 04:04:27.266127 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-25 04:04:27.266146 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-25 04:04:27.266152 | 
orchestrator | skipping: [testbed-node-0] 2026-03-25 04:04:27.266156 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-25 04:04:27.266175 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-25 04:04:27.266179 | orchestrator | skipping: [testbed-node-1] 2026-03-25 04:04:27.266196 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 
'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-25 04:04:27.266204 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-25 04:04:27.266208 | orchestrator | skipping: [testbed-node-2] 2026-03-25 04:04:27.266212 | orchestrator | 2026-03-25 04:04:27.266216 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2026-03-25 04:04:27.266220 | orchestrator | Wednesday 25 March 2026 04:04:24 +0000 (0:00:01.042) 0:00:48.065 ******* 2026-03-25 04:04:27.266225 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-25 04:04:27.266233 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-25 04:04:27.266242 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 
'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-25 04:04:33.983297 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-25 04:04:33.983412 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 
'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-25 04:04:33.983428 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-25 04:04:33.983455 | orchestrator | 2026-03-25 04:04:33.983465 | orchestrator | TASK [magnum : Copying over magnum.conf] *************************************** 2026-03-25 04:04:33.983475 | orchestrator | Wednesday 25 March 2026 04:04:27 +0000 (0:00:02.718) 0:00:50.783 ******* 2026-03-25 04:04:33.983485 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-25 04:04:33.983514 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-25 04:04:33.983522 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-25 04:04:33.983537 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-25 04:04:33.983550 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-25 04:04:33.983557 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 
'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-25 04:04:33.983563 | orchestrator | 2026-03-25 04:04:33.983570 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2026-03-25 04:04:33.983576 | orchestrator | Wednesday 25 March 2026 04:04:33 +0000 (0:00:05.938) 0:00:56.722 ******* 2026-03-25 04:04:33.983590 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-25 04:04:36.042297 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-25 04:04:36.042387 | orchestrator | skipping: [testbed-node-0] 2026-03-25 04:04:36.042412 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-25 04:04:36.042438 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-25 04:04:36.042446 | orchestrator | skipping: [testbed-node-1] 2026-03-25 04:04:36.042453 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-25 04:04:36.042474 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 
'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-25 04:04:36.042481 | orchestrator | skipping: [testbed-node-2] 2026-03-25 04:04:36.042488 | orchestrator | 2026-03-25 04:04:36.042495 | orchestrator | TASK [magnum : Check magnum containers] **************************************** 2026-03-25 04:04:36.042503 | orchestrator | Wednesday 25 March 2026 04:04:33 +0000 (0:00:00.779) 0:00:57.501 ******* 2026-03-25 04:04:36.042515 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-25 04:04:36.042528 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 
'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-25 04:04:36.042535 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-25 04:04:36.042542 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-25 04:04:36.042556 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-25 04:05:30.385321 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': 
'30'}}}) 2026-03-25 04:05:30.385415 | orchestrator | 2026-03-25 04:05:30.385425 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-03-25 04:05:30.385434 | orchestrator | Wednesday 25 March 2026 04:04:36 +0000 (0:00:02.053) 0:00:59.554 ******* 2026-03-25 04:05:30.385441 | orchestrator | skipping: [testbed-node-0] 2026-03-25 04:05:30.385448 | orchestrator | skipping: [testbed-node-1] 2026-03-25 04:05:30.385456 | orchestrator | skipping: [testbed-node-2] 2026-03-25 04:05:30.385472 | orchestrator | 2026-03-25 04:05:30.385479 | orchestrator | TASK [magnum : Creating Magnum database] *************************************** 2026-03-25 04:05:30.385485 | orchestrator | Wednesday 25 March 2026 04:04:36 +0000 (0:00:00.610) 0:01:00.165 ******* 2026-03-25 04:05:30.385491 | orchestrator | changed: [testbed-node-0] 2026-03-25 04:05:30.385497 | orchestrator | 2026-03-25 04:05:30.385503 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] ********** 2026-03-25 04:05:30.385509 | orchestrator | Wednesday 25 March 2026 04:04:38 +0000 (0:00:02.181) 0:01:02.347 ******* 2026-03-25 04:05:30.385515 | orchestrator | changed: [testbed-node-0] 2026-03-25 04:05:30.385521 | orchestrator | 2026-03-25 04:05:30.385528 | orchestrator | TASK [magnum : Running Magnum bootstrap container] ***************************** 2026-03-25 04:05:30.385534 | orchestrator | Wednesday 25 March 2026 04:04:41 +0000 (0:00:02.262) 0:01:04.609 ******* 2026-03-25 04:05:30.385540 | orchestrator | changed: [testbed-node-0] 2026-03-25 04:05:30.385546 | orchestrator | 2026-03-25 04:05:30.385552 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-03-25 04:05:30.385559 | orchestrator | Wednesday 25 March 2026 04:04:57 +0000 (0:00:16.178) 0:01:20.788 ******* 2026-03-25 04:05:30.385565 | orchestrator | 2026-03-25 04:05:30.385572 | orchestrator | TASK [magnum : Flush handlers] 
************************************************* 2026-03-25 04:05:30.385610 | orchestrator | Wednesday 25 March 2026 04:04:57 +0000 (0:00:00.105) 0:01:20.894 ******* 2026-03-25 04:05:30.385617 | orchestrator | 2026-03-25 04:05:30.385623 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-03-25 04:05:30.385630 | orchestrator | Wednesday 25 March 2026 04:04:57 +0000 (0:00:00.084) 0:01:20.978 ******* 2026-03-25 04:05:30.385636 | orchestrator | 2026-03-25 04:05:30.385642 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************ 2026-03-25 04:05:30.385649 | orchestrator | Wednesday 25 March 2026 04:04:57 +0000 (0:00:00.084) 0:01:21.063 ******* 2026-03-25 04:05:30.385655 | orchestrator | changed: [testbed-node-0] 2026-03-25 04:05:30.385660 | orchestrator | changed: [testbed-node-2] 2026-03-25 04:05:30.385665 | orchestrator | changed: [testbed-node-1] 2026-03-25 04:05:30.385671 | orchestrator | 2026-03-25 04:05:30.385676 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ****************** 2026-03-25 04:05:30.385682 | orchestrator | Wednesday 25 March 2026 04:05:17 +0000 (0:00:20.062) 0:01:41.126 ******* 2026-03-25 04:05:30.385688 | orchestrator | changed: [testbed-node-0] 2026-03-25 04:05:30.385694 | orchestrator | changed: [testbed-node-2] 2026-03-25 04:05:30.385700 | orchestrator | changed: [testbed-node-1] 2026-03-25 04:05:30.385706 | orchestrator | 2026-03-25 04:05:30.385712 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-25 04:05:30.385720 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-25 04:05:30.385728 | orchestrator | testbed-node-1 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-25 04:05:30.385735 | orchestrator | testbed-node-2 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  
rescued=0 ignored=0 2026-03-25 04:05:30.385750 | orchestrator | 2026-03-25 04:05:30.385756 | orchestrator | 2026-03-25 04:05:30.385762 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-25 04:05:30.385769 | orchestrator | Wednesday 25 March 2026 04:05:29 +0000 (0:00:12.372) 0:01:53.498 ******* 2026-03-25 04:05:30.385775 | orchestrator | =============================================================================== 2026-03-25 04:05:30.385780 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 20.06s 2026-03-25 04:05:30.385786 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 16.18s 2026-03-25 04:05:30.385792 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 12.37s 2026-03-25 04:05:30.385798 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 6.17s 2026-03-25 04:05:30.385804 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 5.94s 2026-03-25 04:05:30.385811 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 3.92s 2026-03-25 04:05:30.385817 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 3.84s 2026-03-25 04:05:30.385840 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 3.74s 2026-03-25 04:05:30.385847 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 3.59s 2026-03-25 04:05:30.385852 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 3.48s 2026-03-25 04:05:30.385859 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 3.35s 2026-03-25 04:05:30.385865 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.19s 2026-03-25 04:05:30.385878 | 
orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.16s 2026-03-25 04:05:30.385886 | orchestrator | magnum : Copying over config.json files for services -------------------- 2.72s 2026-03-25 04:05:30.385893 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 2.58s 2026-03-25 04:05:30.385900 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 2.44s 2026-03-25 04:05:30.385908 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.26s 2026-03-25 04:05:30.385926 | orchestrator | magnum : Creating Magnum database --------------------------------------- 2.18s 2026-03-25 04:05:30.385933 | orchestrator | magnum : Check magnum containers ---------------------------------------- 2.05s 2026-03-25 04:05:30.385940 | orchestrator | magnum : Ensuring config directories exist ------------------------------ 1.90s 2026-03-25 04:05:31.105256 | orchestrator | ok: Runtime: 1:45:19.745917 2026-03-25 04:05:31.355313 | 2026-03-25 04:05:31.355463 | TASK [Deploy in a nutshell] 2026-03-25 04:05:31.891075 | orchestrator | skipping: Conditional result was False 2026-03-25 04:05:31.913756 | 2026-03-25 04:05:31.913912 | TASK [Bootstrap services] 2026-03-25 04:05:32.622943 | orchestrator | 2026-03-25 04:05:32.623079 | orchestrator | # BOOTSTRAP 2026-03-25 04:05:32.623094 | orchestrator | 2026-03-25 04:05:32.623101 | orchestrator | + set -e 2026-03-25 04:05:32.623108 | orchestrator | + echo 2026-03-25 04:05:32.623116 | orchestrator | + echo '# BOOTSTRAP' 2026-03-25 04:05:32.623126 | orchestrator | + echo 2026-03-25 04:05:32.623155 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh 2026-03-25 04:05:32.629157 | orchestrator | + set -e 2026-03-25 04:05:32.629250 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh 2026-03-25 04:05:35.391257 | orchestrator | 2026-03-25 04:05:35 | INFO  | It takes a 
moment until task c15ba239-b896-4266-aadf-6b106d2a9a59 (flavor-manager) has been started and output is visible here. 2026-03-25 04:05:42.919897 | orchestrator | 2026-03-25 04:05:38 | INFO  | Flavor SCS-1L-1 created 2026-03-25 04:05:42.920045 | orchestrator | 2026-03-25 04:05:38 | INFO  | Flavor SCS-1L-1-5 created 2026-03-25 04:05:42.920062 | orchestrator | 2026-03-25 04:05:39 | INFO  | Flavor SCS-1V-2 created 2026-03-25 04:05:42.920070 | orchestrator | 2026-03-25 04:05:39 | INFO  | Flavor SCS-1V-2-5 created 2026-03-25 04:05:42.920979 | orchestrator | 2026-03-25 04:05:39 | INFO  | Flavor SCS-1V-4 created 2026-03-25 04:05:42.921058 | orchestrator | 2026-03-25 04:05:39 | INFO  | Flavor SCS-1V-4-10 created 2026-03-25 04:05:42.921071 | orchestrator | 2026-03-25 04:05:39 | INFO  | Flavor SCS-1V-8 created 2026-03-25 04:05:42.921082 | orchestrator | 2026-03-25 04:05:39 | INFO  | Flavor SCS-1V-8-20 created 2026-03-25 04:05:42.921107 | orchestrator | 2026-03-25 04:05:39 | INFO  | Flavor SCS-2V-4 created 2026-03-25 04:05:42.921117 | orchestrator | 2026-03-25 04:05:40 | INFO  | Flavor SCS-2V-4-10 created 2026-03-25 04:05:42.921125 | orchestrator | 2026-03-25 04:05:40 | INFO  | Flavor SCS-2V-8 created 2026-03-25 04:05:42.921133 | orchestrator | 2026-03-25 04:05:40 | INFO  | Flavor SCS-2V-8-20 created 2026-03-25 04:05:42.921141 | orchestrator | 2026-03-25 04:05:40 | INFO  | Flavor SCS-2V-16 created 2026-03-25 04:05:42.921149 | orchestrator | 2026-03-25 04:05:40 | INFO  | Flavor SCS-2V-16-50 created 2026-03-25 04:05:42.921157 | orchestrator | 2026-03-25 04:05:40 | INFO  | Flavor SCS-4V-8 created 2026-03-25 04:05:42.921165 | orchestrator | 2026-03-25 04:05:40 | INFO  | Flavor SCS-4V-8-20 created 2026-03-25 04:05:42.921173 | orchestrator | 2026-03-25 04:05:40 | INFO  | Flavor SCS-4V-16 created 2026-03-25 04:05:42.921179 | orchestrator | 2026-03-25 04:05:41 | INFO  | Flavor SCS-4V-16-50 created 2026-03-25 04:05:42.921184 | orchestrator | 2026-03-25 04:05:41 | INFO  | Flavor 
SCS-4V-32 created 2026-03-25 04:05:42.921189 | orchestrator | 2026-03-25 04:05:41 | INFO  | Flavor SCS-4V-32-100 created 2026-03-25 04:05:42.921194 | orchestrator | 2026-03-25 04:05:41 | INFO  | Flavor SCS-8V-16 created 2026-03-25 04:05:42.921199 | orchestrator | 2026-03-25 04:05:41 | INFO  | Flavor SCS-8V-16-50 created 2026-03-25 04:05:42.921204 | orchestrator | 2026-03-25 04:05:41 | INFO  | Flavor SCS-8V-32 created 2026-03-25 04:05:42.921209 | orchestrator | 2026-03-25 04:05:41 | INFO  | Flavor SCS-8V-32-100 created 2026-03-25 04:05:42.921214 | orchestrator | 2026-03-25 04:05:42 | INFO  | Flavor SCS-16V-32 created 2026-03-25 04:05:42.921219 | orchestrator | 2026-03-25 04:05:42 | INFO  | Flavor SCS-16V-32-100 created 2026-03-25 04:05:42.921224 | orchestrator | 2026-03-25 04:05:42 | INFO  | Flavor SCS-2V-4-20s created 2026-03-25 04:05:42.921229 | orchestrator | 2026-03-25 04:05:42 | INFO  | Flavor SCS-4V-8-50s created 2026-03-25 04:05:42.921234 | orchestrator | 2026-03-25 04:05:42 | INFO  | Flavor SCS-8V-32-100s created 2026-03-25 04:05:45.834476 | orchestrator | 2026-03-25 04:05:45 | INFO  | Trying to run play bootstrap-basic in environment openstack 2026-03-25 04:05:55.930500 | orchestrator | 2026-03-25 04:05:55 | INFO  | Task ecb1c4e0-be0b-452a-8298-29fda9219abe (bootstrap-basic) was prepared for execution. 2026-03-25 04:05:55.930660 | orchestrator | 2026-03-25 04:05:55 | INFO  | It takes a moment until task ecb1c4e0-be0b-452a-8298-29fda9219abe (bootstrap-basic) has been started and output is visible here. 
2026-03-25 04:06:46.002838 | orchestrator | 2026-03-25 04:06:46.002960 | orchestrator | PLAY [Bootstrap basic OpenStack services] ************************************** 2026-03-25 04:06:46.002976 | orchestrator | 2026-03-25 04:06:46.002987 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-25 04:06:46.002998 | orchestrator | Wednesday 25 March 2026 04:06:01 +0000 (0:00:00.097) 0:00:00.097 ******* 2026-03-25 04:06:46.003008 | orchestrator | ok: [localhost] 2026-03-25 04:06:46.003018 | orchestrator | 2026-03-25 04:06:46.003028 | orchestrator | TASK [Get volume type LUKS] **************************************************** 2026-03-25 04:06:46.003038 | orchestrator | Wednesday 25 March 2026 04:06:03 +0000 (0:00:02.122) 0:00:02.219 ******* 2026-03-25 04:06:46.003048 | orchestrator | ok: [localhost] 2026-03-25 04:06:46.003058 | orchestrator | 2026-03-25 04:06:46.003068 | orchestrator | TASK [Create volume type LUKS] ************************************************* 2026-03-25 04:06:46.003078 | orchestrator | Wednesday 25 March 2026 04:06:11 +0000 (0:00:08.089) 0:00:10.309 ******* 2026-03-25 04:06:46.003088 | orchestrator | changed: [localhost] 2026-03-25 04:06:46.003098 | orchestrator | 2026-03-25 04:06:46.003107 | orchestrator | TASK [Create public network] *************************************************** 2026-03-25 04:06:46.003117 | orchestrator | Wednesday 25 March 2026 04:06:19 +0000 (0:00:07.263) 0:00:17.572 ******* 2026-03-25 04:06:46.003127 | orchestrator | changed: [localhost] 2026-03-25 04:06:46.003137 | orchestrator | 2026-03-25 04:06:46.003146 | orchestrator | TASK [Set public network to default] ******************************************* 2026-03-25 04:06:46.003156 | orchestrator | Wednesday 25 March 2026 04:06:24 +0000 (0:00:05.547) 0:00:23.120 ******* 2026-03-25 04:06:46.003171 | orchestrator | changed: [localhost] 2026-03-25 04:06:46.003181 | orchestrator | 2026-03-25 04:06:46.003191 | orchestrator 
| TASK [Create public subnet] **************************************************** 2026-03-25 04:06:46.003200 | orchestrator | Wednesday 25 March 2026 04:06:32 +0000 (0:00:07.533) 0:00:30.653 ******* 2026-03-25 04:06:46.003210 | orchestrator | changed: [localhost] 2026-03-25 04:06:46.003219 | orchestrator | 2026-03-25 04:06:46.003229 | orchestrator | TASK [Create default IPv4 subnet pool] ***************************************** 2026-03-25 04:06:46.003239 | orchestrator | Wednesday 25 March 2026 04:06:37 +0000 (0:00:04.990) 0:00:35.644 ******* 2026-03-25 04:06:46.003248 | orchestrator | changed: [localhost] 2026-03-25 04:06:46.003258 | orchestrator | 2026-03-25 04:06:46.003268 | orchestrator | TASK [Create manager role] ***************************************************** 2026-03-25 04:06:46.003287 | orchestrator | Wednesday 25 March 2026 04:06:41 +0000 (0:00:04.394) 0:00:40.039 ******* 2026-03-25 04:06:46.003298 | orchestrator | ok: [localhost] 2026-03-25 04:06:46.003307 | orchestrator | 2026-03-25 04:06:46.003317 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-25 04:06:46.003328 | orchestrator | localhost : ok=8  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-25 04:06:46.003340 | orchestrator | 2026-03-25 04:06:46.003352 | orchestrator | 2026-03-25 04:06:46.003363 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-25 04:06:46.003375 | orchestrator | Wednesday 25 March 2026 04:06:45 +0000 (0:00:03.996) 0:00:44.036 ******* 2026-03-25 04:06:46.003387 | orchestrator | =============================================================================== 2026-03-25 04:06:46.003400 | orchestrator | Get volume type LUKS ---------------------------------------------------- 8.09s 2026-03-25 04:06:46.003411 | orchestrator | Set public network to default ------------------------------------------- 7.53s 2026-03-25 04:06:46.003423 | 
orchestrator | Create volume type LUKS ------------------------------------------------- 7.26s 2026-03-25 04:06:46.003435 | orchestrator | Create public network --------------------------------------------------- 5.55s 2026-03-25 04:06:46.003468 | orchestrator | Create public subnet ---------------------------------------------------- 4.99s 2026-03-25 04:06:46.003481 | orchestrator | Create default IPv4 subnet pool ----------------------------------------- 4.39s 2026-03-25 04:06:46.003539 | orchestrator | Create manager role ----------------------------------------------------- 4.00s 2026-03-25 04:06:46.003554 | orchestrator | Gathering Facts --------------------------------------------------------- 2.12s 2026-03-25 04:06:48.743156 | orchestrator | 2026-03-25 04:06:48 | INFO  | It takes a moment until task 5903994f-058b-4907-8ad1-5991607b338d (image-manager) has been started and output is visible here. 2026-03-25 04:07:30.205544 | orchestrator | 2026-03-25 04:06:51 | INFO  | Processing image 'Cirros 0.6.2' 2026-03-25 04:07:30.205654 | orchestrator | 2026-03-25 04:06:51 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img: 302 2026-03-25 04:07:30.205667 | orchestrator | 2026-03-25 04:06:51 | INFO  | Importing image Cirros 0.6.2 2026-03-25 04:07:30.205674 | orchestrator | 2026-03-25 04:06:51 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2026-03-25 04:07:30.205683 | orchestrator | 2026-03-25 04:06:53 | INFO  | Waiting for image to leave queued state... 2026-03-25 04:07:30.205691 | orchestrator | 2026-03-25 04:06:56 | INFO  | Waiting for import to complete... 
2026-03-25 04:07:30.205698 | orchestrator | 2026-03-25 04:07:06 | INFO  | Import of 'Cirros 0.6.2' successfully completed, reloading images 2026-03-25 04:07:30.205706 | orchestrator | 2026-03-25 04:07:06 | INFO  | Checking parameters of 'Cirros 0.6.2' 2026-03-25 04:07:30.205713 | orchestrator | 2026-03-25 04:07:06 | INFO  | Setting internal_version = 0.6.2 2026-03-25 04:07:30.205720 | orchestrator | 2026-03-25 04:07:06 | INFO  | Setting image_original_user = cirros 2026-03-25 04:07:30.205728 | orchestrator | 2026-03-25 04:07:06 | INFO  | Adding tag os:cirros 2026-03-25 04:07:30.205734 | orchestrator | 2026-03-25 04:07:06 | INFO  | Setting property architecture: x86_64 2026-03-25 04:07:30.205741 | orchestrator | 2026-03-25 04:07:06 | INFO  | Setting property hw_disk_bus: scsi 2026-03-25 04:07:30.205747 | orchestrator | 2026-03-25 04:07:07 | INFO  | Setting property hw_rng_model: virtio 2026-03-25 04:07:30.205754 | orchestrator | 2026-03-25 04:07:07 | INFO  | Setting property hw_scsi_model: virtio-scsi 2026-03-25 04:07:30.205761 | orchestrator | 2026-03-25 04:07:07 | INFO  | Setting property hw_watchdog_action: reset 2026-03-25 04:07:30.205768 | orchestrator | 2026-03-25 04:07:07 | INFO  | Setting property hypervisor_type: qemu 2026-03-25 04:07:30.205776 | orchestrator | 2026-03-25 04:07:07 | INFO  | Setting property os_distro: cirros 2026-03-25 04:07:30.205782 | orchestrator | 2026-03-25 04:07:08 | INFO  | Setting property os_purpose: minimal 2026-03-25 04:07:30.205789 | orchestrator | 2026-03-25 04:07:08 | INFO  | Setting property replace_frequency: never 2026-03-25 04:07:30.205795 | orchestrator | 2026-03-25 04:07:08 | INFO  | Setting property uuid_validity: none 2026-03-25 04:07:30.205800 | orchestrator | 2026-03-25 04:07:08 | INFO  | Setting property provided_until: none 2026-03-25 04:07:30.205807 | orchestrator | 2026-03-25 04:07:09 | INFO  | Setting property image_description: Cirros 2026-03-25 04:07:30.205813 | orchestrator | 2026-03-25 04:07:09 | INFO  | 
Setting property image_name: Cirros 2026-03-25 04:07:30.205819 | orchestrator | 2026-03-25 04:07:09 | INFO  | Setting property internal_version: 0.6.2 2026-03-25 04:07:30.205825 | orchestrator | 2026-03-25 04:07:09 | INFO  | Setting property image_original_user: cirros 2026-03-25 04:07:30.205853 | orchestrator | 2026-03-25 04:07:10 | INFO  | Setting property os_version: 0.6.2 2026-03-25 04:07:30.205868 | orchestrator | 2026-03-25 04:07:10 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2026-03-25 04:07:30.205876 | orchestrator | 2026-03-25 04:07:10 | INFO  | Setting property image_build_date: 2023-05-30 2026-03-25 04:07:30.205881 | orchestrator | 2026-03-25 04:07:10 | INFO  | Checking status of 'Cirros 0.6.2' 2026-03-25 04:07:30.205887 | orchestrator | 2026-03-25 04:07:10 | INFO  | Checking visibility of 'Cirros 0.6.2' 2026-03-25 04:07:30.205893 | orchestrator | 2026-03-25 04:07:10 | INFO  | Setting visibility of 'Cirros 0.6.2' to 'public' 2026-03-25 04:07:30.205899 | orchestrator | 2026-03-25 04:07:10 | INFO  | Processing image 'Cirros 0.6.3' 2026-03-25 04:07:30.205909 | orchestrator | 2026-03-25 04:07:11 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img: 302 2026-03-25 04:07:30.205917 | orchestrator | 2026-03-25 04:07:11 | INFO  | Importing image Cirros 0.6.3 2026-03-25 04:07:30.205924 | orchestrator | 2026-03-25 04:07:11 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2026-03-25 04:07:30.205929 | orchestrator | 2026-03-25 04:07:11 | INFO  | Waiting for image to leave queued state... 2026-03-25 04:07:30.205933 | orchestrator | 2026-03-25 04:07:13 | INFO  | Waiting for import to complete... 
2026-03-25 04:07:30.205949 | orchestrator | 2026-03-25 04:07:23 | INFO  | Import of 'Cirros 0.6.3' successfully completed, reloading images
2026-03-25 04:07:30.205953 | orchestrator | 2026-03-25 04:07:24 | INFO  | Checking parameters of 'Cirros 0.6.3'
2026-03-25 04:07:30.205957 | orchestrator | 2026-03-25 04:07:24 | INFO  | Setting internal_version = 0.6.3
2026-03-25 04:07:30.205961 | orchestrator | 2026-03-25 04:07:24 | INFO  | Setting image_original_user = cirros
2026-03-25 04:07:30.205965 | orchestrator | 2026-03-25 04:07:24 | INFO  | Adding tag os:cirros
2026-03-25 04:07:30.205968 | orchestrator | 2026-03-25 04:07:24 | INFO  | Setting property architecture: x86_64
2026-03-25 04:07:30.205972 | orchestrator | 2026-03-25 04:07:24 | INFO  | Setting property hw_disk_bus: scsi
2026-03-25 04:07:30.205976 | orchestrator | 2026-03-25 04:07:25 | INFO  | Setting property hw_rng_model: virtio
2026-03-25 04:07:30.205979 | orchestrator | 2026-03-25 04:07:25 | INFO  | Setting property hw_scsi_model: virtio-scsi
2026-03-25 04:07:30.205984 | orchestrator | 2026-03-25 04:07:25 | INFO  | Setting property hw_watchdog_action: reset
2026-03-25 04:07:30.205989 | orchestrator | 2026-03-25 04:07:26 | INFO  | Setting property hypervisor_type: qemu
2026-03-25 04:07:30.205994 | orchestrator | 2026-03-25 04:07:26 | INFO  | Setting property os_distro: cirros
2026-03-25 04:07:30.205998 | orchestrator | 2026-03-25 04:07:26 | INFO  | Setting property os_purpose: minimal
2026-03-25 04:07:30.206003 | orchestrator | 2026-03-25 04:07:26 | INFO  | Setting property replace_frequency: never
2026-03-25 04:07:30.206007 | orchestrator | 2026-03-25 04:07:27 | INFO  | Setting property uuid_validity: none
2026-03-25 04:07:30.206012 | orchestrator | 2026-03-25 04:07:27 | INFO  | Setting property provided_until: none
2026-03-25 04:07:30.206066 | orchestrator | 2026-03-25 04:07:27 | INFO  | Setting property image_description: Cirros
2026-03-25 04:07:30.206074 | orchestrator | 2026-03-25 04:07:27 | INFO  | Setting property image_name: Cirros
2026-03-25 04:07:30.206081 | orchestrator | 2026-03-25 04:07:27 | INFO  | Setting property internal_version: 0.6.3
2026-03-25 04:07:30.206094 | orchestrator | 2026-03-25 04:07:28 | INFO  | Setting property image_original_user: cirros
2026-03-25 04:07:30.206102 | orchestrator | 2026-03-25 04:07:28 | INFO  | Setting property os_version: 0.6.3
2026-03-25 04:07:30.206120 | orchestrator | 2026-03-25 04:07:28 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img
2026-03-25 04:07:30.206134 | orchestrator | 2026-03-25 04:07:28 | INFO  | Setting property image_build_date: 2024-09-26
2026-03-25 04:07:30.206140 | orchestrator | 2026-03-25 04:07:29 | INFO  | Checking status of 'Cirros 0.6.3'
2026-03-25 04:07:30.206145 | orchestrator | 2026-03-25 04:07:29 | INFO  | Checking visibility of 'Cirros 0.6.3'
2026-03-25 04:07:30.206149 | orchestrator | 2026-03-25 04:07:29 | INFO  | Setting visibility of 'Cirros 0.6.3' to 'public'
2026-03-25 04:07:30.679759 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh
2026-03-25 04:07:33.173263 | orchestrator | 2026-03-25 04:07:33 | INFO  | date: 2026-03-25
2026-03-25 04:07:33.173422 | orchestrator | 2026-03-25 04:07:33 | INFO  | image: octavia-amphora-haproxy-2024.2.20260325.qcow2
2026-03-25 04:07:33.173651 | orchestrator | 2026-03-25 04:07:33 | INFO  | url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260325.qcow2
2026-03-25 04:07:33.173671 | orchestrator | 2026-03-25 04:07:33 | INFO  | checksum_url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260325.qcow2.CHECKSUM
2026-03-25 04:07:33.334139 | orchestrator | 2026-03-25 04:07:33 | INFO  | checksum: f2f8449674bd8e10efa26a8fa32510b5d22ff5071a700ee358cb258c34941998
2026-03-25 04:07:33.417566 | orchestrator | 2026-03-25 04:07:33 | INFO  | It takes a moment until task e6e62300-bb88-4d6e-80c5-b4abf37d230f (image-manager) has been started and output is visible here.
2026-03-25 04:08:45.739072 | orchestrator | 2026-03-25 04:07:35 | INFO  | Processing image 'OpenStack Octavia Amphora 2026-03-25'
2026-03-25 04:08:45.739199 | orchestrator | 2026-03-25 04:07:35 | INFO  | Tested URL https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260325.qcow2: 200
2026-03-25 04:08:45.739218 | orchestrator | 2026-03-25 04:07:35 | INFO  | Importing image OpenStack Octavia Amphora 2026-03-25
2026-03-25 04:08:45.739230 | orchestrator | 2026-03-25 04:07:35 | INFO  | Importing from URL https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260325.qcow2
2026-03-25 04:08:45.739242 | orchestrator | 2026-03-25 04:07:37 | INFO  | Waiting for image to leave queued state...
2026-03-25 04:08:45.739253 | orchestrator | 2026-03-25 04:07:39 | INFO  | Waiting for import to complete...
2026-03-25 04:08:45.739265 | orchestrator | 2026-03-25 04:07:49 | INFO  | Waiting for import to complete...
2026-03-25 04:08:45.739276 | orchestrator | 2026-03-25 04:07:59 | INFO  | Waiting for import to complete...
2026-03-25 04:08:45.739287 | orchestrator | 2026-03-25 04:08:09 | INFO  | Waiting for import to complete...
2026-03-25 04:08:45.739300 | orchestrator | 2026-03-25 04:08:19 | INFO  | Waiting for import to complete...
2026-03-25 04:08:45.739312 | orchestrator | 2026-03-25 04:08:29 | INFO  | Waiting for import to complete...
2026-03-25 04:08:45.739323 | orchestrator | 2026-03-25 04:08:40 | INFO  | Import of 'OpenStack Octavia Amphora 2026-03-25' successfully completed, reloading images
2026-03-25 04:08:45.739334 | orchestrator | 2026-03-25 04:08:40 | INFO  | Checking parameters of 'OpenStack Octavia Amphora 2026-03-25'
2026-03-25 04:08:45.739427 | orchestrator | 2026-03-25 04:08:40 | INFO  | Setting internal_version = 2026-03-25
2026-03-25 04:08:45.739441 | orchestrator | 2026-03-25 04:08:40 | INFO  | Setting image_original_user = ubuntu
2026-03-25 04:08:45.739451 | orchestrator | 2026-03-25 04:08:40 | INFO  | Adding tag amphora
2026-03-25 04:08:45.739462 | orchestrator | 2026-03-25 04:08:40 | INFO  | Adding tag os:ubuntu
2026-03-25 04:08:45.739471 | orchestrator | 2026-03-25 04:08:41 | INFO  | Setting property architecture: x86_64
2026-03-25 04:08:45.739480 | orchestrator | 2026-03-25 04:08:41 | INFO  | Setting property hw_disk_bus: scsi
2026-03-25 04:08:45.739489 | orchestrator | 2026-03-25 04:08:41 | INFO  | Setting property hw_rng_model: virtio
2026-03-25 04:08:45.739499 | orchestrator | 2026-03-25 04:08:41 | INFO  | Setting property hw_scsi_model: virtio-scsi
2026-03-25 04:08:45.739508 | orchestrator | 2026-03-25 04:08:42 | INFO  | Setting property hw_watchdog_action: reset
2026-03-25 04:08:45.739518 | orchestrator | 2026-03-25 04:08:42 | INFO  | Setting property hypervisor_type: qemu
2026-03-25 04:08:45.739527 | orchestrator | 2026-03-25 04:08:42 | INFO  | Setting property os_distro: ubuntu
2026-03-25 04:08:45.739537 | orchestrator | 2026-03-25 04:08:42 | INFO  | Setting property replace_frequency: quarterly
2026-03-25 04:08:45.739549 | orchestrator | 2026-03-25 04:08:42 | INFO  | Setting property uuid_validity: last-1
2026-03-25 04:08:45.739559 | orchestrator | 2026-03-25 04:08:43 | INFO  | Setting property provided_until: none
2026-03-25 04:08:45.739570 | orchestrator | 2026-03-25 04:08:43 | INFO  | Setting property os_purpose: network
2026-03-25 04:08:45.739600 | orchestrator | 2026-03-25 04:08:43 | INFO  | Setting property image_description: OpenStack Octavia Amphora
2026-03-25 04:08:45.739611 | orchestrator | 2026-03-25 04:08:43 | INFO  | Setting property image_name: OpenStack Octavia Amphora
2026-03-25 04:08:45.739622 | orchestrator | 2026-03-25 04:08:44 | INFO  | Setting property internal_version: 2026-03-25
2026-03-25 04:08:45.739632 | orchestrator | 2026-03-25 04:08:44 | INFO  | Setting property image_original_user: ubuntu
2026-03-25 04:08:45.739642 | orchestrator | 2026-03-25 04:08:44 | INFO  | Setting property os_version: 2026-03-25
2026-03-25 04:08:45.739653 | orchestrator | 2026-03-25 04:08:44 | INFO  | Setting property image_source: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260325.qcow2
2026-03-25 04:08:45.739663 | orchestrator | 2026-03-25 04:08:45 | INFO  | Setting property image_build_date: 2026-03-25
2026-03-25 04:08:45.739672 | orchestrator | 2026-03-25 04:08:45 | INFO  | Checking status of 'OpenStack Octavia Amphora 2026-03-25'
2026-03-25 04:08:45.739681 | orchestrator | 2026-03-25 04:08:45 | INFO  | Checking visibility of 'OpenStack Octavia Amphora 2026-03-25'
2026-03-25 04:08:45.739716 | orchestrator | 2026-03-25 04:08:45 | INFO  | Processing image 'Cirros 0.6.3' (removal candidate)
2026-03-25 04:08:45.739727 | orchestrator | 2026-03-25 04:08:45 | WARNING  | No image definition found for 'Cirros 0.6.3', image will be ignored
2026-03-25 04:08:45.739738 | orchestrator | 2026-03-25 04:08:45 | INFO  | Processing image 'Cirros 0.6.2' (removal candidate)
2026-03-25 04:08:45.739748 | orchestrator | 2026-03-25 04:08:45 | WARNING  | No image definition found for 'Cirros 0.6.2', image will be ignored
2026-03-25 04:08:46.586241 | orchestrator | ok: Runtime: 0:03:13.918132
2026-03-25 04:08:46.604227 |
2026-03-25 04:08:46.604402 | TASK [Run checks]
2026-03-25 04:08:47.392365 | orchestrator | + set -e
2026-03-25 04:08:47.392543 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-03-25 04:08:47.392554 | orchestrator | ++ export INTERACTIVE=false
2026-03-25 04:08:47.392563 | orchestrator | ++ INTERACTIVE=false
2026-03-25 04:08:47.392569 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-03-25 04:08:47.392573 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-03-25 04:08:47.392579 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2026-03-25 04:08:47.392709 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2026-03-25 04:08:47.396866 | orchestrator |
2026-03-25 04:08:47.396964 | orchestrator | # CHECK
2026-03-25 04:08:47.396972 | orchestrator |
2026-03-25 04:08:47.396978 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-03-25 04:08:47.396987 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-03-25 04:08:47.396992 | orchestrator | + echo
2026-03-25 04:08:47.396997 | orchestrator | + echo '# CHECK'
2026-03-25 04:08:47.397004 | orchestrator | + echo
2026-03-25 04:08:47.397015 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2026-03-25 04:08:47.397034 | orchestrator | ++ semver 9.5.0 5.0.0
2026-03-25 04:08:47.436675 | orchestrator |
2026-03-25 04:08:47.436752 | orchestrator | ## Containers @ testbed-manager
2026-03-25 04:08:47.436758 | orchestrator |
2026-03-25 04:08:47.436764 | orchestrator | + [[ 1 -eq -1 ]]
2026-03-25 04:08:47.436769 | orchestrator | + echo
2026-03-25 04:08:47.436774 | orchestrator | + echo '## Containers @ testbed-manager'
2026-03-25 04:08:47.436779 | orchestrator | + echo
2026-03-25 04:08:47.436783 | orchestrator | + osism container testbed-manager ps
2026-03-25 04:08:49.844536 | orchestrator | 2026-03-25 04:08:49 | INFO  | Creating empty known_hosts file: /share/known_hosts
2026-03-25 04:08:50.192655 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2026-03-25 04:08:50.192770 | orchestrator | 601aeaa90f2b registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_blackbox_exporter
2026-03-25 04:08:50.192788 | orchestrator | bce2641b3700 registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_alertmanager
2026-03-25 04:08:50.192795 | orchestrator | 4371350e0324 registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_cadvisor
2026-03-25 04:08:50.192805 | orchestrator | 8c9f8835b676 registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_node_exporter
2026-03-25 04:08:50.192814 | orchestrator | cda29cd6de85 registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_server
2026-03-25 04:08:50.192844 | orchestrator | 99b10c0e4efd registry.osism.tech/osism/cephclient:18.2.7 "/usr/bin/dumb-init …" 59 minutes ago Up 58 minutes cephclient
2026-03-25 04:08:50.192855 | orchestrator | 040fa59101a7 registry.osism.tech/kolla/release/cron:3.0.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours cron
2026-03-25 04:08:50.192865 | orchestrator | 95c020107be2 registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours kolla_toolbox
2026-03-25 04:08:50.192898 | orchestrator | 7f6af81f0738 registry.osism.tech/kolla/release/fluentd:5.0.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours fluentd
2026-03-25 04:08:50.192909 | orchestrator | b612710119e7 registry.osism.tech/osism/openstackclient:2024.2 "/usr/bin/dumb-init …" 2 hours ago Up 2 hours openstackclient
2026-03-25 04:08:50.192919 | orchestrator | 54e00f931037 phpmyadmin/phpmyadmin:5.2 "/docker-entrypoint.…" 2 hours ago Up 2 hours (healthy) 80/tcp phpmyadmin
2026-03-25 04:08:50.192927 | orchestrator | 683afad60c9b registry.osism.tech/osism/homer:v25.10.1 "/bin/sh /entrypoint…" 2 hours ago Up 2 hours (healthy) 8080/tcp homer
2026-03-25 04:08:50.192938 | orchestrator | 514e5f20e467 registry.osism.tech/osism/cgit:1.2.3 "httpd-foreground" 2 hours ago Up 2 hours 80/tcp cgit
2026-03-25 04:08:50.192948 | orchestrator | 0152475be867 registry.osism.tech/dockerhub/ubuntu/squid:6.1-23.10_beta "entrypoint.sh -f /e…" 2 hours ago Up 2 hours (healthy) 192.168.16.5:3128->3128/tcp squid
2026-03-25 04:08:50.192977 | orchestrator | 75e37bccce54 registry.osism.tech/osism/inventory-reconciler:0.20251130.0 "/sbin/tini -- /entr…" 2 hours ago Up 2 hours (healthy) manager-inventory_reconciler-1
2026-03-25 04:08:50.192996 | orchestrator | 8edf7e253564 registry.osism.tech/osism/kolla-ansible:0.20251130.0 "/entrypoint.sh osis…" 2 hours ago Up 2 hours (healthy) kolla-ansible
2026-03-25 04:08:50.193006 | orchestrator | f5f614fffa57 registry.osism.tech/osism/ceph-ansible:0.20251130.0 "/entrypoint.sh osis…" 2 hours ago Up 2 hours (healthy) ceph-ansible
2026-03-25 04:08:50.193016 | orchestrator | e29f7e1c4987 registry.osism.tech/osism/osism-kubernetes:0.20251130.0 "/entrypoint.sh osis…" 2 hours ago Up 2 hours (healthy) osism-kubernetes
2026-03-25 04:08:50.193026 | orchestrator | d0b3f76cba77 registry.osism.tech/osism/osism-ansible:0.20251130.0 "/entrypoint.sh osis…" 2 hours ago Up 2 hours (healthy) osism-ansible
2026-03-25 04:08:50.193032 | orchestrator | 79f9dc317fc3 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" 2 hours ago Up 2 hours (healthy) 8000/tcp manager-ara-server-1
2026-03-25 04:08:50.193037 | orchestrator | 79c76066a876 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" 2 hours ago Up 2 hours (healthy) 192.168.16.5:8000->8000/tcp manager-api-1
2026-03-25 04:08:50.193043 | orchestrator | e402adb8135d registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- sleep…" 2 hours ago Up 2 hours (healthy) osismclient
2026-03-25 04:08:50.193048 | orchestrator | b78a7000e8f2 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" 2 hours ago Up 2 hours (healthy) 6379/tcp manager-redis-1
2026-03-25 04:08:50.193060 | orchestrator | f61ddd4581d5 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" 2 hours ago Up 2 hours (healthy) manager-beat-1
2026-03-25 04:08:50.193066 | orchestrator | fd31a6892f94 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" 2 hours ago Up 2 hours (healthy) manager-flower-1
2026-03-25 04:08:50.193075 | orchestrator | 39b147d1b944 registry.osism.tech/osism/osism-frontend:0.20251130.1 "docker-entrypoint.s…" 2 hours ago Up 2 hours 192.168.16.5:3000->3000/tcp osism-frontend
2026-03-25 04:08:50.193088 | orchestrator | bf06d688f90d registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" 2 hours ago Up 2 hours (healthy) 3306/tcp manager-mariadb-1
2026-03-25 04:08:50.193100 | orchestrator | 9b84f1493822 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" 2 hours ago Up 2 hours (healthy) manager-listener-1
2026-03-25 04:08:50.193109 | orchestrator | 3a90547746de registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" 2 hours ago Up 2 hours (healthy) manager-openstack-1
2026-03-25 04:08:50.193136 | orchestrator | 629218225b22 registry.osism.tech/dockerhub/library/traefik:v3.5.0 "/entrypoint.sh trae…" 2 hours ago Up 2 hours (healthy) 192.168.16.5:80->80/tcp, 192.168.16.5:443->443/tcp, 192.168.16.5:8122->8080/tcp traefik
2026-03-25 04:08:50.676791 | orchestrator |
2026-03-25 04:08:50.676892 | orchestrator | ## Images @ testbed-manager
2026-03-25 04:08:50.676905 | orchestrator |
2026-03-25 04:08:50.676913 | orchestrator | + echo
2026-03-25 04:08:50.676922 | orchestrator | + echo '## Images @ testbed-manager'
2026-03-25 04:08:50.676932 | orchestrator | + echo
2026-03-25 04:08:50.676946 | orchestrator | + osism container testbed-manager images
2026-03-25 04:08:53.312343 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2026-03-25 04:08:53.312509 | orchestrator | registry.osism.tech/osism/openstackclient 2024.2 6f062d557b80 24 hours ago 239MB
2026-03-25 04:08:53.312524 | orchestrator | registry.osism.tech/dockerhub/library/redis 7.4.7-alpine e08bd8d5a677 8 weeks ago 41.4MB
2026-03-25 04:08:53.312533 | orchestrator | registry.osism.tech/osism/homer v25.10.1 ea34b371c716 3 months ago 11.5MB
2026-03-25 04:08:53.312541 | orchestrator | registry.osism.tech/osism/kolla-ansible 0.20251130.0 0f140ec71e5f 3 months ago 608MB
2026-03-25 04:08:53.312550 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 3 months ago 669MB
2026-03-25 04:08:53.312562 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 3 months ago 265MB
2026-03-25 04:08:53.312573 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 3 months ago 578MB
2026-03-25 04:08:53.312588 | orchestrator | registry.osism.tech/kolla/release/prometheus-blackbox-exporter 0.25.0.20251130 7bbb4f6f4831 3 months ago 308MB
2026-03-25 04:08:53.312601 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 3 months ago 357MB
2026-03-25 04:08:53.312632 | orchestrator | registry.osism.tech/kolla/release/prometheus-alertmanager 0.28.0.20251130 ba994ea4acda 3 months ago 404MB
2026-03-25 04:08:53.312641 | orchestrator | registry.osism.tech/kolla/release/prometheus-v2-server 2.55.1.20251130 56b43d5c716a 3 months ago 839MB
2026-03-25 04:08:53.312649 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 3 months ago 305MB
2026-03-25 04:08:53.312657 | orchestrator | registry.osism.tech/osism/inventory-reconciler 0.20251130.0 1bfc1dadeee1 3 months ago 330MB
2026-03-25 04:08:53.312665 | orchestrator | registry.osism.tech/osism/osism-ansible 0.20251130.0 42988b2d229c 3 months ago 613MB
2026-03-25 04:08:53.312673 | orchestrator | registry.osism.tech/osism/ceph-ansible 0.20251130.0 a212d8ca4a50 3 months ago 560MB
2026-03-25 04:08:53.312681 | orchestrator | registry.osism.tech/osism/osism-kubernetes 0.20251130.0 9beff03cb77b 3 months ago 1.23GB
2026-03-25 04:08:53.312689 | orchestrator | registry.osism.tech/osism/osism 0.20251130.1 95213af683ec 3 months ago 383MB
2026-03-25 04:08:53.312696 | orchestrator | registry.osism.tech/osism/osism-frontend 0.20251130.1 2cb6e7609620 3 months ago 238MB
2026-03-25 04:08:53.312705 | orchestrator | registry.osism.tech/dockerhub/library/mariadb 11.8.4 70745dd8f1d0 4 months ago 334MB
2026-03-25 04:08:53.312714 | orchestrator | phpmyadmin/phpmyadmin 5.2 e66b1f5a8c58 5 months ago 742MB
2026-03-25 04:08:53.312722 | orchestrator | registry.osism.tech/osism/ara-server 1.7.3 d1b687333f2f 7 months ago 275MB
2026-03-25 04:08:53.312730 | orchestrator | registry.osism.tech/dockerhub/library/traefik v3.5.0 11cc59587f6a 8 months ago 226MB
2026-03-25 04:08:53.312738 | orchestrator | registry.osism.tech/osism/cephclient 18.2.7 ae977aa79826 10 months ago 453MB
2026-03-25 04:08:53.312746 | orchestrator | registry.osism.tech/dockerhub/ubuntu/squid 6.1-23.10_beta 34b6bbbcf74b 21 months ago 146MB
2026-03-25 04:08:53.312754 | orchestrator | registry.osism.tech/osism/cgit 1.2.3 16e7285642b1 2 years ago 545MB
2026-03-25 04:08:53.728783 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2026-03-25 04:08:53.729033 | orchestrator | ++ semver 9.5.0 5.0.0
2026-03-25 04:08:53.782105 | orchestrator |
2026-03-25 04:08:53.782202 | orchestrator | ## Containers @ testbed-node-0
2026-03-25 04:08:53.782215 | orchestrator |
2026-03-25 04:08:53.782233 | orchestrator | + [[ 1 -eq -1 ]]
2026-03-25 04:08:53.782243 | orchestrator | + echo
2026-03-25 04:08:53.782251 | orchestrator | + echo '## Containers @ testbed-node-0'
2026-03-25 04:08:53.782257 | orchestrator | + echo
2026-03-25 04:08:53.782261 | orchestrator | + osism container testbed-node-0 ps
2026-03-25 04:08:56.707193 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2026-03-25 04:08:56.707321 | orchestrator | f6606888ebc7 registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130 "dumb-init --single-…" 3 minutes ago Up 3 minutes (healthy) magnum_conductor
2026-03-25 04:08:56.707344 | orchestrator | de3bdd81ecba registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130 "dumb-init --single-…" 3 minutes ago Up 3 minutes (healthy) magnum_api
2026-03-25 04:08:56.707349 | orchestrator | 2a1371b1da88 registry.osism.tech/kolla/release/grafana:12.3.0.20251130 "dumb-init --single-…" 8 minutes ago Up 8 minutes grafana
2026-03-25 04:08:56.707353 | orchestrator | 2edc40435e54 registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_elasticsearch_exporter
2026-03-25 04:08:56.707401 | orchestrator | b49695ffed99 registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_cadvisor
2026-03-25 04:08:56.707406 | orchestrator | a72e460477f1 registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_memcached_exporter
2026-03-25 04:08:56.707414 | orchestrator | 00be67d5546d registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_mysqld_exporter
2026-03-25 04:08:56.707418 | orchestrator | ffed5ecf7e48 registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_node_exporter
2026-03-25 04:08:56.707422 | orchestrator | 469a8ac221e3 registry.osism.tech/kolla/release/manila-share:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_share
2026-03-25 04:08:56.707426 | orchestrator | bf39d7f715d4 registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_scheduler
2026-03-25 04:08:56.707430 | orchestrator | 5cdfca169b38 registry.osism.tech/kolla/release/manila-data:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_data
2026-03-25 04:08:56.707433 | orchestrator | 86fd15de9739 registry.osism.tech/kolla/release/manila-api:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_api
2026-03-25 04:08:56.707437 | orchestrator | 8b2f451f593e registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_notifier
2026-03-25 04:08:56.707441 | orchestrator | e76108c223a1 registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_listener
2026-03-25 04:08:56.707445 | orchestrator | bae4598699d3 registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_evaluator
2026-03-25 04:08:56.707448 | orchestrator | 9da6122c6973 registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130 "dumb-init --single-…" 19 minutes ago Up 18 minutes (healthy) aodh_api
2026-03-25 04:08:56.707452 | orchestrator | 87f78b294952 registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes ceilometer_central
2026-03-25 04:08:56.707456 | orchestrator | d62b45c5794d registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) ceilometer_notification
2026-03-25 04:08:56.707460 | orchestrator | 3fe436f6b8f8 registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_worker
2026-03-25 04:08:56.707479 | orchestrator | 4f4bed0e9d4f registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_housekeeping
2026-03-25 04:08:56.707484 | orchestrator | 7c10cecad61c registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_health_manager
2026-03-25 04:08:56.707488 | orchestrator | 7f9976ddc856 registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes octavia_driver_agent
2026-03-25 04:08:56.707495 | orchestrator | deb7113a2a86 registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) octavia_api
2026-03-25 04:08:56.707499 | orchestrator | b715c3e72c48 registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_worker
2026-03-25 04:08:56.707503 | orchestrator | b8902e87b954 registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_mdns
2026-03-25 04:08:56.707510 | orchestrator | 75522d3fb723 registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_producer
2026-03-25 04:08:56.707513 | orchestrator | 5b2c6997e373 registry.osism.tech/kolla/release/designate-central:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_central
2026-03-25 04:08:56.707517 | orchestrator | 03a1925fc90b registry.osism.tech/kolla/release/designate-api:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_api
2026-03-25 04:08:56.707521 | orchestrator | 1e55c0a301c5 registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) designate_backend_bind9
2026-03-25 04:08:56.707525 | orchestrator | 630b305273b7 registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) barbican_worker
2026-03-25 04:08:56.707529 | orchestrator | 1c261ed86fff registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) barbican_keystone_listener
2026-03-25 04:08:56.707532 | orchestrator | fc8fb544d021 registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) barbican_api
2026-03-25 04:08:56.707536 | orchestrator | 61ef31e774ea registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) cinder_backup
2026-03-25 04:08:56.707540 | orchestrator | c19b0db32ad3 registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) cinder_volume
2026-03-25 04:08:56.707544 | orchestrator | 053f9738c5f8 registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) cinder_scheduler
2026-03-25 04:08:56.707547 | orchestrator | e3786995971f registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) cinder_api
2026-03-25 04:08:56.707551 | orchestrator | ae2455f9f5cc registry.osism.tech/kolla/release/glance-api:29.0.1.20251130 "dumb-init --single-…" 34 minutes ago Up 34 minutes (healthy) glance_api
2026-03-25 04:08:56.707555 | orchestrator | 9ce43ab59116 registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130 "dumb-init --single-…" 37 minutes ago Up 37 minutes (healthy) skyline_console
2026-03-25 04:08:56.707559 | orchestrator | 07d85ccf06fc registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130 "dumb-init --single-…" 37 minutes ago Up 37 minutes (healthy) skyline_apiserver
2026-03-25 04:08:56.707574 | orchestrator | d7f131fcde83 registry.osism.tech/kolla/release/horizon:25.1.2.20251130 "dumb-init --single-…" 39 minutes ago Up 39 minutes (healthy) horizon
2026-03-25 04:08:56.707582 | orchestrator | 29685dfba27a registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130 "dumb-init --single-…" 43 minutes ago Up 43 minutes (healthy) nova_novncproxy
2026-03-25 04:08:56.707586 | orchestrator | 8d50295a07c1 registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130 "dumb-init --single-…" 43 minutes ago Up 43 minutes (healthy) nova_conductor
2026-03-25 04:08:56.707592 | orchestrator | 48fa2dbb025c registry.osism.tech/kolla/release/nova-api:30.2.1.20251130 "dumb-init --single-…" 44 minutes ago Up 44 minutes (healthy) nova_api
2026-03-25 04:08:56.707596 | orchestrator | d765a57a0eb1 registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130 "dumb-init --single-…" 45 minutes ago Up 45 minutes (healthy) nova_scheduler
2026-03-25 04:08:56.707600 | orchestrator | f213b884a538 registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130 "dumb-init --single-…" 50 minutes ago Up 50 minutes (healthy) neutron_server
2026-03-25 04:08:56.707604 | orchestrator | d5dc50e13ae2 registry.osism.tech/kolla/release/placement-api:12.0.1.20251130 "dumb-init --single-…" 53 minutes ago Up 53 minutes (healthy) placement_api
2026-03-25 04:08:56.707607 | orchestrator | f078191344a5 registry.osism.tech/kolla/release/keystone:26.0.1.20251130 "dumb-init --single-…" 55 minutes ago Up 55 minutes (healthy) keystone
2026-03-25 04:08:56.707611 | orchestrator | 086a066e5b2f registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130 "dumb-init --single-…" 55 minutes ago Up 55 minutes (healthy) keystone_fernet
2026-03-25 04:08:56.707615 | orchestrator | 4dbfb7fff38e registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130 "dumb-init --single-…" 56 minutes ago Up 56 minutes (healthy) keystone_ssh
2026-03-25 04:08:56.707619 | orchestrator | 4c4ceb1b2f5e registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 58 minutes ago Up 58 minutes ceph-mgr-testbed-node-0
2026-03-25 04:08:56.707622 | orchestrator | f36aee8c20c9 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" About an hour ago Up About an hour ceph-crash-testbed-node-0
2026-03-25 04:08:56.707626 | orchestrator | 928ffe0e6efa registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" About an hour ago Up About an hour ceph-mon-testbed-node-0
2026-03-25 04:08:56.707630 | orchestrator | db77abba2a82 registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_northd
2026-03-25 04:08:56.707634 | orchestrator | 5f40c2812a9d registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_sb_db
2026-03-25 04:08:56.707637 | orchestrator | e075757c9330 registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_nb_db
2026-03-25 04:08:56.707641 | orchestrator | 5b46d5698f05 registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_controller
2026-03-25 04:08:56.707648 | orchestrator | af16722eb0d8 registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_vswitchd
2026-03-25 04:08:56.707651 | orchestrator | 1f9a498995d5 registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_db
2026-03-25 04:08:56.707659 | orchestrator | 0e4c654a06a8 registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) rabbitmq
2026-03-25 04:08:56.707665 | orchestrator | 457070c7c936 registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130 "dumb-init -- kolla_…" About an hour ago Up About an hour (healthy) mariadb
2026-03-25 04:08:56.707669 | orchestrator | bb2f39a45fdf registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis_sentinel
2026-03-25 04:08:56.707673 | orchestrator | 180734bda47c registry.osism.tech/kolla/release/redis:7.0.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis
2026-03-25 04:08:56.707677 | orchestrator | cf5261fc5612 registry.osism.tech/kolla/release/memcached:1.6.24.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) memcached
2026-03-25 04:08:56.707681 | orchestrator | 97cf6975f9fa registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) opensearch_dashboards
2026-03-25 04:08:56.707684 | orchestrator | bc3b03e48e38 registry.osism.tech/kolla/release/opensearch:2.19.4.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) opensearch
2026-03-25 04:08:56.707688 | orchestrator | 0b75e8eebb67 registry.osism.tech/kolla/release/keepalived:2.2.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours keepalived
2026-03-25 04:08:56.707692 | orchestrator | 2e9f94bcedf1 registry.osism.tech/kolla/release/proxysql:3.0.3.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) proxysql
2026-03-25 04:08:56.707696 | orchestrator | 3a1da082974f registry.osism.tech/kolla/release/haproxy:2.8.15.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) haproxy
2026-03-25 04:08:56.707699 | orchestrator | 153b787837c6 registry.osism.tech/kolla/release/cron:3.0.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours cron
2026-03-25 04:08:56.707703 | orchestrator | db08c599053f registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours kolla_toolbox
2026-03-25 04:08:56.707707 | orchestrator | 2597b300ea85 registry.osism.tech/kolla/release/fluentd:5.0.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours fluentd 2026-03-25 04:08:57.148429 | orchestrator | 2026-03-25 04:08:57.148524 | orchestrator | ## Images @ testbed-node-0 2026-03-25 04:08:57.148535 | orchestrator | 2026-03-25 04:08:57.148542 | orchestrator | + echo 2026-03-25 04:08:57.148549 | orchestrator | + echo '## Images @ testbed-node-0' 2026-03-25 04:08:57.148557 | orchestrator | + echo 2026-03-25 04:08:57.148564 | orchestrator | + osism container testbed-node-0 images 2026-03-25 04:09:00.005093 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2026-03-25 04:09:00.005326 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20251130 618df24dfbf4 3 months ago 322MB 2026-03-25 04:09:00.005345 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.24.20251130 8a9865997707 3 months ago 266MB 2026-03-25 04:09:00.005464 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.4.20251130 dc62f23331d2 3 months ago 1.56GB 2026-03-25 04:09:00.005478 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.8.20251130 94862d07fc5a 3 months ago 276MB 2026-03-25 04:09:00.005507 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.4.20251130 3b3613dd9b1a 3 months ago 1.53GB 2026-03-25 04:09:00.005514 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 3 months ago 669MB 2026-03-25 04:09:00.005523 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 3 months ago 265MB 2026-03-25 04:09:00.005531 | orchestrator | registry.osism.tech/kolla/release/grafana 12.3.0.20251130 6eb3b7b1dbf2 3 months ago 1.02GB 2026-03-25 04:09:00.005538 | orchestrator | registry.osism.tech/kolla/release/proxysql 3.0.3.20251130 2c7177938c0e 3 months ago 412MB 2026-03-25 04:09:00.005546 | orchestrator | registry.osism.tech/kolla/release/haproxy 
2.8.15.20251130 6d4c583df983 3 months ago 274MB 2026-03-25 04:09:00.005563 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 3 months ago 578MB 2026-03-25 04:09:00.005569 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20251130 5548a8ce5b5c 3 months ago 273MB 2026-03-25 04:09:00.005576 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20251130 62d0b016058f 3 months ago 273MB 2026-03-25 04:09:00.005581 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.15.20251130 77db67eebcc3 3 months ago 452MB 2026-03-25 04:09:00.005587 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.2.20251130 d7257ed845e9 3 months ago 1.15GB 2026-03-25 04:09:00.005592 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20251130 aedc672fb472 3 months ago 301MB 2026-03-25 04:09:00.005599 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20251130 7b077076926d 3 months ago 298MB 2026-03-25 04:09:00.005605 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 3 months ago 357MB 2026-03-25 04:09:00.005611 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20251130 bcaaf5d64345 3 months ago 292MB 2026-03-25 04:09:00.005617 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 3 months ago 305MB 2026-03-25 04:09:00.005623 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.3.20251130 3e6f3fe8823c 3 months ago 279MB 2026-03-25 04:09:00.005630 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.3.20251130 ad8bb4636454 3 months ago 279MB 2026-03-25 04:09:00.005638 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20251130 20317ff6dfb9 3 months ago 975MB 2026-03-25 04:09:00.005645 | orchestrator | 
registry.osism.tech/kolla/release/nova-novncproxy 30.2.1.20251130 99323056afa4 3 months ago 1.37GB 2026-03-25 04:09:00.005657 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.2.1.20251130 92609e648215 3 months ago 1.21GB 2026-03-25 04:09:00.005665 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.2.1.20251130 2d78e7fdfb9a 3 months ago 1.21GB 2026-03-25 04:09:00.005672 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.2.1.20251130 4c3c59730530 3 months ago 1.21GB 2026-03-25 04:09:00.005685 | orchestrator | registry.osism.tech/kolla/release/ceilometer-central 23.0.2.20251130 37cb6975d4a5 3 months ago 976MB 2026-03-25 04:09:00.005691 | orchestrator | registry.osism.tech/kolla/release/ceilometer-notification 23.0.2.20251130 bb2927b293dc 3 months ago 976MB 2026-03-25 04:09:00.005697 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20251130 a85fdbb4bbba 3 months ago 1.13GB 2026-03-25 04:09:00.005713 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20251130 a98ee1099aad 3 months ago 1.24GB 2026-03-25 04:09:00.005744 | orchestrator | registry.osism.tech/kolla/release/manila-share 19.1.1.20251130 df44f491f2c1 3 months ago 1.22GB 2026-03-25 04:09:00.005766 | orchestrator | registry.osism.tech/kolla/release/manila-data 19.1.1.20251130 cd8b74c8a47a 3 months ago 1.06GB 2026-03-25 04:09:00.005782 | orchestrator | registry.osism.tech/kolla/release/manila-api 19.1.1.20251130 654f9bd3c940 3 months ago 1.05GB 2026-03-25 04:09:00.005790 | orchestrator | registry.osism.tech/kolla/release/manila-scheduler 19.1.1.20251130 e0864fa03a78 3 months ago 1.05GB 2026-03-25 04:09:00.005796 | orchestrator | registry.osism.tech/kolla/release/aodh-listener 19.0.0.20251130 1e68c23a9d38 3 months ago 974MB 2026-03-25 04:09:00.005804 | orchestrator | registry.osism.tech/kolla/release/aodh-evaluator 19.0.0.20251130 1726a7592f93 3 months ago 974MB 2026-03-25 04:09:00.005812 | orchestrator | 
registry.osism.tech/kolla/release/aodh-notifier 19.0.0.20251130 abbd6e9f87e2 3 months ago 974MB 2026-03-25 04:09:00.005819 | orchestrator | registry.osism.tech/kolla/release/aodh-api 19.0.0.20251130 82a64f1d056d 3 months ago 973MB 2026-03-25 04:09:00.005826 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20251130 2cef5d51872b 3 months ago 991MB 2026-03-25 04:09:00.005833 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20251130 bfcd8631a126 3 months ago 991MB 2026-03-25 04:09:00.005841 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20251130 9195ddc3e4c5 3 months ago 990MB 2026-03-25 04:09:00.005848 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20251130 6c1543e94c06 3 months ago 1.09GB 2026-03-25 04:09:00.005857 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20251130 36669c355898 3 months ago 1.04GB 2026-03-25 04:09:00.005864 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20251130 e002cffc8eb8 3 months ago 1.04GB 2026-03-25 04:09:00.005871 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.2.20251130 059dc6d4a159 3 months ago 1.03GB 2026-03-25 04:09:00.005877 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.2.20251130 c9059accdc4a 3 months ago 1.03GB 2026-03-25 04:09:00.005883 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.2.20251130 9375641bed7a 3 months ago 1.05GB 2026-03-25 04:09:00.005889 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.2.20251130 708f50e37fa7 3 months ago 1.03GB 2026-03-25 04:09:00.005895 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.2.20251130 045f928baedc 3 months ago 1.05GB 2026-03-25 04:09:00.005900 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.2.2.20251130 fa71fe0a109e 3 months ago 1.16GB 2026-03-25 04:09:00.005906 | orchestrator | 
registry.osism.tech/kolla/release/glance-api 29.0.1.20251130 b1fcfbc49057 3 months ago 1.1GB 2026-03-25 04:09:00.005912 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20251130 00b6af03994a 3 months ago 983MB 2026-03-25 04:09:00.005918 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20251130 18bc80370e46 3 months ago 989MB 2026-03-25 04:09:00.005923 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20251130 eac4506bf51f 3 months ago 984MB 2026-03-25 04:09:00.005929 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20251130 ad5d5cd1392a 3 months ago 984MB 2026-03-25 04:09:00.005960 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20251130 4e19a1dc9c8a 3 months ago 989MB 2026-03-25 04:09:00.005966 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20251130 4ad9e0017d6e 3 months ago 984MB 2026-03-25 04:09:00.005982 | orchestrator | registry.osism.tech/kolla/release/skyline-console 5.0.1.20251130 20430a0acd38 3 months ago 1.05GB 2026-03-25 04:09:00.005988 | orchestrator | registry.osism.tech/kolla/release/skyline-apiserver 5.0.1.20251130 20bbe1600b66 3 months ago 990MB 2026-03-25 04:09:00.005994 | orchestrator | registry.osism.tech/kolla/release/cinder-volume 25.3.1.20251130 ab7ee3c06214 3 months ago 1.72GB 2026-03-25 04:09:00.006000 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.3.1.20251130 47d31cd2c25d 3 months ago 1.4GB 2026-03-25 04:09:00.006006 | orchestrator | registry.osism.tech/kolla/release/cinder-backup 25.3.1.20251130 c09074b62f18 3 months ago 1.41GB 2026-03-25 04:09:00.006113 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.3.1.20251130 ceaaac81e8af 3 months ago 1.4GB 2026-03-25 04:09:00.006123 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.3.20251130 e52b6499881a 3 months ago 840MB 2026-03-25 04:09:00.006129 | orchestrator | 
registry.osism.tech/kolla/release/ovn-controller 24.9.3.20251130 fcd09e53d925 3 months ago 840MB 2026-03-25 04:09:00.006135 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.3.20251130 2fcefdb5b030 3 months ago 840MB 2026-03-25 04:09:00.006142 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.3.20251130 948e5d22de86 3 months ago 840MB 2026-03-25 04:09:00.006148 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 10 months ago 1.27GB 2026-03-25 04:09:00.484251 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2026-03-25 04:09:00.484404 | orchestrator | ++ semver 9.5.0 5.0.0 2026-03-25 04:09:00.532384 | orchestrator | 2026-03-25 04:09:00.532478 | orchestrator | ## Containers @ testbed-node-1 2026-03-25 04:09:00.532492 | orchestrator | 2026-03-25 04:09:00.532497 | orchestrator | + [[ 1 -eq -1 ]] 2026-03-25 04:09:00.532502 | orchestrator | + echo 2026-03-25 04:09:00.532506 | orchestrator | + echo '## Containers @ testbed-node-1' 2026-03-25 04:09:00.532512 | orchestrator | + echo 2026-03-25 04:09:00.532516 | orchestrator | + osism container testbed-node-1 ps 2026-03-25 04:09:03.269443 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2026-03-25 04:09:03.269628 | orchestrator | f1b6633afb6d registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130 "dumb-init --single-…" 3 minutes ago Up 3 minutes (healthy) magnum_conductor 2026-03-25 04:09:03.269641 | orchestrator | 12dee839a2e6 registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130 "dumb-init --single-…" 3 minutes ago Up 3 minutes (healthy) magnum_api 2026-03-25 04:09:03.269647 | orchestrator | ec1c10775dc7 registry.osism.tech/kolla/release/grafana:12.3.0.20251130 "dumb-init --single-…" 6 minutes ago Up 6 minutes grafana 2026-03-25 04:09:03.269652 | orchestrator | 85283cd7869f registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130 "dumb-init --single-…" 
9 minutes ago Up 9 minutes prometheus_elasticsearch_exporter 2026-03-25 04:09:03.269659 | orchestrator | 2bfdcd5beed6 registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_cadvisor 2026-03-25 04:09:03.269665 | orchestrator | 4c0ca1ed1b37 registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_memcached_exporter 2026-03-25 04:09:03.269688 | orchestrator | 630bc28292f3 registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_mysqld_exporter 2026-03-25 04:09:03.269694 | orchestrator | 8ffc3c186ffe registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_node_exporter 2026-03-25 04:09:03.269699 | orchestrator | 1f8a02d4720d registry.osism.tech/kolla/release/manila-share:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_share 2026-03-25 04:09:03.269704 | orchestrator | 37d277f92728 registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_scheduler 2026-03-25 04:09:03.269709 | orchestrator | 2a68b944e8e1 registry.osism.tech/kolla/release/manila-data:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_data 2026-03-25 04:09:03.269714 | orchestrator | d090537066ee registry.osism.tech/kolla/release/manila-api:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_api 2026-03-25 04:09:03.269732 | orchestrator | e88adedd8285 registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_notifier 2026-03-25 04:09:03.269737 | orchestrator | 35351089c1cf registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130 
"dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_listener 2026-03-25 04:09:03.269742 | orchestrator | 815b6191f60a registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_evaluator 2026-03-25 04:09:03.269747 | orchestrator | 607b1826b614 registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) aodh_api 2026-03-25 04:09:03.269752 | orchestrator | 0ac83f934673 registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes ceilometer_central 2026-03-25 04:09:03.269757 | orchestrator | 02563ea2bad9 registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) ceilometer_notification 2026-03-25 04:09:03.269762 | orchestrator | b04ec45c451c registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_worker 2026-03-25 04:09:03.269782 | orchestrator | d00ea018167d registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_housekeeping 2026-03-25 04:09:03.269788 | orchestrator | 8f5b7a0141a6 registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_health_manager 2026-03-25 04:09:03.269793 | orchestrator | 0b1ebd9c0d81 registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes octavia_driver_agent 2026-03-25 04:09:03.269797 | orchestrator | 5231a9c2ed62 registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) octavia_api 2026-03-25 04:09:03.269807 | orchestrator | d44cbadd8302 
registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_worker 2026-03-25 04:09:03.269811 | orchestrator | 7c7a39b34b48 registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_mdns 2026-03-25 04:09:03.269816 | orchestrator | 7982646f6aab registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_producer 2026-03-25 04:09:03.269821 | orchestrator | 811db2f8fbbe registry.osism.tech/kolla/release/designate-central:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_central 2026-03-25 04:09:03.269826 | orchestrator | c641278b3e81 registry.osism.tech/kolla/release/designate-api:19.0.1.20251130 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) designate_api 2026-03-25 04:09:03.269831 | orchestrator | fbb129a9c152 registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) designate_backend_bind9 2026-03-25 04:09:03.269836 | orchestrator | 7d61805e0f06 registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130 "dumb-init --single-…" 30 minutes ago Up 29 minutes (healthy) barbican_worker 2026-03-25 04:09:03.269840 | orchestrator | 40b56430c972 registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) barbican_keystone_listener 2026-03-25 04:09:03.269845 | orchestrator | 781a21b3bca8 registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) barbican_api 2026-03-25 04:09:03.269850 | orchestrator | 064547e25ecb registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) cinder_backup 2026-03-25 
04:09:03.269857 | orchestrator | e0f90d1184f3 registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) cinder_volume 2026-03-25 04:09:03.269864 | orchestrator | a34c1c24921f registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) cinder_scheduler 2026-03-25 04:09:03.269871 | orchestrator | 4a7ca4ba9949 registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) cinder_api 2026-03-25 04:09:03.269883 | orchestrator | ae104f1b02ad registry.osism.tech/kolla/release/glance-api:29.0.1.20251130 "dumb-init --single-…" 34 minutes ago Up 34 minutes (healthy) glance_api 2026-03-25 04:09:03.269899 | orchestrator | eb10bd09ec95 registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130 "dumb-init --single-…" 37 minutes ago Up 37 minutes (healthy) skyline_console 2026-03-25 04:09:03.269907 | orchestrator | 5afa3e9d3dac registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130 "dumb-init --single-…" 37 minutes ago Up 37 minutes (healthy) skyline_apiserver 2026-03-25 04:09:03.269922 | orchestrator | 8fddae2777b0 registry.osism.tech/kolla/release/horizon:25.1.2.20251130 "dumb-init --single-…" 38 minutes ago Up 38 minutes (healthy) horizon 2026-03-25 04:09:03.269929 | orchestrator | c40dbffcfa4d registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130 "dumb-init --single-…" 43 minutes ago Up 43 minutes (healthy) nova_novncproxy 2026-03-25 04:09:03.269942 | orchestrator | 0cb12390cdc1 registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130 "dumb-init --single-…" 43 minutes ago Up 43 minutes (healthy) nova_conductor 2026-03-25 04:09:03.269951 | orchestrator | 67daee25e1a4 registry.osism.tech/kolla/release/nova-api:30.2.1.20251130 "dumb-init --single-…" 44 minutes ago Up 44 minutes (healthy) nova_api 2026-03-25 04:09:03.269958 | orchestrator | 
9a34aef3b5d8 registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130 "dumb-init --single-…" 45 minutes ago Up 45 minutes (healthy) nova_scheduler 2026-03-25 04:09:03.269965 | orchestrator | 7695691d5d61 registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130 "dumb-init --single-…" 49 minutes ago Up 49 minutes (healthy) neutron_server 2026-03-25 04:09:03.269973 | orchestrator | 04ec18f8f363 registry.osism.tech/kolla/release/placement-api:12.0.1.20251130 "dumb-init --single-…" 53 minutes ago Up 53 minutes (healthy) placement_api 2026-03-25 04:09:03.269981 | orchestrator | 5f28980f34e6 registry.osism.tech/kolla/release/keystone:26.0.1.20251130 "dumb-init --single-…" 55 minutes ago Up 55 minutes (healthy) keystone 2026-03-25 04:09:03.269988 | orchestrator | 29bab75a067f registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130 "dumb-init --single-…" 55 minutes ago Up 55 minutes (healthy) keystone_fernet 2026-03-25 04:09:03.269996 | orchestrator | b2f401a2b7f7 registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130 "dumb-init --single-…" 55 minutes ago Up 55 minutes (healthy) keystone_ssh 2026-03-25 04:09:03.270003 | orchestrator | c99d6a833796 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 57 minutes ago Up 57 minutes ceph-mgr-testbed-node-1 2026-03-25 04:09:03.270011 | orchestrator | 0c9d82f59947 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" About an hour ago Up About an hour ceph-crash-testbed-node-1 2026-03-25 04:09:03.270063 | orchestrator | cb4e3d9a68a8 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" About an hour ago Up About an hour ceph-mon-testbed-node-1 2026-03-25 04:09:03.270071 | orchestrator | 7af91e40148d registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_northd 2026-03-25 04:09:03.270079 | orchestrator | b572768a17e0 registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130 
"dumb-init --single-…" About an hour ago Up About an hour ovn_sb_db 2026-03-25 04:09:03.270087 | orchestrator | 0e4a99e7a98c registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_nb_db 2026-03-25 04:09:03.270096 | orchestrator | fd132d41197d registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_controller 2026-03-25 04:09:03.270105 | orchestrator | 02c053d01cae registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_vswitchd 2026-03-25 04:09:03.270113 | orchestrator | 85340d543c19 registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_db 2026-03-25 04:09:03.270121 | orchestrator | a0504dc30c48 registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) rabbitmq 2026-03-25 04:09:03.270144 | orchestrator | 885e8d95e457 registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130 "dumb-init -- kolla_…" About an hour ago Up About an hour (healthy) mariadb 2026-03-25 04:09:03.270154 | orchestrator | 65fcb71d5d25 registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis_sentinel 2026-03-25 04:09:03.270163 | orchestrator | 4419d672b00a registry.osism.tech/kolla/release/redis:7.0.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis 2026-03-25 04:09:03.270172 | orchestrator | 2377f237ca93 registry.osism.tech/kolla/release/memcached:1.6.24.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) memcached 2026-03-25 04:09:03.270181 | orchestrator | 93af89f38f08 registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130 
"dumb-init --single-…" About an hour ago Up About an hour (healthy) opensearch_dashboards 2026-03-25 04:09:03.270189 | orchestrator | 1e723141d5c5 registry.osism.tech/kolla/release/opensearch:2.19.4.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) opensearch 2026-03-25 04:09:03.270202 | orchestrator | f476ec467ec2 registry.osism.tech/kolla/release/keepalived:2.2.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours keepalived 2026-03-25 04:09:03.270212 | orchestrator | 79bd72f5e526 registry.osism.tech/kolla/release/proxysql:3.0.3.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) proxysql 2026-03-25 04:09:03.270220 | orchestrator | 67465871eeeb registry.osism.tech/kolla/release/haproxy:2.8.15.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) haproxy 2026-03-25 04:09:03.270229 | orchestrator | fcbe770b7be7 registry.osism.tech/kolla/release/cron:3.0.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours cron 2026-03-25 04:09:03.270241 | orchestrator | c95b4a61300c registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours kolla_toolbox 2026-03-25 04:09:03.270249 | orchestrator | 83e551b5ac78 registry.osism.tech/kolla/release/fluentd:5.0.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours fluentd 2026-03-25 04:09:03.713341 | orchestrator | 2026-03-25 04:09:03.713458 | orchestrator | ## Images @ testbed-node-1 2026-03-25 04:09:03.713466 | orchestrator | 2026-03-25 04:09:03.713471 | orchestrator | + echo 2026-03-25 04:09:03.713475 | orchestrator | + echo '## Images @ testbed-node-1' 2026-03-25 04:09:03.713480 | orchestrator | + echo 2026-03-25 04:09:03.713484 | orchestrator | + osism container testbed-node-1 images 2026-03-25 04:09:06.574478 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2026-03-25 04:09:06.574568 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20251130 618df24dfbf4 3 months ago 322MB 2026-03-25 04:09:06.574578 | orchestrator 
| registry.osism.tech/kolla/release/memcached 1.6.24.20251130 8a9865997707 3 months ago 266MB 2026-03-25 04:09:06.574587 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.4.20251130 dc62f23331d2 3 months ago 1.56GB 2026-03-25 04:09:06.574596 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.8.20251130 94862d07fc5a 3 months ago 276MB 2026-03-25 04:09:06.574604 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.4.20251130 3b3613dd9b1a 3 months ago 1.53GB 2026-03-25 04:09:06.574633 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 3 months ago 669MB 2026-03-25 04:09:06.574641 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 3 months ago 265MB 2026-03-25 04:09:06.574649 | orchestrator | registry.osism.tech/kolla/release/grafana 12.3.0.20251130 6eb3b7b1dbf2 3 months ago 1.02GB 2026-03-25 04:09:06.574656 | orchestrator | registry.osism.tech/kolla/release/proxysql 3.0.3.20251130 2c7177938c0e 3 months ago 412MB 2026-03-25 04:09:06.574663 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.8.15.20251130 6d4c583df983 3 months ago 274MB 2026-03-25 04:09:06.574671 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 3 months ago 578MB 2026-03-25 04:09:06.574678 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20251130 5548a8ce5b5c 3 months ago 273MB 2026-03-25 04:09:06.574685 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20251130 62d0b016058f 3 months ago 273MB 2026-03-25 04:09:06.574692 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.15.20251130 77db67eebcc3 3 months ago 452MB 2026-03-25 04:09:06.574700 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.2.20251130 d7257ed845e9 3 months ago 1.15GB 2026-03-25 04:09:06.574707 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20251130 
aedc672fb472 3 months ago 301MB
2026-03-25 04:09:06.574714 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20251130 7b077076926d 3 months ago 298MB
2026-03-25 04:09:06.574722 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 3 months ago 357MB
2026-03-25 04:09:06.574729 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20251130 bcaaf5d64345 3 months ago 292MB
2026-03-25 04:09:06.574736 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 3 months ago 305MB
2026-03-25 04:09:06.574743 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.3.20251130 3e6f3fe8823c 3 months ago 279MB
2026-03-25 04:09:06.574751 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.3.20251130 ad8bb4636454 3 months ago 279MB
2026-03-25 04:09:06.574758 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20251130 20317ff6dfb9 3 months ago 975MB
2026-03-25 04:09:06.574765 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.2.1.20251130 99323056afa4 3 months ago 1.37GB
2026-03-25 04:09:06.574773 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.2.1.20251130 92609e648215 3 months ago 1.21GB
2026-03-25 04:09:06.574780 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.2.1.20251130 2d78e7fdfb9a 3 months ago 1.21GB
2026-03-25 04:09:06.574787 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.2.1.20251130 4c3c59730530 3 months ago 1.21GB
2026-03-25 04:09:06.574795 | orchestrator | registry.osism.tech/kolla/release/ceilometer-central 23.0.2.20251130 37cb6975d4a5 3 months ago 976MB
2026-03-25 04:09:06.574802 | orchestrator | registry.osism.tech/kolla/release/ceilometer-notification 23.0.2.20251130 bb2927b293dc 3 months ago 976MB
2026-03-25 04:09:06.574810 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20251130 a85fdbb4bbba 3 months ago 1.13GB
2026-03-25 04:09:06.574817 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20251130 a98ee1099aad 3 months ago 1.24GB
2026-03-25 04:09:06.574840 | orchestrator | registry.osism.tech/kolla/release/manila-share 19.1.1.20251130 df44f491f2c1 3 months ago 1.22GB
2026-03-25 04:09:06.574854 | orchestrator | registry.osism.tech/kolla/release/manila-data 19.1.1.20251130 cd8b74c8a47a 3 months ago 1.06GB
2026-03-25 04:09:06.574861 | orchestrator | registry.osism.tech/kolla/release/manila-api 19.1.1.20251130 654f9bd3c940 3 months ago 1.05GB
2026-03-25 04:09:06.574868 | orchestrator | registry.osism.tech/kolla/release/manila-scheduler 19.1.1.20251130 e0864fa03a78 3 months ago 1.05GB
2026-03-25 04:09:06.574876 | orchestrator | registry.osism.tech/kolla/release/aodh-listener 19.0.0.20251130 1e68c23a9d38 3 months ago 974MB
2026-03-25 04:09:06.574883 | orchestrator | registry.osism.tech/kolla/release/aodh-evaluator 19.0.0.20251130 1726a7592f93 3 months ago 974MB
2026-03-25 04:09:06.574890 | orchestrator | registry.osism.tech/kolla/release/aodh-notifier 19.0.0.20251130 abbd6e9f87e2 3 months ago 974MB
2026-03-25 04:09:06.574897 | orchestrator | registry.osism.tech/kolla/release/aodh-api 19.0.0.20251130 82a64f1d056d 3 months ago 973MB
2026-03-25 04:09:06.574920 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20251130 2cef5d51872b 3 months ago 991MB
2026-03-25 04:09:06.574931 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20251130 bfcd8631a126 3 months ago 991MB
2026-03-25 04:09:06.574943 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20251130 9195ddc3e4c5 3 months ago 990MB
2026-03-25 04:09:06.574955 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20251130 6c1543e94c06 3 months ago 1.09GB
2026-03-25 04:09:06.574966 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20251130 36669c355898 3 months ago 1.04GB
2026-03-25 04:09:06.574977 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20251130 e002cffc8eb8 3 months ago 1.04GB
2026-03-25 04:09:06.574988 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.2.20251130 059dc6d4a159 3 months ago 1.03GB
2026-03-25 04:09:06.575000 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.2.20251130 c9059accdc4a 3 months ago 1.03GB
2026-03-25 04:09:06.575012 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.2.20251130 9375641bed7a 3 months ago 1.05GB
2026-03-25 04:09:06.575024 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.2.20251130 708f50e37fa7 3 months ago 1.03GB
2026-03-25 04:09:06.575035 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.2.20251130 045f928baedc 3 months ago 1.05GB
2026-03-25 04:09:06.575046 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.2.2.20251130 fa71fe0a109e 3 months ago 1.16GB
2026-03-25 04:09:06.575056 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20251130 b1fcfbc49057 3 months ago 1.1GB
2026-03-25 04:09:06.575067 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20251130 00b6af03994a 3 months ago 983MB
2026-03-25 04:09:06.575079 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20251130 18bc80370e46 3 months ago 989MB
2026-03-25 04:09:06.575091 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20251130 eac4506bf51f 3 months ago 984MB
2026-03-25 04:09:06.575104 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20251130 ad5d5cd1392a 3 months ago 984MB
2026-03-25 04:09:06.575116 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20251130 4e19a1dc9c8a 3 months ago 989MB
2026-03-25 04:09:06.575129 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20251130 4ad9e0017d6e 3 months ago 984MB
2026-03-25 04:09:06.575202 | orchestrator | registry.osism.tech/kolla/release/skyline-console 5.0.1.20251130 20430a0acd38 3 months ago 1.05GB
2026-03-25 04:09:06.575218 | orchestrator | registry.osism.tech/kolla/release/skyline-apiserver 5.0.1.20251130 20bbe1600b66 3 months ago 990MB
2026-03-25 04:09:06.575227 | orchestrator | registry.osism.tech/kolla/release/cinder-volume 25.3.1.20251130 ab7ee3c06214 3 months ago 1.72GB
2026-03-25 04:09:06.575236 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.3.1.20251130 47d31cd2c25d 3 months ago 1.4GB
2026-03-25 04:09:06.575243 | orchestrator | registry.osism.tech/kolla/release/cinder-backup 25.3.1.20251130 c09074b62f18 3 months ago 1.41GB
2026-03-25 04:09:06.575258 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.3.1.20251130 ceaaac81e8af 3 months ago 1.4GB
2026-03-25 04:09:06.575266 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.3.20251130 fcd09e53d925 3 months ago 840MB
2026-03-25 04:09:06.575273 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.3.20251130 e52b6499881a 3 months ago 840MB
2026-03-25 04:09:06.575280 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.3.20251130 2fcefdb5b030 3 months ago 840MB
2026-03-25 04:09:06.575287 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.3.20251130 948e5d22de86 3 months ago 840MB
2026-03-25 04:09:06.575295 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 10 months ago 1.27GB
2026-03-25 04:09:06.989816 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2026-03-25 04:09:06.989902 | orchestrator | ++ semver 9.5.0 5.0.0
2026-03-25 04:09:07.045287 | orchestrator |
2026-03-25 04:09:07.045428 | orchestrator | ## Containers @ testbed-node-2
2026-03-25 04:09:07.045442 | orchestrator |
2026-03-25 04:09:07.045450 | orchestrator | + [[ 1 -eq -1 ]]
2026-03-25 04:09:07.045456 | orchestrator | + echo
2026-03-25 04:09:07.045462 | orchestrator | + echo '## Containers @ testbed-node-2'
2026-03-25 04:09:07.045469 | orchestrator | + echo
2026-03-25 04:09:07.045475 | orchestrator | + osism container testbed-node-2 ps
2026-03-25 04:09:09.795943 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2026-03-25 04:09:09.796039 | orchestrator | c9439343e8e6 registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130 "dumb-init --single-…" 3 minutes ago Up 3 minutes (healthy) magnum_conductor
2026-03-25 04:09:09.796051 | orchestrator | 2892661a9807 registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130 "dumb-init --single-…" 3 minutes ago Up 3 minutes (healthy) magnum_api
2026-03-25 04:09:09.796058 | orchestrator | 9c525af11617 registry.osism.tech/kolla/release/grafana:12.3.0.20251130 "dumb-init --single-…" 6 minutes ago Up 6 minutes grafana
2026-03-25 04:09:09.796064 | orchestrator | dc155dd6efc2 registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_elasticsearch_exporter
2026-03-25 04:09:09.796074 | orchestrator | 9f01ecb44b5e registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_cadvisor
2026-03-25 04:09:09.796116 | orchestrator | 2ae16bb6b35d registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_memcached_exporter
2026-03-25 04:09:09.796121 | orchestrator | 865a1a357762 registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_mysqld_exporter
2026-03-25 04:09:09.796289 | orchestrator | ebaac11b1ded registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_node_exporter
2026-03-25 04:09:09.796300 | orchestrator | 0efed4544ee9 registry.osism.tech/kolla/release/manila-share:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_share
2026-03-25 04:09:09.796307 | orchestrator | 7e1dbcfcfd53 registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_scheduler
2026-03-25 04:09:09.796315 | orchestrator | d6ef8e1058b5 registry.osism.tech/kolla/release/manila-data:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_data
2026-03-25 04:09:09.796324 | orchestrator | f2dfcfae15f6 registry.osism.tech/kolla/release/manila-api:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_api
2026-03-25 04:09:09.796333 | orchestrator | f55d3dc9fb6f registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_notifier
2026-03-25 04:09:09.796339 | orchestrator | 6a09f1a44168 registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_listener
2026-03-25 04:09:09.796408 | orchestrator | 2a784d73e69f registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130 "dumb-init --single-…" 19 minutes ago Up 18 minutes (healthy) aodh_evaluator
2026-03-25 04:09:09.796418 | orchestrator | 7499b3054c01 registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) aodh_api
2026-03-25 04:09:09.796425 | orchestrator | 1ea453d20e07 registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes ceilometer_central
2026-03-25 04:09:09.796432 | orchestrator | fb8605c5ab43 registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) ceilometer_notification
2026-03-25 04:09:09.796438 | orchestrator | 40cc159081a8 registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_worker
2026-03-25 04:09:09.796444 | orchestrator | 40a3069eac4c registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_housekeeping
2026-03-25 04:09:09.796450 | orchestrator | 5ddf2767fa55 registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_health_manager
2026-03-25 04:09:09.796456 | orchestrator | 86ac68d9c4f3 registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes octavia_driver_agent
2026-03-25 04:09:09.796462 | orchestrator | af7e2485f78b registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) octavia_api
2026-03-25 04:09:09.796469 | orchestrator | c5a2d05f34c4 registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_worker
2026-03-25 04:09:09.796475 | orchestrator | d0dbfb7c819b registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_mdns
2026-03-25 04:09:09.796490 | orchestrator | ddd93ecd1419 registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_producer
2026-03-25 04:09:09.796496 | orchestrator | b9247350ee2f registry.osism.tech/kolla/release/designate-central:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_central
2026-03-25 04:09:09.796514 | orchestrator | fa4fd897942a registry.osism.tech/kolla/release/designate-api:19.0.1.20251130 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) designate_api
2026-03-25 04:09:09.796520 | orchestrator | c9cb65466400 registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) designate_backend_bind9
2026-03-25 04:09:09.796526 | orchestrator | e2411830ec17 registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) barbican_worker
2026-03-25 04:09:09.796532 | orchestrator | af1dfd041212 registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) barbican_keystone_listener
2026-03-25 04:09:09.796538 | orchestrator | 018e4b0abcaf registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) barbican_api
2026-03-25 04:09:09.796561 | orchestrator | 6ac5bf440ad2 registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) cinder_backup
2026-03-25 04:09:09.796571 | orchestrator | 74719e6a44af registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) cinder_volume
2026-03-25 04:09:09.796577 | orchestrator | f9545d50c207 registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) cinder_scheduler
2026-03-25 04:09:09.796585 | orchestrator | 716d204beb94 registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) cinder_api
2026-03-25 04:09:09.796594 | orchestrator | 3f6445f9ee07 registry.osism.tech/kolla/release/glance-api:29.0.1.20251130 "dumb-init --single-…" 34 minutes ago Up 34 minutes (healthy) glance_api
2026-03-25 04:09:09.796600 | orchestrator | f8b709e839f8 registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130 "dumb-init --single-…" 37 minutes ago Up 37 minutes (healthy) skyline_console
2026-03-25 04:09:09.796606 | orchestrator | b9dcbb0b2a37 registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130 "dumb-init --single-…" 37 minutes ago Up 37 minutes (healthy) skyline_apiserver
2026-03-25 04:09:09.796613 | orchestrator | 6b28bc44ec44 registry.osism.tech/kolla/release/horizon:25.1.2.20251130 "dumb-init --single-…" 39 minutes ago Up 38 minutes (healthy) horizon
2026-03-25 04:09:09.796618 | orchestrator | 98f7d4b02a8f registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130 "dumb-init --single-…" 43 minutes ago Up 43 minutes (healthy) nova_novncproxy
2026-03-25 04:09:09.796624 | orchestrator | a9175d2169bc registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130 "dumb-init --single-…" 43 minutes ago Up 43 minutes (healthy) nova_conductor
2026-03-25 04:09:09.796645 | orchestrator | ce2c68668c37 registry.osism.tech/kolla/release/nova-api:30.2.1.20251130 "dumb-init --single-…" 45 minutes ago Up 45 minutes (healthy) nova_api
2026-03-25 04:09:09.796651 | orchestrator | ba7bf5e42875 registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130 "dumb-init --single-…" 45 minutes ago Up 45 minutes (healthy) nova_scheduler
2026-03-25 04:09:09.796657 | orchestrator | 7470316ebcdd registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130 "dumb-init --single-…" 50 minutes ago Up 50 minutes (healthy) neutron_server
2026-03-25 04:09:09.796664 | orchestrator | c713e456ef72 registry.osism.tech/kolla/release/placement-api:12.0.1.20251130 "dumb-init --single-…" 53 minutes ago Up 53 minutes (healthy) placement_api
2026-03-25 04:09:09.796669 | orchestrator | 9cf1549a9d30 registry.osism.tech/kolla/release/keystone:26.0.1.20251130 "dumb-init --single-…" 55 minutes ago Up 55 minutes (healthy) keystone
2026-03-25 04:09:09.796679 | orchestrator | e3807593f6bb registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130 "dumb-init --single-…" 55 minutes ago Up 55 minutes (healthy) keystone_fernet
2026-03-25 04:09:09.796691 | orchestrator | f8bae428f3d4 registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130 "dumb-init --single-…" 55 minutes ago Up 55 minutes (healthy) keystone_ssh
2026-03-25 04:09:09.796697 | orchestrator | f17c87080cac registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 57 minutes ago Up 57 minutes ceph-mgr-testbed-node-2
2026-03-25 04:09:09.796703 | orchestrator | 47ebb1b86d68 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" About an hour ago Up About an hour ceph-crash-testbed-node-2
2026-03-25 04:09:09.796709 | orchestrator | 90e526f29e10 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" About an hour ago Up About an hour ceph-mon-testbed-node-2
2026-03-25 04:09:09.796715 | orchestrator | 7b710222ccdf registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_northd
2026-03-25 04:09:09.796721 | orchestrator | 4ac02c1877e0 registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_sb_db
2026-03-25 04:09:09.796736 | orchestrator | fa2881c18032 registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_nb_db
2026-03-25 04:09:09.796743 | orchestrator | bbd12dd2d5df registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_controller
2026-03-25 04:09:09.796749 | orchestrator | fd99d2d2b0d0 registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_vswitchd
2026-03-25 04:09:09.796755 | orchestrator | 9ad5459645a7 registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_db
2026-03-25 04:09:09.796761 | orchestrator | 95b51dc2ecb9 registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) rabbitmq
2026-03-25 04:09:09.796767 | orchestrator | 5cf57ba1ab5b registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130 "dumb-init -- kolla_…" About an hour ago Up About an hour (healthy) mariadb
2026-03-25 04:09:09.796781 | orchestrator | ac4428cbecad registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis_sentinel
2026-03-25 04:09:09.796789 | orchestrator | a0802add3155 registry.osism.tech/kolla/release/redis:7.0.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis
2026-03-25 04:09:09.796795 | orchestrator | b0a90c6d48a3 registry.osism.tech/kolla/release/memcached:1.6.24.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) memcached
2026-03-25 04:09:09.796801 | orchestrator | 0c7d9c8499b3 registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) opensearch_dashboards
2026-03-25 04:09:09.796807 | orchestrator | 310430cec697 registry.osism.tech/kolla/release/opensearch:2.19.4.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) opensearch
2026-03-25 04:09:09.796813 | orchestrator | f3ade35b0a8f registry.osism.tech/kolla/release/keepalived:2.2.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours keepalived
2026-03-25 04:09:09.796818 | orchestrator | 56141746e5c3 registry.osism.tech/kolla/release/proxysql:3.0.3.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) proxysql
2026-03-25 04:09:09.796824 | orchestrator | ae0e9d46881f registry.osism.tech/kolla/release/haproxy:2.8.15.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) haproxy
2026-03-25 04:09:09.796839 | orchestrator | 1bc4ae4be29f registry.osism.tech/kolla/release/cron:3.0.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours cron
2026-03-25 04:09:09.796846 | orchestrator | 3397312b78d4 registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours kolla_toolbox
2026-03-25 04:09:09.796852 | orchestrator | 402a5f2e37ee registry.osism.tech/kolla/release/fluentd:5.0.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours fluentd
2026-03-25 04:09:10.250741 | orchestrator |
2026-03-25 04:09:10.250811 | orchestrator | ## Images @ testbed-node-2
2026-03-25 04:09:10.250817 | orchestrator |
2026-03-25 04:09:10.250821 | orchestrator | + echo
2026-03-25 04:09:10.250826 | orchestrator | + echo '## Images @ testbed-node-2'
2026-03-25 04:09:10.250831 | orchestrator | + echo
2026-03-25 04:09:10.250836 | orchestrator | + osism container testbed-node-2 images
2026-03-25 04:09:13.107563 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2026-03-25 04:09:13.107635 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20251130 618df24dfbf4 3 months ago 322MB
2026-03-25 04:09:13.107654 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.24.20251130 8a9865997707 3 months ago 266MB
2026-03-25 04:09:13.107659 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.4.20251130 dc62f23331d2 3 months ago 1.56GB
2026-03-25 04:09:13.107663 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.8.20251130 94862d07fc5a 3 months ago 276MB
2026-03-25 04:09:13.107667 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.4.20251130 3b3613dd9b1a 3 months ago 1.53GB
2026-03-25 04:09:13.107671 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 3 months ago 669MB
2026-03-25 04:09:13.107675 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 3 months ago 265MB
2026-03-25 04:09:13.107693 | orchestrator | registry.osism.tech/kolla/release/grafana 12.3.0.20251130 6eb3b7b1dbf2 3 months ago 1.02GB
2026-03-25 04:09:13.107697 | orchestrator | registry.osism.tech/kolla/release/proxysql 3.0.3.20251130 2c7177938c0e 3 months ago 412MB
2026-03-25 04:09:13.107701 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.8.15.20251130 6d4c583df983 3 months ago 274MB
2026-03-25 04:09:13.107750 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 3 months ago 578MB
2026-03-25 04:09:13.107755 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20251130 5548a8ce5b5c 3 months ago 273MB
2026-03-25 04:09:13.107759 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20251130 62d0b016058f 3 months ago 273MB
2026-03-25 04:09:13.107763 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.15.20251130 77db67eebcc3 3 months ago 452MB
2026-03-25 04:09:13.107767 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.2.20251130 d7257ed845e9 3 months ago 1.15GB
2026-03-25 04:09:13.107770 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20251130 aedc672fb472 3 months ago 301MB
2026-03-25 04:09:13.107774 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20251130 7b077076926d 3 months ago 298MB
2026-03-25 04:09:13.107778 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 3 months ago 357MB
2026-03-25 04:09:13.107782 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20251130 bcaaf5d64345 3 months ago 292MB
2026-03-25 04:09:13.107785 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 3 months ago 305MB
2026-03-25 04:09:13.107789 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.3.20251130 3e6f3fe8823c 3 months ago 279MB
2026-03-25 04:09:13.107793 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20251130 20317ff6dfb9 3 months ago 975MB
2026-03-25 04:09:13.107797 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.3.20251130 ad8bb4636454 3 months ago 279MB
2026-03-25 04:09:13.107800 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.2.1.20251130 99323056afa4 3 months ago 1.37GB
2026-03-25 04:09:13.107805 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.2.1.20251130 92609e648215 3 months ago 1.21GB
2026-03-25 04:09:13.107809 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.2.1.20251130 2d78e7fdfb9a 3 months ago 1.21GB
2026-03-25 04:09:13.107812 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.2.1.20251130 4c3c59730530 3 months ago 1.21GB
2026-03-25 04:09:13.107816 | orchestrator | registry.osism.tech/kolla/release/ceilometer-central 23.0.2.20251130 37cb6975d4a5 3 months ago 976MB
2026-03-25 04:09:13.107820 | orchestrator | registry.osism.tech/kolla/release/ceilometer-notification 23.0.2.20251130 bb2927b293dc 3 months ago 976MB
2026-03-25 04:09:13.107823 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20251130 a85fdbb4bbba 3 months ago 1.13GB
2026-03-25 04:09:13.107827 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20251130 a98ee1099aad 3 months ago 1.24GB
2026-03-25 04:09:13.107842 | orchestrator | registry.osism.tech/kolla/release/manila-share 19.1.1.20251130 df44f491f2c1 3 months ago 1.22GB
2026-03-25 04:09:13.107846 | orchestrator | registry.osism.tech/kolla/release/manila-data 19.1.1.20251130 cd8b74c8a47a 3 months ago 1.06GB
2026-03-25 04:09:13.107850 | orchestrator | registry.osism.tech/kolla/release/manila-api 19.1.1.20251130 654f9bd3c940 3 months ago 1.05GB
2026-03-25 04:09:13.107857 | orchestrator | registry.osism.tech/kolla/release/manila-scheduler 19.1.1.20251130 e0864fa03a78 3 months ago 1.05GB
2026-03-25 04:09:13.107862 | orchestrator | registry.osism.tech/kolla/release/aodh-listener 19.0.0.20251130 1e68c23a9d38 3 months ago 974MB
2026-03-25 04:09:13.107866 | orchestrator | registry.osism.tech/kolla/release/aodh-evaluator 19.0.0.20251130 1726a7592f93 3 months ago 974MB
2026-03-25 04:09:13.107869 | orchestrator | registry.osism.tech/kolla/release/aodh-notifier 19.0.0.20251130 abbd6e9f87e2 3 months ago 974MB
2026-03-25 04:09:13.107873 | orchestrator | registry.osism.tech/kolla/release/aodh-api 19.0.0.20251130 82a64f1d056d 3 months ago 973MB
2026-03-25 04:09:13.107877 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20251130 2cef5d51872b 3 months ago 991MB
2026-03-25 04:09:13.107881 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20251130 bfcd8631a126 3 months ago 991MB
2026-03-25 04:09:13.107885 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20251130 9195ddc3e4c5 3 months ago 990MB
2026-03-25 04:09:13.107888 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20251130 6c1543e94c06 3 months ago 1.09GB
2026-03-25 04:09:13.107897 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20251130 36669c355898 3 months ago 1.04GB
2026-03-25 04:09:13.107901 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20251130 e002cffc8eb8 3 months ago 1.04GB
2026-03-25 04:09:13.107905 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.2.20251130 059dc6d4a159 3 months ago 1.03GB
2026-03-25 04:09:13.107908 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.2.20251130 c9059accdc4a 3 months ago 1.03GB
2026-03-25 04:09:13.107912 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.2.20251130 9375641bed7a 3 months ago 1.05GB
2026-03-25 04:09:13.107916 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.2.20251130 708f50e37fa7 3 months ago 1.03GB
2026-03-25 04:09:13.107920 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.2.20251130 045f928baedc 3 months ago 1.05GB
2026-03-25 04:09:13.107923 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.2.2.20251130 fa71fe0a109e 3 months ago 1.16GB
2026-03-25 04:09:13.107927 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20251130 b1fcfbc49057 3 months ago 1.1GB
2026-03-25 04:09:13.107931 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20251130 00b6af03994a 3 months ago 983MB
2026-03-25 04:09:13.107935 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20251130 18bc80370e46 3 months ago 989MB
2026-03-25 04:09:13.107938 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20251130 eac4506bf51f 3 months ago 984MB
2026-03-25 04:09:13.107942 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20251130 ad5d5cd1392a 3 months ago 984MB
2026-03-25 04:09:13.107946 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20251130 4e19a1dc9c8a 3 months ago 989MB
2026-03-25 04:09:13.107950 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20251130 4ad9e0017d6e 3 months ago 984MB
2026-03-25 04:09:13.107954 | orchestrator | registry.osism.tech/kolla/release/skyline-console 5.0.1.20251130 20430a0acd38 3 months ago 1.05GB
2026-03-25 04:09:13.107957 | orchestrator | registry.osism.tech/kolla/release/skyline-apiserver 5.0.1.20251130 20bbe1600b66 3 months ago 990MB
2026-03-25 04:09:13.107964 | orchestrator | registry.osism.tech/kolla/release/cinder-volume 25.3.1.20251130 ab7ee3c06214 3 months ago 1.72GB
2026-03-25 04:09:13.107968 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.3.1.20251130 47d31cd2c25d 3 months ago 1.4GB
2026-03-25 04:09:13.107971 | orchestrator | registry.osism.tech/kolla/release/cinder-backup 25.3.1.20251130 c09074b62f18 3 months ago 1.41GB
2026-03-25 04:09:13.107980 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.3.1.20251130 ceaaac81e8af 3 months ago 1.4GB
2026-03-25 04:09:13.107986 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.3.20251130 e52b6499881a 3 months ago 840MB
2026-03-25 04:09:13.107992 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.3.20251130 fcd09e53d925 3 months ago 840MB
2026-03-25 04:09:13.108002 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.3.20251130 2fcefdb5b030 3 months ago 840MB
2026-03-25 04:09:13.108011 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.3.20251130 948e5d22de86 3 months ago 840MB
2026-03-25 04:09:13.108017 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 10 months ago 1.27GB
2026-03-25 04:09:13.566921 | orchestrator | + sh -c /opt/configuration/scripts/check-services.sh
2026-03-25 04:09:13.575812 | orchestrator | + set -e
2026-03-25 04:09:13.575924 | orchestrator | + source /opt/manager-vars.sh
2026-03-25 04:09:13.575948 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-03-25 04:09:13.576503 | orchestrator | ++ NUMBER_OF_NODES=6
2026-03-25 04:09:13.576568 | orchestrator | ++ export CEPH_VERSION=reef
2026-03-25 04:09:13.576580 | orchestrator | ++ CEPH_VERSION=reef
2026-03-25 04:09:13.576592 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-03-25 04:09:13.576604 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-03-25 04:09:13.576615 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-03-25 04:09:13.576626 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-03-25 04:09:13.576637 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-03-25 04:09:13.576647 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-03-25 04:09:13.576658 | orchestrator | ++ export ARA=false
2026-03-25 04:09:13.576670 | orchestrator | ++ ARA=false
2026-03-25 04:09:13.576677 | orchestrator | ++ export DEPLOY_MODE=manager
2026-03-25 04:09:13.576684 | orchestrator | ++ DEPLOY_MODE=manager
2026-03-25 04:09:13.576691 | orchestrator | ++ export TEMPEST=false
2026-03-25 04:09:13.576697 | orchestrator | ++ TEMPEST=false
2026-03-25 04:09:13.576704 | orchestrator | ++ export IS_ZUUL=true
2026-03-25 04:09:13.576714 | orchestrator | ++ IS_ZUUL=true
2026-03-25 04:09:13.576724 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.44
2026-03-25 04:09:13.576760 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.44
2026-03-25 04:09:13.576784 | orchestrator | ++ export EXTERNAL_API=false
2026-03-25 04:09:13.576795 | orchestrator | ++ EXTERNAL_API=false
2026-03-25 04:09:13.576805 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-03-25 04:09:13.576814 | orchestrator | ++ IMAGE_USER=ubuntu
2026-03-25 04:09:13.576825 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-03-25 04:09:13.576836 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-03-25 04:09:13.576846 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-03-25 04:09:13.576856 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-03-25 04:09:13.576867 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2026-03-25 04:09:13.576877 | orchestrator | + sh -c /opt/configuration/scripts/check/100-ceph-with-ansible.sh
2026-03-25 04:09:13.584864 | orchestrator | + set -e
2026-03-25 04:09:13.584936 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-03-25 04:09:13.584945 | orchestrator | ++ export INTERACTIVE=false
2026-03-25 04:09:13.584953 | orchestrator | ++ INTERACTIVE=false
2026-03-25 04:09:13.584960 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-03-25 04:09:13.584967 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-03-25 04:09:13.584973 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2026-03-25 04:09:13.585022 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2026-03-25 04:09:13.590557 | orchestrator |
2026-03-25 04:09:13.590643 | orchestrator | # Ceph status
2026-03-25 04:09:13.590651 | orchestrator |
2026-03-25 04:09:13.590678 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-03-25 04:09:13.590686 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-03-25 04:09:13.590693 | orchestrator | + echo
2026-03-25 04:09:13.590700 | orchestrator | + echo '# Ceph status'
2026-03-25 04:09:13.590706 | orchestrator | + echo
2026-03-25 04:09:13.590712 | orchestrator | + ceph -s
2026-03-25 04:09:14.259451 | orchestrator | cluster:
2026-03-25 04:09:14.259523 | orchestrator | id: 11111111-1111-1111-1111-111111111111
2026-03-25 04:09:14.259531 | orchestrator | health: HEALTH_OK
2026-03-25 04:09:14.259536 | orchestrator |
2026-03-25 04:09:14.259541 | orchestrator | services:
2026-03-25 04:09:14.259545 | orchestrator | mon: 3 daemons, quorum testbed-node-0,testbed-node-1,testbed-node-2 (age 70m)
2026-03-25 04:09:14.259551 | orchestrator | mgr: testbed-node-0(active, since 57m), standbys: testbed-node-2, testbed-node-1
2026-03-25 04:09:14.259557 | orchestrator | mds: 1/1 daemons up, 2 standby
2026-03-25 04:09:14.259561 | orchestrator | osd: 6 osds: 6 up (since 66m), 6 in (since 67m)
2026-03-25 04:09:14.259566 | orchestrator | rgw: 3 daemons active (3 hosts, 1 zones)
2026-03-25 04:09:14.259569 | orchestrator |
2026-03-25 04:09:14.259573 | orchestrator | data:
2026-03-25 04:09:14.259578 | orchestrator | volumes: 1/1 healthy
2026-03-25 04:09:14.259582 | orchestrator | pools: 14 pools, 401 pgs
2026-03-25 04:09:14.259586 | orchestrator | objects: 556 objects, 2.2 GiB
2026-03-25 04:09:14.259590 | orchestrator | usage: 7.1 GiB used, 113 GiB / 120 GiB avail
2026-03-25 04:09:14.259593 | orchestrator | pgs: 401 active+clean
2026-03-25 04:09:14.259597 | orchestrator |
2026-03-25 04:09:14.316121 | orchestrator |
2026-03-25 04:09:14.316192 | orchestrator | # Ceph versions
2026-03-25 04:09:14.316198 | orchestrator |
2026-03-25 04:09:14.316202 | orchestrator | + echo
2026-03-25 04:09:14.316207 | orchestrator | + echo '# Ceph versions'
2026-03-25 04:09:14.316211 | orchestrator | + echo
2026-03-25 04:09:14.316215 | orchestrator | + ceph versions
2026-03-25 04:09:14.961283 | orchestrator | {
2026-03-25 04:09:14.961553 | orchestrator |     "mon": {
2026-03-25 04:09:14.961594 | orchestrator |         "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2026-03-25 04:09:14.962403 | orchestrator |     },
2026-03-25 04:09:14.962462 | orchestrator |     "mgr": {
2026-03-25 04:09:14.962471 | orchestrator |         "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2026-03-25 04:09:14.962477 | orchestrator |     },
2026-03-25 04:09:14.962482 | orchestrator |     "osd": {
2026-03-25 04:09:14.962487 | orchestrator |         "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 6
2026-03-25 04:09:14.962491 | orchestrator |     },
2026-03-25 04:09:14.962496 | orchestrator |     "mds": {
2026-03-25 04:09:14.962502 | orchestrator |         "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2026-03-25 04:09:14.962506 | orchestrator |     },
2026-03-25 04:09:14.962511 | orchestrator |     "rgw": {
2026-03-25 04:09:14.962516 | orchestrator |         "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2026-03-25 04:09:14.962520 | orchestrator |     },
2026-03-25 04:09:14.962525 | orchestrator |     "overall": {
2026-03-25 04:09:14.962530 | orchestrator |         "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 18
2026-03-25 04:09:14.962535 | orchestrator |     }
2026-03-25 04:09:14.962540 | orchestrator | }
2026-03-25 04:09:15.020979 | orchestrator |
2026-03-25 04:09:15.021079 | orchestrator | # Ceph OSD tree
2026-03-25 04:09:15.021092 | orchestrator |
2026-03-25 04:09:15.021103 | orchestrator | + echo
2026-03-25 04:09:15.021115 | orchestrator | + echo '# Ceph OSD tree'
2026-03-25 04:09:15.021126 | orchestrator | + echo
2026-03-25 04:09:15.021136 | orchestrator | + ceph osd df tree
2026-03-25 04:09:15.677641 | orchestrator | ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE NAME
2026-03-25 04:09:15.677731 | orchestrator | -1 0.11691 - 120 GiB 7.1 GiB 6.7 GiB 6 KiB 406 MiB 113 GiB 5.90 1.00 - root default
2026-03-25 04:09:15.677740 | orchestrator | -3 0.03897 - 40 GiB 2.3 GiB 2.2 GiB 2 KiB 123 MiB 38 GiB 5.87 0.99 - host testbed-node-3
2026-03-25 04:09:15.677747 | orchestrator | 0 hdd 0.01949 1.00000 20 GiB 1.4 GiB 1.3 GiB 1 KiB 62 MiB 19 GiB 6.77 1.15 201 up osd.0
2026-03-25 04:09:15.677753 | orchestrator | 5 hdd 0.01949 1.00000 20 GiB 1016 MiB 955 MiB 1 KiB 62 MiB 19 GiB 4.97 0.84 189 up osd.5
2026-03-25 04:09:15.677760 | orchestrator | -5 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-4
2026-03-25 04:09:15.677789 | orchestrator | 1 hdd 0.01949 1.00000 20 GiB 1.1 GiB 1.1 GiB 1 KiB 66 MiB 19 GiB 5.69 0.97 190 up osd.1
2026-03-25 04:09:15.677795 | orchestrator | 4 hdd 0.01949 1.00000 20 GiB 1.2 GiB 1.2 GiB 1 KiB 78 MiB 19 GiB 6.14 1.04 202 up osd.4
2026-03-25 04:09:15.677801 | orchestrator | -7 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 139 MiB 38 GiB 5.91 1.00 - host testbed-node-5
2026-03-25 04:09:15.677808 | orchestrator | 2 hdd 0.01949 1.00000 20 GiB 1.5 GiB 1.4 GiB 1 KiB 74 MiB 19 GiB 7.26 1.23 188 up osd.2
2026-03-25 04:09:15.677814 | orchestrator | 3 hdd 0.01949 1.00000 20 GiB 932 MiB 867 MiB 1 KiB 66 MiB 19 GiB 4.56 0.77 200 up osd.3
2026-03-25 04:09:15.677820 | orchestrator | TOTAL 120 GiB 7.1 GiB 6.7 GiB 9.3 KiB 406 MiB 113 GiB 5.90
2026-03-25 04:09:15.677826 | orchestrator | MIN/MAX VAR: 0.77/1.23 STDDEV: 0.95
2026-03-25 04:09:15.725973 | orchestrator |
2026-03-25 04:09:15.726095 | orchestrator | # Ceph monitor status
2026-03-25 04:09:15.726104 | orchestrator |
2026-03-25 04:09:15.726109 | orchestrator | + echo
2026-03-25 04:09:15.726113 | orchestrator | + echo '# Ceph monitor status'
2026-03-25 04:09:15.726117 | orchestrator | + echo
2026-03-25 04:09:15.726122 | orchestrator | + ceph mon stat
2026-03-25
04:09:16.349706 | orchestrator | e1: 3 mons at {testbed-node-0=[v2:192.168.16.8:3300/0,v1:192.168.16.8:6789/0],testbed-node-1=[v2:192.168.16.11:3300/0,v1:192.168.16.11:6789/0],testbed-node-2=[v2:192.168.16.12:3300/0,v1:192.168.16.12:6789/0]} removed_ranks: {} disallowed_leaders: {}, election epoch 8, leader 0 testbed-node-0, quorum 0,1,2 testbed-node-0,testbed-node-1,testbed-node-2 2026-03-25 04:09:16.394600 | orchestrator | 2026-03-25 04:09:16.394703 | orchestrator | # Ceph quorum status 2026-03-25 04:09:16.394720 | orchestrator | 2026-03-25 04:09:16.394731 | orchestrator | + echo 2026-03-25 04:09:16.394741 | orchestrator | + echo '# Ceph quorum status' 2026-03-25 04:09:16.394752 | orchestrator | + echo 2026-03-25 04:09:16.395261 | orchestrator | + ceph quorum_status 2026-03-25 04:09:16.395318 | orchestrator | + jq 2026-03-25 04:09:17.078692 | orchestrator | { 2026-03-25 04:09:17.078767 | orchestrator | "election_epoch": 8, 2026-03-25 04:09:17.078775 | orchestrator | "quorum": [ 2026-03-25 04:09:17.078779 | orchestrator | 0, 2026-03-25 04:09:17.078783 | orchestrator | 1, 2026-03-25 04:09:17.078787 | orchestrator | 2 2026-03-25 04:09:17.078791 | orchestrator | ], 2026-03-25 04:09:17.078795 | orchestrator | "quorum_names": [ 2026-03-25 04:09:17.078799 | orchestrator | "testbed-node-0", 2026-03-25 04:09:17.078803 | orchestrator | "testbed-node-1", 2026-03-25 04:09:17.078807 | orchestrator | "testbed-node-2" 2026-03-25 04:09:17.078811 | orchestrator | ], 2026-03-25 04:09:17.078815 | orchestrator | "quorum_leader_name": "testbed-node-0", 2026-03-25 04:09:17.078820 | orchestrator | "quorum_age": 4220, 2026-03-25 04:09:17.078824 | orchestrator | "features": { 2026-03-25 04:09:17.078828 | orchestrator | "quorum_con": "4540138322906710015", 2026-03-25 04:09:17.078832 | orchestrator | "quorum_mon": [ 2026-03-25 04:09:17.078837 | orchestrator | "kraken", 2026-03-25 04:09:17.078843 | orchestrator | "luminous", 2026-03-25 04:09:17.078850 | orchestrator | "mimic", 2026-03-25 
04:09:17.078856 | orchestrator | "osdmap-prune", 2026-03-25 04:09:17.078862 | orchestrator | "nautilus", 2026-03-25 04:09:17.078867 | orchestrator | "octopus", 2026-03-25 04:09:17.078873 | orchestrator | "pacific", 2026-03-25 04:09:17.078879 | orchestrator | "elector-pinging", 2026-03-25 04:09:17.078886 | orchestrator | "quincy", 2026-03-25 04:09:17.078892 | orchestrator | "reef" 2026-03-25 04:09:17.078898 | orchestrator | ] 2026-03-25 04:09:17.078904 | orchestrator | }, 2026-03-25 04:09:17.078911 | orchestrator | "monmap": { 2026-03-25 04:09:17.078917 | orchestrator | "epoch": 1, 2026-03-25 04:09:17.078923 | orchestrator | "fsid": "11111111-1111-1111-1111-111111111111", 2026-03-25 04:09:17.078930 | orchestrator | "modified": "2026-03-25T02:58:38.715089Z", 2026-03-25 04:09:17.078936 | orchestrator | "created": "2026-03-25T02:58:38.715089Z", 2026-03-25 04:09:17.078943 | orchestrator | "min_mon_release": 18, 2026-03-25 04:09:17.078948 | orchestrator | "min_mon_release_name": "reef", 2026-03-25 04:09:17.078954 | orchestrator | "election_strategy": 1, 2026-03-25 04:09:17.078960 | orchestrator | "disallowed_leaders: ": "", 2026-03-25 04:09:17.078966 | orchestrator | "stretch_mode": false, 2026-03-25 04:09:17.078972 | orchestrator | "tiebreaker_mon": "", 2026-03-25 04:09:17.079013 | orchestrator | "removed_ranks: ": "", 2026-03-25 04:09:17.079027 | orchestrator | "features": { 2026-03-25 04:09:17.079034 | orchestrator | "persistent": [ 2026-03-25 04:09:17.079040 | orchestrator | "kraken", 2026-03-25 04:09:17.079046 | orchestrator | "luminous", 2026-03-25 04:09:17.079054 | orchestrator | "mimic", 2026-03-25 04:09:17.079058 | orchestrator | "osdmap-prune", 2026-03-25 04:09:17.079063 | orchestrator | "nautilus", 2026-03-25 04:09:17.079069 | orchestrator | "octopus", 2026-03-25 04:09:17.079075 | orchestrator | "pacific", 2026-03-25 04:09:17.079081 | orchestrator | "elector-pinging", 2026-03-25 04:09:17.079086 | orchestrator | "quincy", 2026-03-25 04:09:17.079092 | 
orchestrator | "reef" 2026-03-25 04:09:17.079098 | orchestrator | ], 2026-03-25 04:09:17.079104 | orchestrator | "optional": [] 2026-03-25 04:09:17.079110 | orchestrator | }, 2026-03-25 04:09:17.079116 | orchestrator | "mons": [ 2026-03-25 04:09:17.079122 | orchestrator | { 2026-03-25 04:09:17.079127 | orchestrator | "rank": 0, 2026-03-25 04:09:17.079181 | orchestrator | "name": "testbed-node-0", 2026-03-25 04:09:17.079187 | orchestrator | "public_addrs": { 2026-03-25 04:09:17.079193 | orchestrator | "addrvec": [ 2026-03-25 04:09:17.079202 | orchestrator | { 2026-03-25 04:09:17.079219 | orchestrator | "type": "v2", 2026-03-25 04:09:17.079233 | orchestrator | "addr": "192.168.16.8:3300", 2026-03-25 04:09:17.079240 | orchestrator | "nonce": 0 2026-03-25 04:09:17.079246 | orchestrator | }, 2026-03-25 04:09:17.079253 | orchestrator | { 2026-03-25 04:09:17.079259 | orchestrator | "type": "v1", 2026-03-25 04:09:17.079265 | orchestrator | "addr": "192.168.16.8:6789", 2026-03-25 04:09:17.079272 | orchestrator | "nonce": 0 2026-03-25 04:09:17.079278 | orchestrator | } 2026-03-25 04:09:17.079284 | orchestrator | ] 2026-03-25 04:09:17.079290 | orchestrator | }, 2026-03-25 04:09:17.079326 | orchestrator | "addr": "192.168.16.8:6789/0", 2026-03-25 04:09:17.079336 | orchestrator | "public_addr": "192.168.16.8:6789/0", 2026-03-25 04:09:17.079358 | orchestrator | "priority": 0, 2026-03-25 04:09:17.079365 | orchestrator | "weight": 0, 2026-03-25 04:09:17.079370 | orchestrator | "crush_location": "{}" 2026-03-25 04:09:17.079376 | orchestrator | }, 2026-03-25 04:09:17.079381 | orchestrator | { 2026-03-25 04:09:17.079386 | orchestrator | "rank": 1, 2026-03-25 04:09:17.079392 | orchestrator | "name": "testbed-node-1", 2026-03-25 04:09:17.079397 | orchestrator | "public_addrs": { 2026-03-25 04:09:17.079403 | orchestrator | "addrvec": [ 2026-03-25 04:09:17.079409 | orchestrator | { 2026-03-25 04:09:17.079416 | orchestrator | "type": "v2", 2026-03-25 04:09:17.079437 | orchestrator | 
"addr": "192.168.16.11:3300", 2026-03-25 04:09:17.079442 | orchestrator | "nonce": 0 2026-03-25 04:09:17.079447 | orchestrator | }, 2026-03-25 04:09:17.079451 | orchestrator | { 2026-03-25 04:09:17.079456 | orchestrator | "type": "v1", 2026-03-25 04:09:17.079460 | orchestrator | "addr": "192.168.16.11:6789", 2026-03-25 04:09:17.079465 | orchestrator | "nonce": 0 2026-03-25 04:09:17.079469 | orchestrator | } 2026-03-25 04:09:17.079474 | orchestrator | ] 2026-03-25 04:09:17.079478 | orchestrator | }, 2026-03-25 04:09:17.079482 | orchestrator | "addr": "192.168.16.11:6789/0", 2026-03-25 04:09:17.079487 | orchestrator | "public_addr": "192.168.16.11:6789/0", 2026-03-25 04:09:17.079492 | orchestrator | "priority": 0, 2026-03-25 04:09:17.079496 | orchestrator | "weight": 0, 2026-03-25 04:09:17.079501 | orchestrator | "crush_location": "{}" 2026-03-25 04:09:17.079505 | orchestrator | }, 2026-03-25 04:09:17.079509 | orchestrator | { 2026-03-25 04:09:17.079514 | orchestrator | "rank": 2, 2026-03-25 04:09:17.079518 | orchestrator | "name": "testbed-node-2", 2026-03-25 04:09:17.079522 | orchestrator | "public_addrs": { 2026-03-25 04:09:17.079527 | orchestrator | "addrvec": [ 2026-03-25 04:09:17.079531 | orchestrator | { 2026-03-25 04:09:17.079535 | orchestrator | "type": "v2", 2026-03-25 04:09:17.079540 | orchestrator | "addr": "192.168.16.12:3300", 2026-03-25 04:09:17.079544 | orchestrator | "nonce": 0 2026-03-25 04:09:17.079549 | orchestrator | }, 2026-03-25 04:09:17.079553 | orchestrator | { 2026-03-25 04:09:17.079557 | orchestrator | "type": "v1", 2026-03-25 04:09:17.079561 | orchestrator | "addr": "192.168.16.12:6789", 2026-03-25 04:09:17.079566 | orchestrator | "nonce": 0 2026-03-25 04:09:17.079571 | orchestrator | } 2026-03-25 04:09:17.079575 | orchestrator | ] 2026-03-25 04:09:17.079625 | orchestrator | }, 2026-03-25 04:09:17.079629 | orchestrator | "addr": "192.168.16.12:6789/0", 2026-03-25 04:09:17.079633 | orchestrator | "public_addr": "192.168.16.12:6789/0", 
2026-03-25 04:09:17.079637 | orchestrator | "priority": 0, 2026-03-25 04:09:17.079640 | orchestrator | "weight": 0, 2026-03-25 04:09:17.079644 | orchestrator | "crush_location": "{}" 2026-03-25 04:09:17.079650 | orchestrator | } 2026-03-25 04:09:17.079657 | orchestrator | ] 2026-03-25 04:09:17.079663 | orchestrator | } 2026-03-25 04:09:17.079669 | orchestrator | } 2026-03-25 04:09:17.079784 | orchestrator | 2026-03-25 04:09:17.079796 | orchestrator | # Ceph free space status 2026-03-25 04:09:17.079801 | orchestrator | 2026-03-25 04:09:17.079813 | orchestrator | + echo 2026-03-25 04:09:17.079819 | orchestrator | + echo '# Ceph free space status' 2026-03-25 04:09:17.079825 | orchestrator | + echo 2026-03-25 04:09:17.079832 | orchestrator | + ceph df 2026-03-25 04:09:17.735552 | orchestrator | --- RAW STORAGE --- 2026-03-25 04:09:17.735621 | orchestrator | CLASS SIZE AVAIL USED RAW USED %RAW USED 2026-03-25 04:09:17.735637 | orchestrator | hdd 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.90 2026-03-25 04:09:17.735643 | orchestrator | TOTAL 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.90 2026-03-25 04:09:17.735647 | orchestrator | 2026-03-25 04:09:17.735651 | orchestrator | --- POOLS --- 2026-03-25 04:09:17.735656 | orchestrator | POOL ID PGS STORED OBJECTS USED %USED MAX AVAIL 2026-03-25 04:09:17.735661 | orchestrator | .mgr 1 1 577 KiB 2 1.1 MiB 0 53 GiB 2026-03-25 04:09:17.735665 | orchestrator | cephfs_data 2 32 0 B 0 0 B 0 35 GiB 2026-03-25 04:09:17.735669 | orchestrator | cephfs_metadata 3 16 4.4 KiB 22 96 KiB 0 35 GiB 2026-03-25 04:09:17.735673 | orchestrator | default.rgw.buckets.data 4 32 0 B 0 0 B 0 35 GiB 2026-03-25 04:09:17.735677 | orchestrator | default.rgw.buckets.index 5 32 0 B 0 0 B 0 35 GiB 2026-03-25 04:09:17.735681 | orchestrator | default.rgw.control 6 32 0 B 8 0 B 0 35 GiB 2026-03-25 04:09:17.735685 | orchestrator | default.rgw.log 7 32 3.6 KiB 209 408 KiB 0 35 GiB 2026-03-25 04:09:17.735689 | orchestrator | default.rgw.meta 8 32 0 B 0 0 B 0 35 GiB 2026-03-25 
04:09:17.735692 | orchestrator | .rgw.root 9 32 3.9 KiB 8 64 KiB 0 53 GiB 2026-03-25 04:09:17.735696 | orchestrator | backups 10 32 19 B 2 12 KiB 0 35 GiB 2026-03-25 04:09:17.735700 | orchestrator | volumes 11 32 19 B 2 12 KiB 0 35 GiB 2026-03-25 04:09:17.735704 | orchestrator | images 12 32 2.2 GiB 299 6.7 GiB 5.95 35 GiB 2026-03-25 04:09:17.735707 | orchestrator | metrics 13 32 19 B 2 12 KiB 0 35 GiB 2026-03-25 04:09:17.735711 | orchestrator | vms 14 32 19 B 2 12 KiB 0 35 GiB 2026-03-25 04:09:17.784876 | orchestrator | ++ semver 9.5.0 5.0.0 2026-03-25 04:09:17.827001 | orchestrator | + [[ 1 -eq -1 ]] 2026-03-25 04:09:17.827075 | orchestrator | + [[ ! -e /etc/redhat-release ]] 2026-03-25 04:09:17.827084 | orchestrator | + osism apply facts 2026-03-25 04:09:20.270265 | orchestrator | 2026-03-25 04:09:20 | INFO  | Task 4547c0b1-ca31-4eaf-a433-69313711f5b4 (facts) was prepared for execution. 2026-03-25 04:09:20.270466 | orchestrator | 2026-03-25 04:09:20 | INFO  | It takes a moment until task 4547c0b1-ca31-4eaf-a433-69313711f5b4 (facts) has been started and output is visible here. 
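The trace above gates a legacy branch on `semver 9.5.0 5.0.0` printing `1` (manager version newer than the 5.0.0 threshold), which makes the subsequent `[[ 1 -eq -1 ]]` test false. A minimal sketch of such a three-way version comparison, assuming GNU `sort -V` is available; the real OSISM `semver` helper may be implemented differently:

```shell
# Hypothetical stand-in for the `semver` helper seen in the trace:
# prints -1, 0, or 1 when the first version is older than, equal to,
# or newer than the second.
semver() {
  a=$1
  b=$2
  if [ "$a" = "$b" ]; then
    echo 0
  elif [ "$(printf '%s\n%s\n' "$a" "$b" | sort -V | head -n1)" = "$a" ]; then
    echo -1   # $a sorts first under version sort, so it is older
  else
    echo 1
  fi
}

# Mirrors the guard in the check script: only take the branch when the
# manager version is older than 5.0.0.
if [ "$(semver 9.5.0 5.0.0)" -eq -1 ]; then
  echo "manager older than 5.0.0"
fi
```

With `MANAGER_VERSION=9.5.0` the guard is false and nothing is printed, matching the skipped `[[ 1 -eq -1 ]]` branch in the log.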
2026-03-25 04:09:35.412130 | orchestrator |
2026-03-25 04:09:35.412209 | orchestrator | PLAY [Apply role facts] ********************************************************
2026-03-25 04:09:35.412216 | orchestrator |
2026-03-25 04:09:35.412221 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-03-25 04:09:35.412225 | orchestrator | Wednesday 25 March 2026 04:09:25 +0000 (0:00:00.355) 0:00:00.355 *******
2026-03-25 04:09:35.412230 | orchestrator | ok: [testbed-manager]
2026-03-25 04:09:35.412235 | orchestrator | ok: [testbed-node-1]
2026-03-25 04:09:35.412239 | orchestrator | ok: [testbed-node-0]
2026-03-25 04:09:35.412243 | orchestrator | ok: [testbed-node-2]
2026-03-25 04:09:35.412247 | orchestrator | ok: [testbed-node-3]
2026-03-25 04:09:35.412251 | orchestrator | ok: [testbed-node-4]
2026-03-25 04:09:35.412274 | orchestrator | ok: [testbed-node-5]
2026-03-25 04:09:35.412278 | orchestrator |
2026-03-25 04:09:35.412282 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-03-25 04:09:35.412286 | orchestrator | Wednesday 25 March 2026 04:09:26 +0000 (0:00:01.204) 0:00:01.559 *******
2026-03-25 04:09:35.412290 | orchestrator | skipping: [testbed-manager]
2026-03-25 04:09:35.412295 | orchestrator | skipping: [testbed-node-0]
2026-03-25 04:09:35.412299 | orchestrator | skipping: [testbed-node-1]
2026-03-25 04:09:35.412303 | orchestrator | skipping: [testbed-node-2]
2026-03-25 04:09:35.412307 | orchestrator | skipping: [testbed-node-3]
2026-03-25 04:09:35.412311 | orchestrator | skipping: [testbed-node-4]
2026-03-25 04:09:35.412314 | orchestrator | skipping: [testbed-node-5]
2026-03-25 04:09:35.412318 | orchestrator |
2026-03-25 04:09:35.412409 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-03-25 04:09:35.412428 | orchestrator |
2026-03-25 04:09:35.412432 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-03-25 04:09:35.412436 | orchestrator | Wednesday 25 March 2026 04:09:28 +0000 (0:00:01.601) 0:00:03.161 *******
2026-03-25 04:09:35.412440 | orchestrator | ok: [testbed-node-2]
2026-03-25 04:09:35.412444 | orchestrator | ok: [testbed-node-1]
2026-03-25 04:09:35.412447 | orchestrator | ok: [testbed-node-0]
2026-03-25 04:09:35.412451 | orchestrator | ok: [testbed-manager]
2026-03-25 04:09:35.412455 | orchestrator | ok: [testbed-node-3]
2026-03-25 04:09:35.412459 | orchestrator | ok: [testbed-node-4]
2026-03-25 04:09:35.412462 | orchestrator | ok: [testbed-node-5]
2026-03-25 04:09:35.412475 | orchestrator |
2026-03-25 04:09:35.412479 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2026-03-25 04:09:35.412487 | orchestrator |
2026-03-25 04:09:35.412491 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2026-03-25 04:09:35.412495 | orchestrator | Wednesday 25 March 2026 04:09:34 +0000 (0:00:05.689) 0:00:08.850 *******
2026-03-25 04:09:35.412499 | orchestrator | skipping: [testbed-manager]
2026-03-25 04:09:35.412503 | orchestrator | skipping: [testbed-node-0]
2026-03-25 04:09:35.412507 | orchestrator | skipping: [testbed-node-1]
2026-03-25 04:09:35.412510 | orchestrator | skipping: [testbed-node-2]
2026-03-25 04:09:35.412514 | orchestrator | skipping: [testbed-node-3]
2026-03-25 04:09:35.412518 | orchestrator | skipping: [testbed-node-4]
2026-03-25 04:09:35.412521 | orchestrator | skipping: [testbed-node-5]
2026-03-25 04:09:35.412525 | orchestrator |
2026-03-25 04:09:35.412529 | orchestrator | PLAY RECAP *********************************************************************
2026-03-25 04:09:35.412533 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-25 04:09:35.412548 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-25 04:09:35.412552 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-25 04:09:35.412556 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-25 04:09:35.412560 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-25 04:09:35.412564 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-25 04:09:35.412567 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-25 04:09:35.412571 | orchestrator |
2026-03-25 04:09:35.412575 | orchestrator |
2026-03-25 04:09:35.412579 | orchestrator | TASKS RECAP ********************************************************************
2026-03-25 04:09:35.412583 | orchestrator | Wednesday 25 March 2026 04:09:34 +0000 (0:00:00.628) 0:00:09.479 *******
2026-03-25 04:09:35.412593 | orchestrator | ===============================================================================
2026-03-25 04:09:35.412596 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.69s
2026-03-25 04:09:35.412600 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.60s
2026-03-25 04:09:35.412604 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.20s
2026-03-25 04:09:35.412608 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.63s
2026-03-25 04:09:35.813546 | orchestrator | + osism validate ceph-mons
2026-03-25 04:10:10.945943 | orchestrator |
2026-03-25 04:10:10.946107 | orchestrator | PLAY [Ceph validate mons] ******************************************************
2026-03-25 04:10:10.946127 | orchestrator |
2026-03-25 04:10:10.946138 | orchestrator | TASK [Get timestamp for report file] *******************************************
2026-03-25 04:10:10.946150 | orchestrator | Wednesday 25 March 2026 04:09:54 +0000 (0:00:00.515) 0:00:00.515 *******
2026-03-25 04:10:10.946161 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-03-25 04:10:10.946171 | orchestrator |
2026-03-25 04:10:10.946181 | orchestrator | TASK [Create report output directory] ******************************************
2026-03-25 04:10:10.946191 | orchestrator | Wednesday 25 March 2026 04:09:54 +0000 (0:00:00.934) 0:00:01.450 *******
2026-03-25 04:10:10.946204 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-03-25 04:10:10.946221 | orchestrator |
2026-03-25 04:10:10.946245 | orchestrator | TASK [Define report vars] ******************************************************
2026-03-25 04:10:10.946262 | orchestrator | Wednesday 25 March 2026 04:09:56 +0000 (0:00:01.235) 0:00:02.685 *******
2026-03-25 04:10:10.946278 | orchestrator | ok: [testbed-node-0]
2026-03-25 04:10:10.946319 | orchestrator |
2026-03-25 04:10:10.946335 | orchestrator | TASK [Prepare test data for container existance test] **************************
2026-03-25 04:10:10.946351 | orchestrator | Wednesday 25 March 2026 04:09:56 +0000 (0:00:00.137) 0:00:02.823 *******
2026-03-25 04:10:10.946367 | orchestrator | ok: [testbed-node-0]
2026-03-25 04:10:10.946382 | orchestrator | ok: [testbed-node-1]
2026-03-25 04:10:10.946398 | orchestrator | ok: [testbed-node-2]
2026-03-25 04:10:10.946412 | orchestrator |
2026-03-25 04:10:10.946425 | orchestrator | TASK [Get container info] ******************************************************
2026-03-25 04:10:10.946441 | orchestrator | Wednesday 25 March 2026 04:09:56 +0000 (0:00:00.361) 0:00:03.184 *******
2026-03-25 04:10:10.946457 | orchestrator | ok: [testbed-node-2]
2026-03-25 04:10:10.946473 | orchestrator | ok: [testbed-node-1]
2026-03-25 04:10:10.946488 | orchestrator | ok: [testbed-node-0]
2026-03-25 04:10:10.946505 | orchestrator |
2026-03-25 04:10:10.946522 | orchestrator | TASK [Set test result to failed if container is missing] ***********************
2026-03-25 04:10:10.946538 | orchestrator | Wednesday 25 March 2026 04:09:57 +0000 (0:00:01.021) 0:00:04.206 *******
2026-03-25 04:10:10.946555 | orchestrator | skipping: [testbed-node-0]
2026-03-25 04:10:10.946574 | orchestrator | skipping: [testbed-node-1]
2026-03-25 04:10:10.946594 | orchestrator | skipping: [testbed-node-2]
2026-03-25 04:10:10.946613 | orchestrator |
2026-03-25 04:10:10.946630 | orchestrator | TASK [Set test result to passed if container is existing] **********************
2026-03-25 04:10:10.946646 | orchestrator | Wednesday 25 March 2026 04:09:58 +0000 (0:00:00.333) 0:00:04.540 *******
2026-03-25 04:10:10.946661 | orchestrator | ok: [testbed-node-0]
2026-03-25 04:10:10.946677 | orchestrator | ok: [testbed-node-1]
2026-03-25 04:10:10.946693 | orchestrator | ok: [testbed-node-2]
2026-03-25 04:10:10.946708 | orchestrator |
2026-03-25 04:10:10.946724 | orchestrator | TASK [Prepare test data] *******************************************************
2026-03-25 04:10:10.946740 | orchestrator | Wednesday 25 March 2026 04:09:58 +0000 (0:00:00.588) 0:00:05.128 *******
2026-03-25 04:10:10.946756 | orchestrator | ok: [testbed-node-0]
2026-03-25 04:10:10.946772 | orchestrator | ok: [testbed-node-1]
2026-03-25 04:10:10.946787 | orchestrator | ok: [testbed-node-2]
2026-03-25 04:10:10.946803 | orchestrator |
2026-03-25 04:10:10.946819 | orchestrator | TASK [Set test result to failed if ceph-mon is not running] ********************
2026-03-25 04:10:10.946869 | orchestrator | Wednesday 25 March 2026 04:09:58 +0000 (0:00:00.358) 0:00:05.486 *******
2026-03-25 04:10:10.946886 | orchestrator | skipping: [testbed-node-0]
2026-03-25 04:10:10.946901 | orchestrator | skipping: [testbed-node-1]
2026-03-25 04:10:10.946917 | orchestrator | skipping: [testbed-node-2]
2026-03-25 04:10:10.946934 | orchestrator |
2026-03-25 04:10:10.946951 | orchestrator | TASK [Set test result to passed if ceph-mon is running] ************************
2026-03-25 04:10:10.946968 | orchestrator | Wednesday 25 March 2026 04:09:59 +0000 (0:00:00.394) 0:00:05.881 *******
2026-03-25 04:10:10.946985 | orchestrator | ok: [testbed-node-0]
2026-03-25 04:10:10.947001 | orchestrator | ok: [testbed-node-1]
2026-03-25 04:10:10.947022 | orchestrator | ok: [testbed-node-2]
2026-03-25 04:10:10.947044 | orchestrator |
2026-03-25 04:10:10.947064 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-03-25 04:10:10.947084 | orchestrator | Wednesday 25 March 2026 04:09:59 +0000 (0:00:00.552) 0:00:06.434 *******
2026-03-25 04:10:10.947105 | orchestrator | skipping: [testbed-node-0]
2026-03-25 04:10:10.947125 | orchestrator |
2026-03-25 04:10:10.947145 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-03-25 04:10:10.947167 | orchestrator | Wednesday 25 March 2026 04:10:00 +0000 (0:00:00.303) 0:00:06.737 *******
2026-03-25 04:10:10.947182 | orchestrator | skipping: [testbed-node-0]
2026-03-25 04:10:10.947198 | orchestrator |
2026-03-25 04:10:10.947213 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-03-25 04:10:10.947230 | orchestrator | Wednesday 25 March 2026 04:10:00 +0000 (0:00:00.296) 0:00:07.034 *******
2026-03-25 04:10:10.947245 | orchestrator | skipping: [testbed-node-0]
2026-03-25 04:10:10.947261 | orchestrator |
2026-03-25 04:10:10.947277 | orchestrator | TASK [Flush handlers] **********************************************************
2026-03-25 04:10:10.947386 | orchestrator | Wednesday 25 March 2026 04:10:00 +0000 (0:00:00.262) 0:00:07.297 *******
2026-03-25 04:10:10.947409 | orchestrator |
2026-03-25 04:10:10.947425 | orchestrator | TASK [Flush handlers] **********************************************************
2026-03-25 04:10:10.947440 | orchestrator | Wednesday 25 March 2026 04:10:00 +0000 (0:00:00.074) 0:00:07.371 *******
2026-03-25 04:10:10.947456 | orchestrator |
2026-03-25 04:10:10.947472 | orchestrator | TASK [Flush handlers] **********************************************************
2026-03-25 04:10:10.947486 | orchestrator | Wednesday 25 March 2026 04:10:00 +0000 (0:00:00.081) 0:00:07.452 *******
2026-03-25 04:10:10.947500 | orchestrator |
2026-03-25 04:10:10.947515 | orchestrator | TASK [Print report file information] *******************************************
2026-03-25 04:10:10.947530 | orchestrator | Wednesday 25 March 2026 04:10:01 +0000 (0:00:00.089) 0:00:07.542 *******
2026-03-25 04:10:10.947547 | orchestrator | skipping: [testbed-node-0]
2026-03-25 04:10:10.947563 | orchestrator |
2026-03-25 04:10:10.947578 | orchestrator | TASK [Fail due to missing containers] ******************************************
2026-03-25 04:10:10.947594 | orchestrator | Wednesday 25 March 2026 04:10:01 +0000 (0:00:00.276) 0:00:07.818 *******
2026-03-25 04:10:10.947610 | orchestrator | skipping: [testbed-node-0]
2026-03-25 04:10:10.947626 | orchestrator |
2026-03-25 04:10:10.947670 | orchestrator | TASK [Prepare quorum test vars] ************************************************
2026-03-25 04:10:10.947687 | orchestrator | Wednesday 25 March 2026 04:10:01 +0000 (0:00:00.254) 0:00:08.073 *******
2026-03-25 04:10:10.947704 | orchestrator | ok: [testbed-node-0]
2026-03-25 04:10:10.947721 | orchestrator |
2026-03-25 04:10:10.947738 | orchestrator | TASK [Get monmap info from one mon container] **********************************
2026-03-25 04:10:10.947756 | orchestrator | Wednesday 25 March 2026 04:10:01 +0000 (0:00:00.136) 0:00:08.209 *******
2026-03-25 04:10:10.947772 | orchestrator | changed: [testbed-node-0]
2026-03-25 04:10:10.947792 | orchestrator |
2026-03-25 04:10:10.947807 | orchestrator | TASK [Set quorum test data] ****************************************************
2026-03-25 04:10:10.947824 | orchestrator | Wednesday 25 March 2026 04:10:03 +0000 (0:00:01.553) 0:00:09.763 *******
2026-03-25 04:10:10.947840 | orchestrator | ok: [testbed-node-0]
2026-03-25 04:10:10.947856 | orchestrator |
2026-03-25 04:10:10.947872 | orchestrator | TASK [Fail quorum test if not all monitors are in quorum] **********************
2026-03-25 04:10:10.947908 | orchestrator | Wednesday 25 March 2026 04:10:03 +0000 (0:00:00.590) 0:00:10.353 *******
2026-03-25 04:10:10.947946 | orchestrator | skipping: [testbed-node-0]
2026-03-25 04:10:10.948057 | orchestrator |
2026-03-25 04:10:10.948075 | orchestrator | TASK [Pass quorum test if all monitors are in quorum] **************************
2026-03-25 04:10:10.948091 | orchestrator | Wednesday 25 March 2026 04:10:04 +0000 (0:00:00.156) 0:00:10.509 *******
2026-03-25 04:10:10.948107 | orchestrator | ok: [testbed-node-0]
2026-03-25 04:10:10.948122 | orchestrator |
2026-03-25 04:10:10.948139 | orchestrator | TASK [Set fsid test vars] ******************************************************
2026-03-25 04:10:10.948154 | orchestrator | Wednesday 25 March 2026 04:10:04 +0000 (0:00:00.355) 0:00:10.865 *******
2026-03-25 04:10:10.948170 | orchestrator | ok: [testbed-node-0]
2026-03-25 04:10:10.948185 | orchestrator |
2026-03-25 04:10:10.948201 | orchestrator | TASK [Fail Cluster FSID test if FSID does not match configuration] *************
2026-03-25 04:10:10.948217 | orchestrator | Wednesday 25 March 2026 04:10:04 +0000 (0:00:00.345) 0:00:11.211 *******
2026-03-25 04:10:10.948232 | orchestrator | skipping: [testbed-node-0]
2026-03-25 04:10:10.948248 | orchestrator |
2026-03-25 04:10:10.948263 | orchestrator | TASK [Pass Cluster FSID test if it matches configuration] **********************
2026-03-25 04:10:10.948278 | orchestrator | Wednesday 25 March 2026 04:10:04 +0000 (0:00:00.134) 0:00:11.345 *******
2026-03-25 04:10:10.948356 | orchestrator | ok: [testbed-node-0]
2026-03-25 04:10:10.948375 | orchestrator |
2026-03-25 04:10:10.948391 | orchestrator | TASK [Prepare status test vars] ************************************************
2026-03-25 04:10:10.948407 | orchestrator | Wednesday 25 March 2026 04:10:04 +0000 (0:00:00.142) 0:00:11.488 *******
2026-03-25 04:10:10.948423 | orchestrator | ok: [testbed-node-0]
2026-03-25 04:10:10.948440 | orchestrator |
2026-03-25 04:10:10.948455 | orchestrator | TASK [Gather status data] ******************************************************
2026-03-25 04:10:10.948472 | orchestrator | Wednesday 25 March 2026 04:10:05 +0000 (0:00:00.139) 0:00:11.627 *******
2026-03-25 04:10:10.948488 | orchestrator | changed: [testbed-node-0]
2026-03-25 04:10:10.948505 | orchestrator |
2026-03-25 04:10:10.948521 | orchestrator | TASK [Set health test data] ****************************************************
2026-03-25 04:10:10.948537 | orchestrator | Wednesday 25 March 2026 04:10:06 +0000 (0:00:01.204) 0:00:12.831 *******
2026-03-25 04:10:10.948552 | orchestrator | ok: [testbed-node-0]
2026-03-25 04:10:10.948568 | orchestrator |
2026-03-25 04:10:10.948585 | orchestrator | TASK [Fail cluster-health if health is not acceptable] *************************
2026-03-25 04:10:10.948601 | orchestrator | Wednesday 25 March 2026 04:10:06 +0000 (0:00:00.338) 0:00:13.170 *******
2026-03-25 04:10:10.948618 | orchestrator | skipping: [testbed-node-0]
2026-03-25 04:10:10.948634 | orchestrator |
2026-03-25 04:10:10.948650 | orchestrator | TASK [Pass cluster-health if health is acceptable] *****************************
2026-03-25 04:10:10.948665 | orchestrator | Wednesday 25 March 2026 04:10:06 +0000 (0:00:00.133) 0:00:13.303 *******
2026-03-25 04:10:10.948678 | orchestrator | ok: [testbed-node-0]
2026-03-25 04:10:10.948693 | orchestrator |
2026-03-25 04:10:10.948709 | orchestrator | TASK [Fail cluster-health if health is not acceptable (strict)] ****************
2026-03-25 04:10:10.948726 | orchestrator | Wednesday 25 March 2026 04:10:06 +0000 (0:00:00.144) 0:00:13.448 *******
2026-03-25 04:10:10.948755 | orchestrator | skipping: [testbed-node-0]
2026-03-25 04:10:10.948773 | orchestrator |
2026-03-25 04:10:10.948789 | orchestrator | TASK [Pass cluster-health if status is OK (strict)] ****************************
2026-03-25 04:10:10.948804 | orchestrator | Wednesday 25 March 2026 04:10:07 +0000 (0:00:00.189) 0:00:13.637 *******
2026-03-25 04:10:10.948819 | orchestrator | skipping: [testbed-node-0]
2026-03-25 04:10:10.948835 | orchestrator |
2026-03-25 04:10:10.948915 | orchestrator | TASK [Set validation result to passed if no test failed] ***********************
2026-03-25 04:10:10.948935 | orchestrator | Wednesday 25 March 2026 04:10:07 +0000 (0:00:00.359) 0:00:13.996 *******
2026-03-25 04:10:10.948952 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-03-25 04:10:10.948971 | orchestrator |
2026-03-25 04:10:10.949000 | orchestrator | TASK [Set validation result to failed if a test failed] ************************
2026-03-25 04:10:10.949010 | orchestrator | Wednesday 25 March 2026 04:10:07 +0000 (0:00:00.274) 0:00:14.270 *******
2026-03-25 04:10:10.949024 | orchestrator | skipping: [testbed-node-0]
2026-03-25 04:10:10.949041 | orchestrator |
2026-03-25 04:10:10.949056 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-03-25 04:10:10.949071 | orchestrator | Wednesday 25 March 2026 04:10:08 +0000 (0:00:00.285) 0:00:14.556 *******
2026-03-25 04:10:10.949085 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-03-25 04:10:10.949099 | orchestrator |
2026-03-25 04:10:10.949115 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-03-25 04:10:10.949131 | orchestrator | Wednesday 25 March 2026 04:10:10 +0000 (0:00:02.041) 0:00:16.597 *******
2026-03-25 04:10:10.949145 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-03-25 04:10:10.949161 | orchestrator |
2026-03-25 04:10:10.949175 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-03-25 04:10:10.949190 | orchestrator | Wednesday 25 March 2026 04:10:10 +0000 (0:00:00.327) 0:00:16.925 *******
2026-03-25 04:10:10.949205 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-03-25 04:10:10.949220 | orchestrator |
2026-03-25 04:10:10.949259 | orchestrator | TASK [Flush handlers] **********************************************************
2026-03-25 04:10:14.003063 | orchestrator | Wednesday 25 March 2026 04:10:10 +0000 (0:00:00.264) 0:00:17.190 *******
2026-03-25 04:10:14.003140 | orchestrator |
2026-03-25 04:10:14.003146 | orchestrator | TASK [Flush handlers] **********************************************************
2026-03-25 04:10:14.003151 | orchestrator | Wednesday 25 March 2026 04:10:10 +0000 (0:00:00.079) 0:00:17.269 *******
2026-03-25 04:10:14.003155 | orchestrator |
2026-03-25 04:10:14.003160 | orchestrator | TASK [Flush handlers] **********************************************************
2026-03-25 04:10:14.003164 | orchestrator | Wednesday 25 March 2026 04:10:10 +0000 (0:00:00.073) 0:00:17.343 *******
2026-03-25 04:10:14.003168 | orchestrator |
2026-03-25 04:10:14.003172 | orchestrator | RUNNING HANDLER [Write report file] ********************************************
2026-03-25 04:10:14.003177 | orchestrator | Wednesday 25 March 2026 04:10:10 +0000 (0:00:00.081) 0:00:17.424 *******
2026-03-25 04:10:14.003181 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-03-25 04:10:14.003185 | orchestrator |
2026-03-25 04:10:14.003189 | orchestrator | TASK [Print report file information] *******************************************
2026-03-25 04:10:14.003192 | orchestrator | Wednesday 25 March 2026 04:10:12 +0000 (0:00:01.643) 0:00:19.068 *******
2026-03-25 04:10:14.003196 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => {
2026-03-25 04:10:14.003200 | orchestrator |     "msg": [
2026-03-25 04:10:14.003205 | orchestrator |  "Validator run completed.", 2026-03-25 04:10:14.003210 | orchestrator |  "You can find the report file here:", 2026-03-25 04:10:14.003214 | orchestrator |  "/opt/reports/validator/ceph-mons-validator-2026-03-25T04:09:54+00:00-report.json", 2026-03-25 04:10:14.003218 | orchestrator |  "on the following host:", 2026-03-25 04:10:14.003222 | orchestrator |  "testbed-manager" 2026-03-25 04:10:14.003227 | orchestrator |  ] 2026-03-25 04:10:14.003230 | orchestrator | } 2026-03-25 04:10:14.003235 | orchestrator | 2026-03-25 04:10:14.003238 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-25 04:10:14.003244 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2026-03-25 04:10:14.003250 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-25 04:10:14.003254 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-25 04:10:14.003258 | orchestrator | 2026-03-25 04:10:14.003261 | orchestrator | 2026-03-25 04:10:14.003285 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-25 04:10:14.003394 | orchestrator | Wednesday 25 March 2026 04:10:13 +0000 (0:00:01.004) 0:00:20.072 ******* 2026-03-25 04:10:14.003400 | orchestrator | =============================================================================== 2026-03-25 04:10:14.003407 | orchestrator | Aggregate test results step one ----------------------------------------- 2.04s 2026-03-25 04:10:14.003413 | orchestrator | Write report file ------------------------------------------------------- 1.64s 2026-03-25 04:10:14.003419 | orchestrator | Get monmap info from one mon container ---------------------------------- 1.55s 2026-03-25 04:10:14.003425 | orchestrator | Create report output directory 
------------------------------------------ 1.24s 2026-03-25 04:10:14.003430 | orchestrator | Gather status data ------------------------------------------------------ 1.20s 2026-03-25 04:10:14.003437 | orchestrator | Get container info ------------------------------------------------------ 1.02s 2026-03-25 04:10:14.003444 | orchestrator | Print report file information ------------------------------------------- 1.00s 2026-03-25 04:10:14.003450 | orchestrator | Get timestamp for report file ------------------------------------------- 0.93s 2026-03-25 04:10:14.003466 | orchestrator | Set quorum test data ---------------------------------------------------- 0.59s 2026-03-25 04:10:14.003470 | orchestrator | Set test result to passed if container is existing ---------------------- 0.59s 2026-03-25 04:10:14.003474 | orchestrator | Set test result to passed if ceph-mon is running ------------------------ 0.55s 2026-03-25 04:10:14.003477 | orchestrator | Set test result to failed if ceph-mon is not running -------------------- 0.39s 2026-03-25 04:10:14.003481 | orchestrator | Prepare test data for container existance test -------------------------- 0.36s 2026-03-25 04:10:14.003485 | orchestrator | Pass cluster-health if status is OK (strict) ---------------------------- 0.36s 2026-03-25 04:10:14.003489 | orchestrator | Prepare test data ------------------------------------------------------- 0.36s 2026-03-25 04:10:14.003492 | orchestrator | Pass quorum test if all monitors are in quorum -------------------------- 0.36s 2026-03-25 04:10:14.003496 | orchestrator | Set fsid test vars ------------------------------------------------------ 0.35s 2026-03-25 04:10:14.003500 | orchestrator | Set health test data ---------------------------------------------------- 0.34s 2026-03-25 04:10:14.003503 | orchestrator | Set test result to failed if container is missing ----------------------- 0.33s 2026-03-25 04:10:14.003507 | orchestrator | Aggregate test results step two 
----------------------------------------- 0.33s 2026-03-25 04:10:14.469512 | orchestrator | + osism validate ceph-mgrs 2026-03-25 04:10:48.957316 | orchestrator | 2026-03-25 04:10:48.957421 | orchestrator | PLAY [Ceph validate mgrs] ****************************************************** 2026-03-25 04:10:48.957433 | orchestrator | 2026-03-25 04:10:48.957441 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2026-03-25 04:10:48.957448 | orchestrator | Wednesday 25 March 2026 04:10:32 +0000 (0:00:00.603) 0:00:00.603 ******* 2026-03-25 04:10:48.957456 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-03-25 04:10:48.957464 | orchestrator | 2026-03-25 04:10:48.957472 | orchestrator | TASK [Create report output directory] ****************************************** 2026-03-25 04:10:48.957480 | orchestrator | Wednesday 25 March 2026 04:10:33 +0000 (0:00:00.960) 0:00:01.563 ******* 2026-03-25 04:10:48.957488 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-03-25 04:10:48.957496 | orchestrator | 2026-03-25 04:10:48.957504 | orchestrator | TASK [Define report vars] ****************************************************** 2026-03-25 04:10:48.957511 | orchestrator | Wednesday 25 March 2026 04:10:34 +0000 (0:00:01.119) 0:00:02.683 ******* 2026-03-25 04:10:48.957519 | orchestrator | ok: [testbed-node-0] 2026-03-25 04:10:48.957527 | orchestrator | 2026-03-25 04:10:48.957535 | orchestrator | TASK [Prepare test data for container existance test] ************************** 2026-03-25 04:10:48.957543 | orchestrator | Wednesday 25 March 2026 04:10:34 +0000 (0:00:00.132) 0:00:02.815 ******* 2026-03-25 04:10:48.957550 | orchestrator | ok: [testbed-node-0] 2026-03-25 04:10:48.957558 | orchestrator | ok: [testbed-node-1] 2026-03-25 04:10:48.957566 | orchestrator | ok: [testbed-node-2] 2026-03-25 04:10:48.957594 | orchestrator | 2026-03-25 04:10:48.957602 | orchestrator | TASK [Get container 
info] ****************************************************** 2026-03-25 04:10:48.957610 | orchestrator | Wednesday 25 March 2026 04:10:35 +0000 (0:00:00.335) 0:00:03.151 ******* 2026-03-25 04:10:48.957617 | orchestrator | ok: [testbed-node-1] 2026-03-25 04:10:48.957625 | orchestrator | ok: [testbed-node-0] 2026-03-25 04:10:48.957633 | orchestrator | ok: [testbed-node-2] 2026-03-25 04:10:48.957640 | orchestrator | 2026-03-25 04:10:48.957648 | orchestrator | TASK [Set test result to failed if container is missing] *********************** 2026-03-25 04:10:48.957656 | orchestrator | Wednesday 25 March 2026 04:10:36 +0000 (0:00:01.092) 0:00:04.243 ******* 2026-03-25 04:10:48.957664 | orchestrator | skipping: [testbed-node-0] 2026-03-25 04:10:48.957671 | orchestrator | skipping: [testbed-node-1] 2026-03-25 04:10:48.957679 | orchestrator | skipping: [testbed-node-2] 2026-03-25 04:10:48.957685 | orchestrator | 2026-03-25 04:10:48.957692 | orchestrator | TASK [Set test result to passed if container is existing] ********************** 2026-03-25 04:10:48.957699 | orchestrator | Wednesday 25 March 2026 04:10:36 +0000 (0:00:00.418) 0:00:04.662 ******* 2026-03-25 04:10:48.957707 | orchestrator | ok: [testbed-node-0] 2026-03-25 04:10:48.957714 | orchestrator | ok: [testbed-node-1] 2026-03-25 04:10:48.957721 | orchestrator | ok: [testbed-node-2] 2026-03-25 04:10:48.957727 | orchestrator | 2026-03-25 04:10:48.957734 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-03-25 04:10:48.957741 | orchestrator | Wednesday 25 March 2026 04:10:37 +0000 (0:00:00.640) 0:00:05.302 ******* 2026-03-25 04:10:48.957748 | orchestrator | ok: [testbed-node-0] 2026-03-25 04:10:48.957754 | orchestrator | ok: [testbed-node-1] 2026-03-25 04:10:48.957761 | orchestrator | ok: [testbed-node-2] 2026-03-25 04:10:48.957768 | orchestrator | 2026-03-25 04:10:48.957775 | orchestrator | TASK [Set test result to failed if ceph-mgr is not running] 
******************** 2026-03-25 04:10:48.957781 | orchestrator | Wednesday 25 March 2026 04:10:37 +0000 (0:00:00.352) 0:00:05.655 ******* 2026-03-25 04:10:48.957788 | orchestrator | skipping: [testbed-node-0] 2026-03-25 04:10:48.957795 | orchestrator | skipping: [testbed-node-1] 2026-03-25 04:10:48.957801 | orchestrator | skipping: [testbed-node-2] 2026-03-25 04:10:48.957808 | orchestrator | 2026-03-25 04:10:48.957814 | orchestrator | TASK [Set test result to passed if ceph-mgr is running] ************************ 2026-03-25 04:10:48.957821 | orchestrator | Wednesday 25 March 2026 04:10:37 +0000 (0:00:00.307) 0:00:05.962 ******* 2026-03-25 04:10:48.957828 | orchestrator | ok: [testbed-node-0] 2026-03-25 04:10:48.957835 | orchestrator | ok: [testbed-node-1] 2026-03-25 04:10:48.957843 | orchestrator | ok: [testbed-node-2] 2026-03-25 04:10:48.957850 | orchestrator | 2026-03-25 04:10:48.957857 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-03-25 04:10:48.957864 | orchestrator | Wednesday 25 March 2026 04:10:38 +0000 (0:00:00.552) 0:00:06.515 ******* 2026-03-25 04:10:48.957872 | orchestrator | skipping: [testbed-node-0] 2026-03-25 04:10:48.957879 | orchestrator | 2026-03-25 04:10:48.957887 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-03-25 04:10:48.957894 | orchestrator | Wednesday 25 March 2026 04:10:38 +0000 (0:00:00.274) 0:00:06.789 ******* 2026-03-25 04:10:48.957901 | orchestrator | skipping: [testbed-node-0] 2026-03-25 04:10:48.957907 | orchestrator | 2026-03-25 04:10:48.957913 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-03-25 04:10:48.957920 | orchestrator | Wednesday 25 March 2026 04:10:39 +0000 (0:00:00.280) 0:00:07.069 ******* 2026-03-25 04:10:48.957927 | orchestrator | skipping: [testbed-node-0] 2026-03-25 04:10:48.957934 | orchestrator | 2026-03-25 04:10:48.957942 | orchestrator | TASK 
[Flush handlers] ********************************************************** 2026-03-25 04:10:48.957949 | orchestrator | Wednesday 25 March 2026 04:10:39 +0000 (0:00:00.275) 0:00:07.345 ******* 2026-03-25 04:10:48.957956 | orchestrator | 2026-03-25 04:10:48.957962 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-25 04:10:48.957969 | orchestrator | Wednesday 25 March 2026 04:10:39 +0000 (0:00:00.077) 0:00:07.423 ******* 2026-03-25 04:10:48.957986 | orchestrator | 2026-03-25 04:10:48.957994 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-25 04:10:48.958001 | orchestrator | Wednesday 25 March 2026 04:10:39 +0000 (0:00:00.084) 0:00:07.507 ******* 2026-03-25 04:10:48.958008 | orchestrator | 2026-03-25 04:10:48.958065 | orchestrator | TASK [Print report file information] ******************************************* 2026-03-25 04:10:48.958071 | orchestrator | Wednesday 25 March 2026 04:10:39 +0000 (0:00:00.080) 0:00:07.588 ******* 2026-03-25 04:10:48.958076 | orchestrator | skipping: [testbed-node-0] 2026-03-25 04:10:48.958080 | orchestrator | 2026-03-25 04:10:48.958085 | orchestrator | TASK [Fail due to missing containers] ****************************************** 2026-03-25 04:10:48.958089 | orchestrator | Wednesday 25 March 2026 04:10:39 +0000 (0:00:00.281) 0:00:07.869 ******* 2026-03-25 04:10:48.958093 | orchestrator | skipping: [testbed-node-0] 2026-03-25 04:10:48.958098 | orchestrator | 2026-03-25 04:10:48.958119 | orchestrator | TASK [Define mgr module test vars] ********************************************* 2026-03-25 04:10:48.958124 | orchestrator | Wednesday 25 March 2026 04:10:40 +0000 (0:00:00.269) 0:00:08.139 ******* 2026-03-25 04:10:48.958128 | orchestrator | ok: [testbed-node-0] 2026-03-25 04:10:48.958133 | orchestrator | 2026-03-25 04:10:48.958138 | orchestrator | TASK [Gather list of mgr modules] ********************************************** 
2026-03-25 04:10:48.958142 | orchestrator | Wednesday 25 March 2026 04:10:40 +0000 (0:00:00.127) 0:00:08.266 ******* 2026-03-25 04:10:48.958146 | orchestrator | changed: [testbed-node-0] 2026-03-25 04:10:48.958150 | orchestrator | 2026-03-25 04:10:48.958155 | orchestrator | TASK [Parse mgr module list from json] ***************************************** 2026-03-25 04:10:48.958159 | orchestrator | Wednesday 25 March 2026 04:10:42 +0000 (0:00:02.243) 0:00:10.510 ******* 2026-03-25 04:10:48.958163 | orchestrator | ok: [testbed-node-0] 2026-03-25 04:10:48.958167 | orchestrator | 2026-03-25 04:10:48.958172 | orchestrator | TASK [Extract list of enabled mgr modules] ************************************* 2026-03-25 04:10:48.958176 | orchestrator | Wednesday 25 March 2026 04:10:42 +0000 (0:00:00.491) 0:00:11.002 ******* 2026-03-25 04:10:48.958181 | orchestrator | ok: [testbed-node-0] 2026-03-25 04:10:48.958185 | orchestrator | 2026-03-25 04:10:48.958189 | orchestrator | TASK [Fail test if mgr modules are disabled that should be enabled] ************ 2026-03-25 04:10:48.958193 | orchestrator | Wednesday 25 March 2026 04:10:43 +0000 (0:00:00.393) 0:00:11.395 ******* 2026-03-25 04:10:48.958197 | orchestrator | skipping: [testbed-node-0] 2026-03-25 04:10:48.958201 | orchestrator | 2026-03-25 04:10:48.958204 | orchestrator | TASK [Pass test if required mgr modules are enabled] *************************** 2026-03-25 04:10:48.958208 | orchestrator | Wednesday 25 March 2026 04:10:43 +0000 (0:00:00.204) 0:00:11.599 ******* 2026-03-25 04:10:48.958212 | orchestrator | ok: [testbed-node-0] 2026-03-25 04:10:48.958216 | orchestrator | 2026-03-25 04:10:48.958219 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2026-03-25 04:10:48.958223 | orchestrator | Wednesday 25 March 2026 04:10:43 +0000 (0:00:00.161) 0:00:11.761 ******* 2026-03-25 04:10:48.958227 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-03-25 
04:10:48.958230 | orchestrator | 2026-03-25 04:10:48.958234 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2026-03-25 04:10:48.958238 | orchestrator | Wednesday 25 March 2026 04:10:43 +0000 (0:00:00.289) 0:00:12.051 ******* 2026-03-25 04:10:48.958241 | orchestrator | skipping: [testbed-node-0] 2026-03-25 04:10:48.958245 | orchestrator | 2026-03-25 04:10:48.958348 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-03-25 04:10:48.958357 | orchestrator | Wednesday 25 March 2026 04:10:44 +0000 (0:00:00.303) 0:00:12.355 ******* 2026-03-25 04:10:48.958361 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-03-25 04:10:48.958365 | orchestrator | 2026-03-25 04:10:48.958369 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-03-25 04:10:48.958373 | orchestrator | Wednesday 25 March 2026 04:10:45 +0000 (0:00:01.538) 0:00:13.893 ******* 2026-03-25 04:10:48.958376 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-03-25 04:10:48.958386 | orchestrator | 2026-03-25 04:10:48.958390 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-03-25 04:10:48.958393 | orchestrator | Wednesday 25 March 2026 04:10:46 +0000 (0:00:00.288) 0:00:14.181 ******* 2026-03-25 04:10:48.958397 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-03-25 04:10:48.958401 | orchestrator | 2026-03-25 04:10:48.958405 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-25 04:10:48.958409 | orchestrator | Wednesday 25 March 2026 04:10:46 +0000 (0:00:00.306) 0:00:14.488 ******* 2026-03-25 04:10:48.958412 | orchestrator | 2026-03-25 04:10:48.958416 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-25 04:10:48.958420 | orchestrator 
| Wednesday 25 March 2026 04:10:46 +0000 (0:00:00.106) 0:00:14.595 ******* 2026-03-25 04:10:48.958424 | orchestrator | 2026-03-25 04:10:48.958427 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-25 04:10:48.958431 | orchestrator | Wednesday 25 March 2026 04:10:46 +0000 (0:00:00.097) 0:00:14.692 ******* 2026-03-25 04:10:48.958435 | orchestrator | 2026-03-25 04:10:48.958438 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2026-03-25 04:10:48.958442 | orchestrator | Wednesday 25 March 2026 04:10:46 +0000 (0:00:00.326) 0:00:15.019 ******* 2026-03-25 04:10:48.958446 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-03-25 04:10:48.958449 | orchestrator | 2026-03-25 04:10:48.958453 | orchestrator | TASK [Print report file information] ******************************************* 2026-03-25 04:10:48.958460 | orchestrator | Wednesday 25 March 2026 04:10:48 +0000 (0:00:01.491) 0:00:16.511 ******* 2026-03-25 04:10:48.958464 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => { 2026-03-25 04:10:48.958467 | orchestrator |  "msg": [ 2026-03-25 04:10:48.958472 | orchestrator |  "Validator run completed.", 2026-03-25 04:10:48.958476 | orchestrator |  "You can find the report file here:", 2026-03-25 04:10:48.958480 | orchestrator |  "/opt/reports/validator/ceph-mgrs-validator-2026-03-25T04:10:33+00:00-report.json", 2026-03-25 04:10:48.958485 | orchestrator |  "on the following host:", 2026-03-25 04:10:48.958488 | orchestrator |  "testbed-manager" 2026-03-25 04:10:48.958492 | orchestrator |  ] 2026-03-25 04:10:48.958496 | orchestrator | } 2026-03-25 04:10:48.958500 | orchestrator | 2026-03-25 04:10:48.958504 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-25 04:10:48.958509 | orchestrator | testbed-node-0 : ok=19  changed=3  unreachable=0 failed=0 skipped=9  rescued=0 
ignored=0 2026-03-25 04:10:48.958514 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-25 04:10:48.958523 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-25 04:10:49.408560 | orchestrator | 2026-03-25 04:10:49.408647 | orchestrator | 2026-03-25 04:10:49.408657 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-25 04:10:49.408665 | orchestrator | Wednesday 25 March 2026 04:10:48 +0000 (0:00:00.481) 0:00:16.992 ******* 2026-03-25 04:10:49.408672 | orchestrator | =============================================================================== 2026-03-25 04:10:49.408678 | orchestrator | Gather list of mgr modules ---------------------------------------------- 2.24s 2026-03-25 04:10:49.408685 | orchestrator | Aggregate test results step one ----------------------------------------- 1.54s 2026-03-25 04:10:49.408691 | orchestrator | Write report file ------------------------------------------------------- 1.49s 2026-03-25 04:10:49.408697 | orchestrator | Create report output directory ------------------------------------------ 1.12s 2026-03-25 04:10:49.408704 | orchestrator | Get container info ------------------------------------------------------ 1.09s 2026-03-25 04:10:49.408710 | orchestrator | Get timestamp for report file ------------------------------------------- 0.96s 2026-03-25 04:10:49.408736 | orchestrator | Set test result to passed if container is existing ---------------------- 0.64s 2026-03-25 04:10:49.408742 | orchestrator | Set test result to passed if ceph-mgr is running ------------------------ 0.55s 2026-03-25 04:10:49.408749 | orchestrator | Flush handlers ---------------------------------------------------------- 0.53s 2026-03-25 04:10:49.408755 | orchestrator | Parse mgr module list from json ----------------------------------------- 0.49s 2026-03-25 04:10:49.408761 | 
orchestrator | Print report file information ------------------------------------------- 0.48s 2026-03-25 04:10:49.408767 | orchestrator | Set test result to failed if container is missing ----------------------- 0.42s 2026-03-25 04:10:49.408773 | orchestrator | Extract list of enabled mgr modules ------------------------------------- 0.39s 2026-03-25 04:10:49.408779 | orchestrator | Prepare test data ------------------------------------------------------- 0.35s 2026-03-25 04:10:49.408785 | orchestrator | Prepare test data for container existance test -------------------------- 0.34s 2026-03-25 04:10:49.408791 | orchestrator | Set test result to failed if ceph-mgr is not running -------------------- 0.31s 2026-03-25 04:10:49.408798 | orchestrator | Aggregate test results step three --------------------------------------- 0.31s 2026-03-25 04:10:49.408804 | orchestrator | Set validation result to failed if a test failed ------------------------ 0.30s 2026-03-25 04:10:49.408810 | orchestrator | Set validation result to passed if no test failed ----------------------- 0.29s 2026-03-25 04:10:49.408816 | orchestrator | Aggregate test results step two ----------------------------------------- 0.29s 2026-03-25 04:10:49.829116 | orchestrator | + osism validate ceph-osds 2026-03-25 04:11:13.031487 | orchestrator | 2026-03-25 04:11:13.031579 | orchestrator | PLAY [Ceph validate OSDs] ****************************************************** 2026-03-25 04:11:13.031591 | orchestrator | 2026-03-25 04:11:13.031598 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2026-03-25 04:11:13.031606 | orchestrator | Wednesday 25 March 2026 04:11:07 +0000 (0:00:00.496) 0:00:00.496 ******* 2026-03-25 04:11:13.031614 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-03-25 04:11:13.031621 | orchestrator | 2026-03-25 04:11:13.031628 | orchestrator | TASK [Get extra vars for Ceph configuration] 
*********************************** 2026-03-25 04:11:13.031635 | orchestrator | Wednesday 25 March 2026 04:11:08 +0000 (0:00:00.977) 0:00:01.474 ******* 2026-03-25 04:11:13.031641 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-03-25 04:11:13.031648 | orchestrator | 2026-03-25 04:11:13.031655 | orchestrator | TASK [Create report output directory] ****************************************** 2026-03-25 04:11:13.031661 | orchestrator | Wednesday 25 March 2026 04:11:09 +0000 (0:00:00.602) 0:00:02.076 ******* 2026-03-25 04:11:13.031668 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-03-25 04:11:13.031674 | orchestrator | 2026-03-25 04:11:13.031681 | orchestrator | TASK [Define report vars] ****************************************************** 2026-03-25 04:11:13.031687 | orchestrator | Wednesday 25 March 2026 04:11:10 +0000 (0:00:00.816) 0:00:02.893 ******* 2026-03-25 04:11:13.031694 | orchestrator | ok: [testbed-node-3] 2026-03-25 04:11:13.031703 | orchestrator | 2026-03-25 04:11:13.031710 | orchestrator | TASK [Define OSD test variables] *********************************************** 2026-03-25 04:11:13.031716 | orchestrator | Wednesday 25 March 2026 04:11:10 +0000 (0:00:00.162) 0:00:03.056 ******* 2026-03-25 04:11:13.031723 | orchestrator | skipping: [testbed-node-3] 2026-03-25 04:11:13.031730 | orchestrator | 2026-03-25 04:11:13.031749 | orchestrator | TASK [Calculate OSD devices for each host] ************************************* 2026-03-25 04:11:13.031756 | orchestrator | Wednesday 25 March 2026 04:11:10 +0000 (0:00:00.151) 0:00:03.207 ******* 2026-03-25 04:11:13.031762 | orchestrator | skipping: [testbed-node-3] 2026-03-25 04:11:13.031769 | orchestrator | skipping: [testbed-node-4] 2026-03-25 04:11:13.031776 | orchestrator | skipping: [testbed-node-5] 2026-03-25 04:11:13.031782 | orchestrator | 2026-03-25 04:11:13.031789 | orchestrator | TASK [Define OSD test variables] 
*********************************************** 2026-03-25 04:11:13.031796 | orchestrator | Wednesday 25 March 2026 04:11:10 +0000 (0:00:00.357) 0:00:03.564 ******* 2026-03-25 04:11:13.031822 | orchestrator | ok: [testbed-node-3] 2026-03-25 04:11:13.031829 | orchestrator | 2026-03-25 04:11:13.031835 | orchestrator | TASK [Calculate OSD devices for each host] ************************************* 2026-03-25 04:11:13.031842 | orchestrator | Wednesday 25 March 2026 04:11:11 +0000 (0:00:00.158) 0:00:03.723 ******* 2026-03-25 04:11:13.031848 | orchestrator | ok: [testbed-node-3] 2026-03-25 04:11:13.031869 | orchestrator | ok: [testbed-node-4] 2026-03-25 04:11:13.031883 | orchestrator | ok: [testbed-node-5] 2026-03-25 04:11:13.031890 | orchestrator | 2026-03-25 04:11:13.031897 | orchestrator | TASK [Calculate total number of OSDs in cluster] ******************************* 2026-03-25 04:11:13.031903 | orchestrator | Wednesday 25 March 2026 04:11:11 +0000 (0:00:00.360) 0:00:04.084 ******* 2026-03-25 04:11:13.031910 | orchestrator | ok: [testbed-node-3] 2026-03-25 04:11:13.031916 | orchestrator | 2026-03-25 04:11:13.031923 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-03-25 04:11:13.031930 | orchestrator | Wednesday 25 March 2026 04:11:12 +0000 (0:00:00.942) 0:00:05.027 ******* 2026-03-25 04:11:13.031936 | orchestrator | ok: [testbed-node-3] 2026-03-25 04:11:13.031943 | orchestrator | ok: [testbed-node-4] 2026-03-25 04:11:13.031950 | orchestrator | ok: [testbed-node-5] 2026-03-25 04:11:13.031956 | orchestrator | 2026-03-25 04:11:13.031963 | orchestrator | TASK [Get list of ceph-osd containers on host] ********************************* 2026-03-25 04:11:13.031969 | orchestrator | Wednesday 25 March 2026 04:11:12 +0000 (0:00:00.335) 0:00:05.362 ******* 2026-03-25 04:11:13.031978 | orchestrator | skipping: [testbed-node-3] => (item={'id': '5dd306252e18ca85c07ee4b4fc6eb312706ea6a8c80162e59dc80cf8666fa3fe', 'image': 
'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 9 minutes'})
2026-03-25 04:11:13.031988 | orchestrator | skipping: [testbed-node-3] => (item={'id': '64c07f88e711858e625f513ee3539285431621ee5d4816480ebb0f61aa19eea8', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 10 minutes'})
2026-03-25 04:11:13.031996 | orchestrator | skipping: [testbed-node-3] => (item={'id': '54ea93c94c8a8227c1d5841bbc30d13cf0a78975f48fa1bd29a18666708a4e21', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 11 minutes'})
2026-03-25 04:11:13.032003 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'e8d6ef825d32d4a672bfd8ee63f4fcfbd1ab17c14a1130e97b18519a6083a62c', 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'name': '/ceilometer_compute', 'state': 'running', 'status': 'Up 20 minutes (unhealthy)'})
2026-03-25 04:11:13.032010 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'a7d7a9cd07c0c7516e84861732f3b9799dabff74ca525665e5120545ff3f6ba1', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 41 minutes (healthy)'})
2026-03-25 04:11:13.032034 | orchestrator | skipping: [testbed-node-3] => (item={'id': '060591dd7c936bb2dc942c22f6b2501ff5678f018fe8a6a540e0197fb23f9186', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 42 minutes (healthy)'})
2026-03-25 04:11:13.032044 | orchestrator | skipping: [testbed-node-3] => (item={'id': '1ebd2e6a29a763b752d6332badb7a7c80d7571531f7550e44e78e8270b686c5d', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 43 minutes (healthy)'})
2026-03-25 04:11:13.032052 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'a489e717b277f99471004f6b9994af3b15bc7394d515026742dd5d2da2c96594', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 49 minutes (healthy)'})
2026-03-25 04:11:13.032060 | orchestrator | skipping: [testbed-node-3] => (item={'id': '898697033c554404d2c1203ecfd3e39e9236540f20d2f3e1241d17f2f2636394', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-3-rgw0', 'state': 'running', 'status': 'Up About an hour'})
2026-03-25 04:11:13.032079 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'f71051aff7a7117d7959df7285071bc4d7402d9b53bae68980df7d9857329537', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-3', 'state': 'running', 'status': 'Up About an hour'})
2026-03-25 04:11:13.032088 | orchestrator | skipping: [testbed-node-3] => (item={'id': '5c604221ff6ee63dd6c16301814462ca76f0bd0c039be3410e5bbc3242e9dc6e', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-3', 'state': 'running', 'status': 'Up About an hour'})
2026-03-25 04:11:13.032099 | orchestrator | ok: [testbed-node-3] => (item={'id': 'b8c031e76befe0a75e949a09cf97e1e7e9c165416f1d52802c40731e736bf65e', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-0', 'state': 'running', 'status': 'Up About an hour'})
2026-03-25 04:11:13.032107 | orchestrator | ok: [testbed-node-3] => (item={'id': '03dedfaaeab9472e4d689fbca9eff1b7c927e843b31bb2c14b9a31469e6cb409', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-5', 'state': 'running', 'status': 'Up About an hour'})
2026-03-25 04:11:13.032115 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'bbc1d91a646c45c3e14bf3d420dfd29761fefbe393b83288bd78d82521ee1b0e', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up About an hour'})
2026-03-25 04:11:13.032124 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'a3f27577af8ade449eb433b596908d8a12db1e38042ddd3a83722f9341c49479', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up About an hour (healthy)'})
2026-03-25 04:11:13.032132 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'd41ce562d1327d0ac9a0abcca6fbf8df0e81ed8b4ac8920d757841d4d7d51798', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up About an hour (healthy)'})
2026-03-25 04:11:13.032140 | orchestrator | skipping: [testbed-node-3] => (item={'id': '63fdb9e4f5cd342dd9e19894852c0acf1287f09e891c1617c7b06a9f157d5f5d', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'name': '/cron', 'state': 'running', 'status': 'Up 2 hours'})
2026-03-25 04:11:13.032148 | orchestrator | skipping: [testbed-node-3] => (item={'id': '1ea0d68d8edd3b38a7a2c9f4819bfa240d556b80230327af8aa7fbb15751d508', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 2 hours'})
2026-03-25 04:11:13.032156 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'e76f513cb24f788c29dc0b424a9d9569adf51ee154724747799592ca54afb30a', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'name': '/fluentd', 'state': 'running', 'status': 'Up 2 hours'})
2026-03-25 04:11:13.032165 | orchestrator | skipping: [testbed-node-4] => (item={'id': '4910ba0ac36e702da6a01ec58e0f431a12ed551074430870a6c783228e59c3a9', 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 9 minutes'})
2026-03-25 04:11:13.032179 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'f0439a52340c62571f7e16da544d0664c273dad3d7b814b23091d1bcf07175f9', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 10 minutes'})
2026-03-25 04:11:13.301699 | orchestrator | skipping: [testbed-node-4] => (item={'id': '5c0cb0b3949fd9eee0801590eba3d448dbd88f59769156784afc8e453494b458', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 11 minutes'})
2026-03-25 04:11:13.301798 | orchestrator | skipping: [testbed-node-4] => (item={'id': '1b91a6face3fbdaaae5699461327df91f9c3aea23faf509dac474df6719d9418', 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'name': '/ceilometer_compute', 'state': 'running', 'status': 'Up 20 minutes (unhealthy)'})
2026-03-25 04:11:13.301809 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'acb9e4d5c2b974b6a01aff2cd3a713365afab17f316b850c714c468bf7a07fc8', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 41 minutes (healthy)'})
2026-03-25 04:11:13.301818 | orchestrator | skipping: [testbed-node-4] => (item={'id': '01d5b4f4e0d419605ae526f35a35b5935c50f88e60caa130cfecb54e64ab9117', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 42 minutes (healthy)'})
2026-03-25 04:11:13.301825 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'bc997ff8b3ecd276f3fd169f371828c21aa5b968e62858555100c1b0d256b638', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 43 minutes (healthy)'})
2026-03-25 04:11:13.301868 | orchestrator | skipping: [testbed-node-4] => (item={'id': '0c43b64814effe5eee0e7689e11c9be1c131c07720c4aa92696fd7aa1d9a5451', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 49 minutes (healthy)'})
2026-03-25 04:11:13.301876 | orchestrator | skipping: [testbed-node-4] => (item={'id': '35f9a027b213d8d27c98fa431d6e462fcd417966fe65c6ac03997b316744c7e0', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-4-rgw0', 'state': 'running', 'status': 'Up About an hour'})
2026-03-25 04:11:13.301884 | orchestrator | skipping: [testbed-node-4] => (item={'id': '2a6d01e52e4f66c38b4741a7c91dc9f7dd3af3ee8f429a4e49a442ac3b76f9b1', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-4', 'state': 'running', 'status': 'Up About an hour'})
2026-03-25 04:11:13.301891 | orchestrator | skipping: [testbed-node-4] => (item={'id': '7ef4aec17bad605e12ac9c66b7a961bcc00b9f7d3f053886240e1c9c5a4f776d', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-4', 'state': 'running', 'status': 'Up About an hour'})
2026-03-25 04:11:13.301899 | orchestrator | ok: [testbed-node-4] => (item={'id': '586aeb002472f1e81eb8d3cebceee20ce105197eb79d202b61df0664695ddec2', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-1', 'state': 'running', 'status': 'Up About an hour'})
2026-03-25 04:11:13.301906 | orchestrator | ok: [testbed-node-4] => (item={'id': '787ee6c704e3198822f42bdbe9c8de7034c9aae508343ab46a9705079d4208dd', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-4', 'state': 'running', 'status': 'Up About an hour'})
2026-03-25 04:11:13.301912 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'df828d8d5844c346461118781f740980c7f74cbcc43324939558e339c534b912', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up About an hour'})
2026-03-25 04:11:13.301919 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'c6b554982d3dacc186fe25353ae8143030c38bee75152e273131c48859f6d0c2', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up About an hour (healthy)'})
2026-03-25 04:11:13.301925 | orchestrator | skipping: [testbed-node-4] => (item={'id': '654831843c43037448fb628089585fe6b0e74ca25c78e1810c2b098df39b7663', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up About an hour (healthy)'})
2026-03-25 04:11:13.301950 | orchestrator | skipping: [testbed-node-4] => (item={'id': '38adf0507ee16aeacf0b362297e09f67f39fe750abb8ce8f7c8cdbaeab13dd28', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'name': '/cron', 'state': 'running', 'status': 'Up 2 hours'})
2026-03-25 04:11:13.301957 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'e4d1a7bb45b23a105d14f64c2ddc02e904156d4eeceba44acc3e632b7a35d415', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 2 hours'})
2026-03-25 04:11:13.301963 | orchestrator | skipping: [testbed-node-4] => (item={'id': '05cd06b17df7e753c0f6e8125691f7eaa471a90e8dcd8c94a1449844132b3aea', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'name': '/fluentd', 'state': 'running', 'status': 'Up 2 hours'})
2026-03-25 04:11:13.301969 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'b6be535793abfe5a12c8a31293b8e4ae341341d926af2e5cd6a2791925518d35', 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 9 minutes'})
2026-03-25 04:11:13.301979 | orchestrator | skipping: [testbed-node-5] => (item={'id': '93f0ca5d91d7aa0fe892e5fa0f5dc0e853f3e0ed9e9da5345cea99a7ca2641af', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 10 minutes'})
2026-03-25 04:11:13.301986 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'e49ed453048de9c6603728a91062b490504d8f112e94d05469a91556c8857989', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 11 minutes'})
2026-03-25 04:11:13.301992 | orchestrator | skipping: [testbed-node-5] => (item={'id': '5f457aa55d24cb5f13b4b37c5de6c92f71d8df290cc1c8f3824c368d5ae55eb9', 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'name': '/ceilometer_compute', 'state': 'running', 'status': 'Up 21 minutes (unhealthy)'})
2026-03-25 04:11:13.301998 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'f3b502fc41004ddbbfd4beabd2ce6f4af68d5f260758f7bfb4e379c522467977', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 41 minutes (healthy)'})
2026-03-25 04:11:13.302005 | orchestrator | skipping: [testbed-node-5] => (item={'id': '4815d9d0400d66ef2647e7d17ea4183a450b987e28c60bd06f44030e6f0817ae', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 42 minutes (healthy)'})
2026-03-25 04:11:13.302011 | orchestrator | skipping: [testbed-node-5] => (item={'id': '5b53e60b1495c8c4908c9983c1b9fca9937b6bd188316fb50ee7ce4b823ed6e4', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 43 minutes (healthy)'})
2026-03-25 04:11:13.302059 | orchestrator | skipping: [testbed-node-5] => (item={'id': '411e8f50fb75fd4ea2873179fdcd332781765aef37e3db7196b676ff2c37761e', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 49 minutes (healthy)'})
2026-03-25 04:11:13.302067 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'a7734776fadcf2c5226c3f37865926ade045af6109e947680326f21e5197d7c1', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-5-rgw0', 'state': 'running', 'status': 'Up About an hour'})
2026-03-25 04:11:13.302073 | orchestrator | skipping: [testbed-node-5] => (item={'id': '32536e3c1e078f50f21c1454334db4ac6cd4bccf36c3d6a1d3ca69d447f5d8b3', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-5', 'state': 'running', 'status': 'Up About an hour'})
2026-03-25 04:11:13.302080 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'f33f3e4b6ac249bac1bf91fa32fada2e063c3f97f737662b18bbb4baa1e83091', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-5', 'state': 'running', 'status': 'Up About an hour'})
2026-03-25 04:11:13.302092 | orchestrator | ok: [testbed-node-5] => (item={'id': 'b8fceebc1a86ab1787b1d3f0a538c9ca224183dbe24700b9a23d414ec9b77d75', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-2', 'state': 'running', 'status': 'Up About an hour'})
2026-03-25 04:11:13.302105 | orchestrator | ok: [testbed-node-5] => (item={'id': '6f6e0f656ccd44e37963a36dc4f3e23f6d1cabba8c81ac1e9c3dda556db2ac50', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-3', 'state': 'running', 'status': 'Up About an hour'})
2026-03-25 04:11:25.566614 | orchestrator | skipping: [testbed-node-5] => (item={'id': 
'09db34f67f2b678e5c5e0a2a6b1aed5900f1c3a3cd7795d788752490f12a0977', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up About an hour'})
2026-03-25 04:11:25.566702 | orchestrator | skipping: [testbed-node-5] => (item={'id': '4d65a91318e8d21714ab108b3ee774161e1ced4cb0baea891ed599940ce68924', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up About an hour (healthy)'})
2026-03-25 04:11:25.566711 | orchestrator | skipping: [testbed-node-5] => (item={'id': '4c526cbdcbba0406c05c8f917dc816465caaf56ff10c571984310d5e5bc7db1d', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up About an hour (healthy)'})
2026-03-25 04:11:25.566728 | orchestrator | skipping: [testbed-node-5] => (item={'id': '4cf20b02e7df6cf8e8323d8471c3ec9e0d2bd7d596ad7bfd198dac068d418855', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'name': '/cron', 'state': 'running', 'status': 'Up 2 hours'})
2026-03-25 04:11:25.566734 | orchestrator | skipping: [testbed-node-5] => (item={'id': '5177e7bd87715546529fe202db5c3681bb926530f1dc3ff31cd6062e3cb80b82', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 2 hours'})
2026-03-25 04:11:25.566738 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'f2e09e10cc68ce0623926cdc3b5b5f802ab921dc064c32d300560a672e3a408b', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'name': '/fluentd', 'state': 'running', 'status': 'Up 2 hours'})
2026-03-25 04:11:25.566742 | orchestrator | 
2026-03-25 04:11:25.566748 | orchestrator | TASK [Get count of ceph-osd containers on host] ********************************
2026-03-25 04:11:25.566760 | orchestrator | Wednesday 25 March 2026 04:11:13 +0000 (0:00:00.583) 0:00:05.946 *******
2026-03-25 04:11:25.566764 | orchestrator | ok: [testbed-node-3]
2026-03-25 04:11:25.566768 | orchestrator | ok: [testbed-node-4]
2026-03-25 04:11:25.566772 | orchestrator | ok: [testbed-node-5]
2026-03-25 04:11:25.566776 | orchestrator | 
2026-03-25 04:11:25.566780 | orchestrator | TASK [Set test result to failed when count of containers is wrong] *************
2026-03-25 04:11:25.566783 | orchestrator | Wednesday 25 March 2026 04:11:13 +0000 (0:00:00.333) 0:00:06.279 *******
2026-03-25 04:11:25.566787 | orchestrator | skipping: [testbed-node-3]
2026-03-25 04:11:25.566795 | orchestrator | skipping: [testbed-node-4]
2026-03-25 04:11:25.566801 | orchestrator | skipping: [testbed-node-5]
2026-03-25 04:11:25.566808 | orchestrator | 
2026-03-25 04:11:25.566815 | orchestrator | TASK [Set test result to passed if count matches] ******************************
2026-03-25 04:11:25.566821 | orchestrator | Wednesday 25 March 2026 04:11:14 +0000 (0:00:00.557) 0:00:06.837 *******
2026-03-25 04:11:25.566829 | orchestrator | ok: [testbed-node-3]
2026-03-25 04:11:25.566836 | orchestrator | ok: [testbed-node-4]
2026-03-25 04:11:25.566842 | orchestrator | ok: [testbed-node-5]
2026-03-25 04:11:25.566848 | orchestrator | 
2026-03-25 04:11:25.566854 | orchestrator | TASK [Prepare test data] *******************************************************
2026-03-25 04:11:25.566861 | orchestrator | Wednesday 25 March 2026 04:11:14 +0000 (0:00:00.371) 0:00:07.208 *******
2026-03-25 04:11:25.566889 | orchestrator | ok: [testbed-node-3]
2026-03-25 04:11:25.566895 | orchestrator | ok: [testbed-node-4]
2026-03-25 04:11:25.566902 | orchestrator | ok: [testbed-node-5]
2026-03-25 04:11:25.566908 | orchestrator | 
2026-03-25 04:11:25.566915 | orchestrator | TASK [Get list of ceph-osd containers that are not running] ********************
2026-03-25 04:11:25.566922 | orchestrator | Wednesday 25 March 2026 04:11:14 +0000 (0:00:00.344) 0:00:07.553 *******
2026-03-25 04:11:25.566928 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-0', 'osd_id': '0', 'state': 'running'})
2026-03-25 04:11:25.566936 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-5', 'osd_id': '5', 'state': 'running'})
2026-03-25 04:11:25.566942 | orchestrator | skipping: [testbed-node-3]
2026-03-25 04:11:25.566949 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-1', 'osd_id': '1', 'state': 'running'})
2026-03-25 04:11:25.566955 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-4', 'osd_id': '4', 'state': 'running'})
2026-03-25 04:11:25.566962 | orchestrator | skipping: [testbed-node-4]
2026-03-25 04:11:25.566968 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-2', 'osd_id': '2', 'state': 'running'})
2026-03-25 04:11:25.566975 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-3', 'osd_id': '3', 'state': 'running'})
2026-03-25 04:11:25.566981 | orchestrator | skipping: [testbed-node-5]
2026-03-25 04:11:25.566988 | orchestrator | 
2026-03-25 04:11:25.566996 | orchestrator | TASK [Get count of ceph-osd containers that are not running] *******************
2026-03-25 04:11:25.567003 | orchestrator | Wednesday 25 March 2026 04:11:15 +0000 (0:00:00.404) 0:00:07.957 *******
2026-03-25 04:11:25.567010 | orchestrator | ok: [testbed-node-3]
2026-03-25 04:11:25.567018 | orchestrator | ok: [testbed-node-4]
2026-03-25 04:11:25.567022 | orchestrator | ok: [testbed-node-5]
2026-03-25 04:11:25.567025 | orchestrator | 
2026-03-25 04:11:25.567029 | orchestrator | TASK [Set test result to failed if an OSD is not running] **********************
2026-03-25 04:11:25.567033 | orchestrator | Wednesday 25 March 2026 04:11:15 +0000 (0:00:00.575) 0:00:08.533 *******
2026-03-25 04:11:25.567037 | orchestrator | skipping: [testbed-node-3]
2026-03-25 04:11:25.567052 | orchestrator | skipping: [testbed-node-4]
2026-03-25 04:11:25.567056 | orchestrator | skipping: [testbed-node-5]
2026-03-25 04:11:25.567060 | orchestrator | 
2026-03-25 04:11:25.567064 | orchestrator | TASK [Set test result to failed if an OSD is not running] **********************
2026-03-25 04:11:25.567068 | orchestrator | Wednesday 25 March 2026 04:11:16 +0000 (0:00:00.323) 0:00:08.856 *******
2026-03-25 04:11:25.567072 | orchestrator | skipping: [testbed-node-3]
2026-03-25 04:11:25.567076 | orchestrator | skipping: [testbed-node-4]
2026-03-25 04:11:25.567080 | orchestrator | skipping: [testbed-node-5]
2026-03-25 04:11:25.567083 | orchestrator | 
2026-03-25 04:11:25.567087 | orchestrator | TASK [Set test result to passed if all containers are running] *****************
2026-03-25 04:11:25.567091 | orchestrator | Wednesday 25 March 2026 04:11:16 +0000 (0:00:00.331) 0:00:09.187 *******
2026-03-25 04:11:25.567094 | orchestrator | ok: [testbed-node-3]
2026-03-25 04:11:25.567098 | orchestrator | ok: [testbed-node-4]
2026-03-25 04:11:25.567102 | orchestrator | ok: [testbed-node-5]
2026-03-25 04:11:25.567106 | orchestrator | 
2026-03-25 04:11:25.567109 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-03-25 04:11:25.567113 | orchestrator | Wednesday 25 March 2026 04:11:16 +0000 (0:00:00.379) 0:00:09.567 *******
2026-03-25 04:11:25.567117 | orchestrator | skipping: [testbed-node-3]
2026-03-25 04:11:25.567121 | orchestrator | 
2026-03-25 04:11:25.567124 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-03-25 04:11:25.567133 | orchestrator | Wednesday 25 March 2026 04:11:17 +0000 (0:00:00.798) 0:00:10.366 *******
2026-03-25 04:11:25.567139 | orchestrator | skipping: [testbed-node-3]
2026-03-25 04:11:25.567145 | orchestrator | 
2026-03-25 04:11:25.567150 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-03-25 04:11:25.567155 | orchestrator | Wednesday 25 March 2026 04:11:17 +0000 (0:00:00.255) 0:00:10.621 *******
2026-03-25 04:11:25.567169 | orchestrator | skipping: [testbed-node-3]
2026-03-25 04:11:25.567178 | orchestrator | 
2026-03-25 04:11:25.567184 | orchestrator | TASK [Flush handlers] **********************************************************
2026-03-25 04:11:25.567189 | orchestrator | Wednesday 25 March 2026 04:11:18 +0000 (0:00:00.311) 0:00:10.932 *******
2026-03-25 04:11:25.567196 | orchestrator | 
2026-03-25 04:11:25.567202 | orchestrator | TASK [Flush handlers] **********************************************************
2026-03-25 04:11:25.567208 | orchestrator | Wednesday 25 March 2026 04:11:18 +0000 (0:00:00.087) 0:00:11.020 *******
2026-03-25 04:11:25.567214 | orchestrator | 
2026-03-25 04:11:25.567220 | orchestrator | TASK [Flush handlers] **********************************************************
2026-03-25 04:11:25.567242 | orchestrator | Wednesday 25 March 2026 04:11:18 +0000 (0:00:00.076) 0:00:11.096 *******
2026-03-25 04:11:25.567248 | orchestrator | 
2026-03-25 04:11:25.567254 | orchestrator | TASK [Print report file information] *******************************************
2026-03-25 04:11:25.567261 | orchestrator | Wednesday 25 March 2026 04:11:18 +0000 (0:00:00.082) 0:00:11.179 *******
2026-03-25 04:11:25.567267 | orchestrator | skipping: [testbed-node-3]
2026-03-25 04:11:25.567274 | orchestrator | 
2026-03-25 04:11:25.567279 | orchestrator | TASK [Fail early due to containers not running] ********************************
2026-03-25 04:11:25.567284 | orchestrator | Wednesday 25 March 2026 04:11:18 +0000 (0:00:00.267) 0:00:11.446 *******
2026-03-25 04:11:25.567290 | orchestrator | skipping: [testbed-node-3]
2026-03-25 04:11:25.567297 | orchestrator | 
2026-03-25 04:11:25.567303 | orchestrator | TASK [Prepare test data] *******************************************************
2026-03-25 04:11:25.567308 | orchestrator | Wednesday 25 March 2026 04:11:19 +0000 (0:00:00.317) 0:00:11.764 *******
2026-03-25 04:11:25.567314 | orchestrator | ok: [testbed-node-3]
2026-03-25 04:11:25.567321 | orchestrator | ok: [testbed-node-4]
2026-03-25 04:11:25.567327 | orchestrator | ok: [testbed-node-5]
2026-03-25 04:11:25.567332 | orchestrator | 
2026-03-25 04:11:25.567339 | orchestrator | TASK [Set _mon_hostname fact] **************************************************
2026-03-25 04:11:25.567345 | orchestrator | Wednesday 25 March 2026 04:11:19 +0000 (0:00:00.357) 0:00:12.121 *******
2026-03-25 04:11:25.567351 | orchestrator | ok: [testbed-node-3]
2026-03-25 04:11:25.567357 | orchestrator | 
2026-03-25 04:11:25.567364 | orchestrator | TASK [Get ceph osd tree] *******************************************************
2026-03-25 04:11:25.567370 | orchestrator | Wednesday 25 March 2026 04:11:20 +0000 (0:00:00.791) 0:00:12.913 *******
2026-03-25 04:11:25.567376 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-03-25 04:11:25.567383 | orchestrator | 
2026-03-25 04:11:25.567389 | orchestrator | TASK [Parse osd tree from JSON] ************************************************
2026-03-25 04:11:25.567396 | orchestrator | Wednesday 25 March 2026 04:11:21 +0000 (0:00:01.636) 0:00:14.550 *******
2026-03-25 04:11:25.567402 | orchestrator | ok: [testbed-node-3]
2026-03-25 04:11:25.567409 | orchestrator | 
2026-03-25 04:11:25.567415 | orchestrator | TASK [Get OSDs that are not up or in] ******************************************
2026-03-25 04:11:25.567422 | orchestrator | Wednesday 25 March 2026 04:11:22 +0000 (0:00:00.148) 0:00:14.698 *******
2026-03-25 04:11:25.567429 | orchestrator | ok: [testbed-node-3]
2026-03-25 04:11:25.567435 | orchestrator | 
2026-03-25 04:11:25.567441 | orchestrator | TASK [Fail test if OSDs are not up or in] **************************************
2026-03-25 04:11:25.567447 | orchestrator | Wednesday 25 March 2026 04:11:22 +0000 (0:00:00.376) 0:00:15.075 *******
2026-03-25 04:11:25.567454 | orchestrator | skipping: [testbed-node-3]
2026-03-25 04:11:25.567460 | orchestrator | 
2026-03-25 04:11:25.567466 | orchestrator | TASK [Pass test if OSDs are all up and in] *************************************
2026-03-25 04:11:25.567472 | orchestrator | Wednesday 25 March 2026 04:11:22 +0000 (0:00:00.128) 0:00:15.203 *******
2026-03-25 04:11:25.567479 | orchestrator | ok: [testbed-node-3]
2026-03-25 04:11:25.567485 | orchestrator | 
2026-03-25 04:11:25.567492 | orchestrator | TASK [Prepare test data] *******************************************************
2026-03-25 04:11:25.567499 | orchestrator | Wednesday 25 March 2026 04:11:22 +0000 (0:00:00.156) 0:00:15.359 *******
2026-03-25 04:11:25.567513 | orchestrator | ok: [testbed-node-3]
2026-03-25 04:11:25.567520 | orchestrator | ok: [testbed-node-4]
2026-03-25 04:11:25.567526 | orchestrator | ok: [testbed-node-5]
2026-03-25 04:11:25.567532 | orchestrator | 
2026-03-25 04:11:25.567538 | orchestrator | TASK [List ceph LVM volumes and collect data] **********************************
2026-03-25 04:11:25.567544 | orchestrator | Wednesday 25 March 2026 04:11:23 +0000 (0:00:00.337) 0:00:15.697 *******
2026-03-25 04:11:25.567552 | orchestrator | changed: [testbed-node-3]
2026-03-25 04:11:25.567557 | orchestrator | changed: [testbed-node-4]
2026-03-25 04:11:25.567561 | orchestrator | changed: [testbed-node-5]
2026-03-25 04:11:37.312144 | orchestrator | 
2026-03-25 04:11:37.312308 | orchestrator | TASK [Parse LVM data as JSON] **************************************************
2026-03-25 04:11:37.312328 | orchestrator | Wednesday 25 March 2026 04:11:25 +0000 (0:00:02.508) 0:00:18.206 *******
2026-03-25 04:11:37.312336 | orchestrator | ok: [testbed-node-3]
2026-03-25 04:11:37.312343 | orchestrator | ok: [testbed-node-4]
2026-03-25 04:11:37.312349 | orchestrator | ok: [testbed-node-5]
2026-03-25 04:11:37.312355 | orchestrator | 
2026-03-25 04:11:37.312360 | orchestrator | TASK [Get unencrypted and encrypted OSDs] **************************************
2026-03-25 04:11:37.312366 | orchestrator | Wednesday 25 March 2026 04:11:25 +0000 (0:00:00.361) 0:00:18.567 *******
2026-03-25 04:11:37.312372 | orchestrator | ok: [testbed-node-3]
2026-03-25 04:11:37.312377 | orchestrator | ok: [testbed-node-4]
2026-03-25 04:11:37.312382 | orchestrator | ok: [testbed-node-5]
2026-03-25 04:11:37.312388 | orchestrator | 
2026-03-25 04:11:37.312394 | orchestrator | TASK [Fail if count of encrypted OSDs does not match] **************************
2026-03-25 04:11:37.312399 | orchestrator | Wednesday 25 March 2026 04:11:26 +0000 (0:00:00.555) 0:00:19.123 *******
2026-03-25 04:11:37.312405 | orchestrator | skipping: [testbed-node-3]
2026-03-25 04:11:37.312424 | orchestrator | skipping: [testbed-node-4]
2026-03-25 04:11:37.312429 | orchestrator | skipping: [testbed-node-5]
2026-03-25 04:11:37.312435 | orchestrator | 
2026-03-25 04:11:37.312440 | orchestrator | TASK [Pass if count of encrypted OSDs equals count of OSDs] ********************
2026-03-25 04:11:37.312446 | orchestrator | Wednesday 25 March 2026 04:11:26 +0000 (0:00:00.365) 0:00:19.488 *******
2026-03-25 04:11:37.312451 | orchestrator | ok: [testbed-node-3]
2026-03-25 04:11:37.312457 | orchestrator | ok: [testbed-node-4]
2026-03-25 04:11:37.312462 | orchestrator | ok: [testbed-node-5]
2026-03-25 04:11:37.312467 | orchestrator | 
2026-03-25 04:11:37.312473 | orchestrator | TASK [Fail if count of unencrypted OSDs does not match] ************************
2026-03-25 04:11:37.312479 | orchestrator | Wednesday 25 March 2026 04:11:27 +0000 (0:00:00.627) 0:00:20.115 *******
2026-03-25 04:11:37.312484 | orchestrator | skipping: [testbed-node-3]
2026-03-25 04:11:37.312489 | orchestrator | skipping: [testbed-node-4]
2026-03-25 04:11:37.312512 | orchestrator | skipping: [testbed-node-5]
2026-03-25 04:11:37.312525 | orchestrator | 
2026-03-25 04:11:37.312537 | orchestrator | TASK [Pass if count of unencrypted OSDs equals count of OSDs] ******************
2026-03-25 04:11:37.312546 | orchestrator | Wednesday 25 March 2026 04:11:27 +0000 (0:00:00.341) 0:00:20.457 *******
2026-03-25 04:11:37.312554 | orchestrator | skipping: [testbed-node-3]
2026-03-25 04:11:37.312563 | orchestrator | skipping: [testbed-node-4]
2026-03-25 04:11:37.312583 | orchestrator | skipping: [testbed-node-5]
2026-03-25 04:11:37.312599 | orchestrator | 
2026-03-25 04:11:37.312609 | orchestrator | TASK [Prepare test data] *******************************************************
2026-03-25 04:11:37.312618 | orchestrator | Wednesday 25 March 2026 04:11:28 +0000 (0:00:00.357) 0:00:20.814 *******
2026-03-25 04:11:37.312627 | orchestrator | ok: [testbed-node-3]
2026-03-25 04:11:37.312636 | orchestrator | ok: [testbed-node-4]
2026-03-25 04:11:37.312645 | orchestrator | ok: [testbed-node-5]
2026-03-25 04:11:37.312655 | orchestrator | 
2026-03-25 04:11:37.312665 | orchestrator | TASK [Get CRUSH node data of each OSD host and root node childs] ***************
2026-03-25 04:11:37.312674 | orchestrator | Wednesday 25 March 2026 04:11:28 +0000 (0:00:00.569) 0:00:21.383 *******
2026-03-25 04:11:37.312684 | orchestrator | ok: [testbed-node-3]
2026-03-25 04:11:37.312693 | orchestrator | ok: [testbed-node-4]
2026-03-25 04:11:37.312723 | orchestrator | ok: [testbed-node-5]
2026-03-25 04:11:37.312732 | orchestrator | 
2026-03-25 04:11:37.312741 | orchestrator | TASK [Calculate sub test expression results] ***********************************
2026-03-25 04:11:37.312749 | orchestrator | Wednesday 25 March 2026 04:11:29 +0000 (0:00:00.958) 0:00:22.342 *******
2026-03-25 04:11:37.312758 | orchestrator | ok: [testbed-node-3]
2026-03-25 04:11:37.312767 | orchestrator | ok: [testbed-node-4]
2026-03-25 04:11:37.312775 | orchestrator | ok: [testbed-node-5]
2026-03-25 04:11:37.312784 | orchestrator | 
2026-03-25 04:11:37.312794 | orchestrator | TASK [Fail test if any sub test failed] ****************************************
2026-03-25 04:11:37.312803 | orchestrator | Wednesday 25 March 2026 04:11:30 +0000 (0:00:00.352) 0:00:22.695 *******
2026-03-25 04:11:37.312812 | orchestrator | skipping: [testbed-node-3]
2026-03-25 04:11:37.312822 | orchestrator | skipping: [testbed-node-4]
2026-03-25 04:11:37.312830 | orchestrator | skipping: [testbed-node-5]
2026-03-25 04:11:37.312838 | orchestrator | 
2026-03-25 04:11:37.312847 | orchestrator | TASK [Pass test if no sub test failed] *****************************************
2026-03-25 04:11:37.312856 | orchestrator | Wednesday 25 March 2026 04:11:30 +0000 (0:00:00.359) 0:00:23.054 *******
2026-03-25 04:11:37.312866 | orchestrator | ok: [testbed-node-3]
2026-03-25 04:11:37.312876 | orchestrator | ok: [testbed-node-4]
2026-03-25 04:11:37.312885 | orchestrator | ok: [testbed-node-5]
2026-03-25 04:11:37.312894 | orchestrator | 
2026-03-25 04:11:37.312903 | orchestrator | TASK [Set validation result to passed if no test failed] ***********************
2026-03-25 04:11:37.312913 | orchestrator | Wednesday 25 March 2026 04:11:30 +0000 (0:00:00.575) 0:00:23.630 *******
2026-03-25 04:11:37.312922 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-03-25 04:11:37.312931 | orchestrator | 
2026-03-25 04:11:37.312939 | orchestrator | TASK [Set validation result to failed if a test failed] ************************
2026-03-25 04:11:37.312947 | orchestrator | Wednesday 25 March 2026 04:11:31 +0000 (0:00:00.307) 0:00:23.937 *******
2026-03-25 04:11:37.312956 | orchestrator | skipping: [testbed-node-3]
2026-03-25 04:11:37.312964 | orchestrator | 
2026-03-25 04:11:37.312972 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-03-25 04:11:37.312981 | orchestrator | Wednesday 25 March 2026 04:11:31 +0000 (0:00:00.285) 0:00:24.223 *******
2026-03-25 04:11:37.312990 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-03-25 04:11:37.313000 | orchestrator | 
2026-03-25 04:11:37.313008 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-03-25 04:11:37.313017 | orchestrator | Wednesday 25 March 2026 04:11:33 +0000 (0:00:02.025) 0:00:26.249 *******
2026-03-25 04:11:37.313025 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-03-25 04:11:37.313033 | orchestrator | 
2026-03-25 04:11:37.313042 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-03-25 04:11:37.313051 | orchestrator | Wednesday 25 March 2026 04:11:33 +0000 (0:00:00.271) 0:00:26.520 *******
2026-03-25 04:11:37.313059 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-03-25 04:11:37.313068 | orchestrator | 
2026-03-25 04:11:37.313096 | orchestrator | TASK [Flush handlers] **********************************************************
2026-03-25 04:11:37.313107 | orchestrator | Wednesday 25 March 2026 04:11:34 +0000 (0:00:00.296) 0:00:26.816 *******
2026-03-25 04:11:37.313116 | orchestrator | 
2026-03-25 04:11:37.313125 | orchestrator | TASK [Flush handlers] **********************************************************
2026-03-25 04:11:37.313134 | orchestrator | Wednesday 25 March 2026 04:11:34 +0000 (0:00:00.078) 0:00:26.895 *******
2026-03-25 04:11:37.313143 | orchestrator | 
2026-03-25 04:11:37.313152 | orchestrator | TASK [Flush handlers] **********************************************************
2026-03-25 04:11:37.313161 | orchestrator | Wednesday 25 March 2026 04:11:34 +0000 (0:00:00.093) 0:00:26.988 *******
2026-03-25 04:11:37.313169 | orchestrator | 
2026-03-25 04:11:37.313179 | orchestrator | RUNNING HANDLER [Write report file] ********************************************
2026-03-25 04:11:37.313188 | orchestrator | Wednesday 25 March 2026 04:11:34 +0000 (0:00:00.097) 0:00:27.086 *******
2026-03-25 04:11:37.313209 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-03-25 04:11:37.313239 | orchestrator | 
2026-03-25 04:11:37.313249 | orchestrator | TASK [Print report file information] *******************************************
2026-03-25 04:11:37.313258 | orchestrator | Wednesday 25 March 2026 04:11:36 +0000 (0:00:01.706) 0:00:28.792 *******
2026-03-25 04:11:37.313267 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => {
2026-03-25 04:11:37.313276 | orchestrator |  "msg": [
2026-03-25 04:11:37.313295 | orchestrator |  "Validator run completed.",
2026-03-25 04:11:37.313304 | orchestrator |  "You can find the report file here:",
2026-03-25 04:11:37.313314 | orchestrator |  "/opt/reports/validator/ceph-osds-validator-2026-03-25T04:11:08+00:00-report.json",
2026-03-25 04:11:37.313323 | orchestrator |  "on the following host:",
2026-03-25 04:11:37.313328 | orchestrator |  "testbed-manager"
2026-03-25 04:11:37.313334 | orchestrator |  ]
2026-03-25 04:11:37.313340 | orchestrator | }
2026-03-25 04:11:37.313346 | orchestrator | 
2026-03-25 04:11:37.313351 | orchestrator | PLAY RECAP *********************************************************************
2026-03-25 04:11:37.313357 | orchestrator | testbed-node-3 : ok=35  changed=4  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-03-25 04:11:37.313364 | orchestrator | testbed-node-4 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-03-25 04:11:37.313370 | orchestrator | testbed-node-5 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-03-25 04:11:37.313375 | orchestrator | 
2026-03-25 04:11:37.313381 | orchestrator | 
2026-03-25 04:11:37.313386 | orchestrator | TASKS RECAP ********************************************************************
2026-03-25 04:11:37.313391 | orchestrator | Wednesday 25 March 2026 04:11:36 +0000 (0:00:00.729) 0:00:29.522 *******
2026-03-25 04:11:37.313397 | orchestrator | ===============================================================================
2026-03-25 04:11:37.313402 | orchestrator | List ceph LVM volumes and collect data ---------------------------------- 2.51s
2026-03-25 04:11:37.313407 | orchestrator | Aggregate test results step one ----------------------------------------- 2.03s
2026-03-25 04:11:37.313413 | orchestrator | Write report file ------------------------------------------------------- 1.71s
2026-03-25 04:11:37.313418 | orchestrator | Get ceph osd tree ------------------------------------------------------- 1.64s
2026-03-25 04:11:37.313423 | orchestrator | Get timestamp for report file ------------------------------------------- 0.98s
2026-03-25 04:11:37.313428 | orchestrator | Get CRUSH node data of each OSD host and root node childs --------------- 0.96s
2026-03-25 04:11:37.313434 | orchestrator | Calculate total number of OSDs in cluster ------------------------------- 0.94s
2026-03-25 04:11:37.313439 | orchestrator | Create report output directory ------------------------------------------ 0.82s
2026-03-25 04:11:37.313444 | orchestrator | Aggregate test results step one ----------------------------------------- 0.80s
2026-03-25 04:11:37.313450 | orchestrator | Set _mon_hostname fact -------------------------------------------------- 0.79s
2026-03-25 04:11:37.313455 | orchestrator | Print report file information ------------------------------------------- 0.73s
2026-03-25 04:11:37.313460 | orchestrator | Pass if count of encrypted OSDs equals count of OSDs -------------------- 0.63s
2026-03-25 04:11:37.313466 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.60s
2026-03-25 04:11:37.313471 | orchestrator | Get list of ceph-osd containers on host --------------------------------- 0.58s
2026-03-25 04:11:37.313476 | orchestrator | Pass test if no sub test failed ----------------------------------------- 0.58s
2026-03-25 04:11:37.313481 | orchestrator | Get count of ceph-osd containers that are not running ------------------- 0.58s
2026-03-25 04:11:37.313487 | orchestrator | Prepare test data ------------------------------------------------------- 0.57s
2026-03-25 04:11:37.313492 
| orchestrator | Set test result to failed when count of containers is wrong ------------- 0.56s 2026-03-25 04:11:37.313503 | orchestrator | Get unencrypted and encrypted OSDs -------------------------------------- 0.56s 2026-03-25 04:11:37.313508 | orchestrator | Get list of ceph-osd containers that are not running -------------------- 0.40s 2026-03-25 04:11:37.695403 | orchestrator | + sh -c /opt/configuration/scripts/check/200-infrastructure.sh 2026-03-25 04:11:37.705811 | orchestrator | + set -e 2026-03-25 04:11:37.705932 | orchestrator | + source /opt/manager-vars.sh 2026-03-25 04:11:37.705942 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-03-25 04:11:37.705947 | orchestrator | ++ NUMBER_OF_NODES=6 2026-03-25 04:11:37.705951 | orchestrator | ++ export CEPH_VERSION=reef 2026-03-25 04:11:37.705955 | orchestrator | ++ CEPH_VERSION=reef 2026-03-25 04:11:37.705960 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-03-25 04:11:37.705965 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-03-25 04:11:37.705969 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-03-25 04:11:37.705974 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-03-25 04:11:37.705978 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-03-25 04:11:37.705982 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-03-25 04:11:37.705986 | orchestrator | ++ export ARA=false 2026-03-25 04:11:37.705990 | orchestrator | ++ ARA=false 2026-03-25 04:11:37.705995 | orchestrator | ++ export DEPLOY_MODE=manager 2026-03-25 04:11:37.705999 | orchestrator | ++ DEPLOY_MODE=manager 2026-03-25 04:11:37.706003 | orchestrator | ++ export TEMPEST=false 2026-03-25 04:11:37.706007 | orchestrator | ++ TEMPEST=false 2026-03-25 04:11:37.706049 | orchestrator | ++ export IS_ZUUL=true 2026-03-25 04:11:37.706057 | orchestrator | ++ IS_ZUUL=true 2026-03-25 04:11:37.706065 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.44 2026-03-25 04:11:37.706305 | orchestrator | ++ 
MANAGER_PUBLIC_IP_ADDRESS=81.163.192.44 2026-03-25 04:11:37.706324 | orchestrator | ++ export EXTERNAL_API=false 2026-03-25 04:11:37.706331 | orchestrator | ++ EXTERNAL_API=false 2026-03-25 04:11:37.706337 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-03-25 04:11:37.706345 | orchestrator | ++ IMAGE_USER=ubuntu 2026-03-25 04:11:37.706352 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-03-25 04:11:37.706359 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-03-25 04:11:37.706365 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-03-25 04:11:37.706372 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-03-25 04:11:37.706391 | orchestrator | + [[ -e /etc/redhat-release ]] 2026-03-25 04:11:37.706397 | orchestrator | + source /etc/os-release 2026-03-25 04:11:37.706404 | orchestrator | ++ PRETTY_NAME='Ubuntu 24.04.4 LTS' 2026-03-25 04:11:37.706410 | orchestrator | ++ NAME=Ubuntu 2026-03-25 04:11:37.706417 | orchestrator | ++ VERSION_ID=24.04 2026-03-25 04:11:37.706424 | orchestrator | ++ VERSION='24.04.4 LTS (Noble Numbat)' 2026-03-25 04:11:37.706431 | orchestrator | ++ VERSION_CODENAME=noble 2026-03-25 04:11:37.706438 | orchestrator | ++ ID=ubuntu 2026-03-25 04:11:37.706444 | orchestrator | ++ ID_LIKE=debian 2026-03-25 04:11:37.706451 | orchestrator | ++ HOME_URL=https://www.ubuntu.com/ 2026-03-25 04:11:37.706457 | orchestrator | ++ SUPPORT_URL=https://help.ubuntu.com/ 2026-03-25 04:11:37.706464 | orchestrator | ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/ 2026-03-25 04:11:37.706470 | orchestrator | ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy 2026-03-25 04:11:37.706479 | orchestrator | ++ UBUNTU_CODENAME=noble 2026-03-25 04:11:37.706485 | orchestrator | ++ LOGO=ubuntu-logo 2026-03-25 04:11:37.706491 | orchestrator | + [[ ubuntu == \u\b\u\n\t\u ]] 2026-03-25 04:11:37.706515 | orchestrator | + packages='libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client' 2026-03-25 
04:11:37.706525 | orchestrator | + dpkg -s libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2026-03-25 04:11:37.724567 | orchestrator | + sudo apt-get install -y libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2026-03-25 04:12:00.442991 | orchestrator | 2026-03-25 04:12:00.443088 | orchestrator | # Status of Elasticsearch 2026-03-25 04:12:00.443098 | orchestrator | 2026-03-25 04:12:00.443102 | orchestrator | + pushd /opt/configuration/contrib 2026-03-25 04:12:00.443108 | orchestrator | + echo 2026-03-25 04:12:00.443113 | orchestrator | + echo '# Status of Elasticsearch' 2026-03-25 04:12:00.443117 | orchestrator | + echo 2026-03-25 04:12:00.443121 | orchestrator | + bash nagios-plugins/check_elasticsearch -H api-int.testbed.osism.xyz -s 2026-03-25 04:12:00.607911 | orchestrator | OK - elasticsearch (kolla_logging) is running. status: green; timed_out: false; number_of_nodes: 3; number_of_data_nodes: 3; active_primary_shards: 9; active_shards: 22; relocating_shards: 0; initializing_shards: 0; delayed_unassigned_shards: 0; unassigned_shards: 0 | 'active_primary'=9 'active'=22 'relocating'=0 'init'=0 'delay_unass'=0 'unass'=0 2026-03-25 04:12:00.608018 | orchestrator | 2026-03-25 04:12:00.608026 | orchestrator | # Status of MariaDB 2026-03-25 04:12:00.608032 | orchestrator | 2026-03-25 04:12:00.608036 | orchestrator | + echo 2026-03-25 04:12:00.608040 | orchestrator | + echo '# Status of MariaDB' 2026-03-25 04:12:00.608044 | orchestrator | + echo 2026-03-25 04:12:00.608722 | orchestrator | ++ semver 9.5.0 10.0.0-0 2026-03-25 04:12:00.654924 | orchestrator | + [[ -1 -ge 0 ]] 2026-03-25 04:12:00.655012 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]] 2026-03-25 04:12:00.655021 | orchestrator | + MARIADB_USER=root_shard_0 2026-03-25 04:12:00.655030 | orchestrator | + bash nagios-plugins/check_galera_cluster -u root_shard_0 -p password -H api-int.testbed.osism.xyz -c 1 2026-03-25 04:12:00.706232 
| orchestrator | Reading package lists... 2026-03-25 04:12:01.096522 | orchestrator | Building dependency tree... 2026-03-25 04:12:01.097138 | orchestrator | Reading state information... 2026-03-25 04:12:01.672068 | orchestrator | bc is already the newest version (1.07.1-3ubuntu4). 2026-03-25 04:12:01.672158 | orchestrator | bc set to manually installed. 2026-03-25 04:12:01.672183 | orchestrator | 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 2026-03-25 04:12:02.378733 | orchestrator | OK: number of NODES = 3 (wsrep_cluster_size) 2026-03-25 04:12:02.379495 | orchestrator | 2026-03-25 04:12:02.379526 | orchestrator | # Status of Prometheus 2026-03-25 04:12:02.379534 | orchestrator | 2026-03-25 04:12:02.379541 | orchestrator | + echo 2026-03-25 04:12:02.379549 | orchestrator | + echo '# Status of Prometheus' 2026-03-25 04:12:02.379556 | orchestrator | + echo 2026-03-25 04:12:02.379562 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/healthy 2026-03-25 04:12:02.445364 | orchestrator | Unauthorized 2026-03-25 04:12:02.448336 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/ready 2026-03-25 04:12:02.509954 | orchestrator | Unauthorized 2026-03-25 04:12:02.514642 | orchestrator | 2026-03-25 04:12:02.514726 | orchestrator | # Status of RabbitMQ 2026-03-25 04:12:02.514738 | orchestrator | 2026-03-25 04:12:02.514747 | orchestrator | + echo 2026-03-25 04:12:02.514756 | orchestrator | + echo '# Status of RabbitMQ' 2026-03-25 04:12:02.514765 | orchestrator | + echo 2026-03-25 04:12:02.514773 | orchestrator | ++ semver 9.5.0 10.0.0-0 2026-03-25 04:12:02.567323 | orchestrator | + [[ -1 -ge 0 ]] 2026-03-25 04:12:02.567488 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]] 2026-03-25 04:12:02.567507 | orchestrator | + perl nagios-plugins/check_rabbitmq_cluster --ssl 1 -H api-int.testbed.osism.xyz -u openstack -p password 2026-03-25 04:12:03.070368 | orchestrator | RABBITMQ_CLUSTER OK - nb_running_node OK (3) nb_running_disc_node 
OK (3) nb_running_ram_node OK (0) 2026-03-25 04:12:03.081051 | orchestrator | 2026-03-25 04:12:03.081140 | orchestrator | # Status of Redis 2026-03-25 04:12:03.081151 | orchestrator | 2026-03-25 04:12:03.081159 | orchestrator | + echo 2026-03-25 04:12:03.081166 | orchestrator | + echo '# Status of Redis' 2026-03-25 04:12:03.081175 | orchestrator | + echo 2026-03-25 04:12:03.081186 | orchestrator | + /usr/lib/nagios/plugins/check_tcp -H 192.168.16.10 -p 6379 -A -E -s 'AUTH QHNA1SZRlOKzLADhUd5ZDgpHfQe6dNfr3bwEdY24\r\nPING\r\nINFO replication\r\nQUIT\r\n' -e PONG -e role:master -e slave0:ip=192.168.16.1 -e,port=6379 -j 2026-03-25 04:12:03.087393 | orchestrator | TCP OK - 0.002 second response time on 192.168.16.10 port 6379|time=0.002087s;;;0.000000;10.000000 2026-03-25 04:12:03.088428 | orchestrator | 2026-03-25 04:12:03.088471 | orchestrator | # Create backup of MariaDB database 2026-03-25 04:12:03.088481 | orchestrator | 2026-03-25 04:12:03.088487 | orchestrator | + popd 2026-03-25 04:12:03.088494 | orchestrator | + echo 2026-03-25 04:12:03.088501 | orchestrator | + echo '# Create backup of MariaDB database' 2026-03-25 04:12:03.088507 | orchestrator | + echo 2026-03-25 04:12:03.088515 | orchestrator | + osism apply mariadb_backup -e mariadb_backup_type=full 2026-03-25 04:12:05.601034 | orchestrator | 2026-03-25 04:12:05 | INFO  | Task 9c121133-17f0-44f3-8bc0-a5947391bd9a (mariadb_backup) was prepared for execution. 2026-03-25 04:12:05.601122 | orchestrator | 2026-03-25 04:12:05 | INFO  | It takes a moment until task 9c121133-17f0-44f3-8bc0-a5947391bd9a (mariadb_backup) has been started and output is visible here. 
2026-03-25 04:12:36.805451 | orchestrator |
2026-03-25 04:12:36.805562 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-25 04:12:36.805577 | orchestrator |
2026-03-25 04:12:36.805584 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-25 04:12:36.805592 | orchestrator | Wednesday 25 March 2026 04:12:10 +0000 (0:00:00.203) 0:00:00.203 *******
2026-03-25 04:12:36.805599 | orchestrator | ok: [testbed-node-0]
2026-03-25 04:12:36.805632 | orchestrator | ok: [testbed-node-1]
2026-03-25 04:12:36.805640 | orchestrator | ok: [testbed-node-2]
2026-03-25 04:12:36.805647 | orchestrator |
2026-03-25 04:12:36.805654 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-25 04:12:36.805662 | orchestrator | Wednesday 25 March 2026 04:12:11 +0000 (0:00:00.379) 0:00:00.582 *******
2026-03-25 04:12:36.805670 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True)
2026-03-25 04:12:36.805678 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True)
2026-03-25 04:12:36.805685 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True)
2026-03-25 04:12:36.805692 | orchestrator |
2026-03-25 04:12:36.805699 | orchestrator | PLAY [Apply role mariadb] ******************************************************
2026-03-25 04:12:36.805703 | orchestrator |
2026-03-25 04:12:36.805708 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] ***************************
2026-03-25 04:12:36.805713 | orchestrator | Wednesday 25 March 2026 04:12:11 +0000 (0:00:00.677) 0:00:01.259 *******
2026-03-25 04:12:36.805717 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-03-25 04:12:36.805722 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-03-25 04:12:36.805727 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-03-25 04:12:36.805731 | orchestrator |
2026-03-25 04:12:36.805736 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-03-25 04:12:36.805740 | orchestrator | Wednesday 25 March 2026 04:12:12 +0000 (0:00:00.484) 0:00:01.744 *******
2026-03-25 04:12:36.805756 | orchestrator | included: /ansible/roles/mariadb/tasks/backup.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-25 04:12:36.805762 | orchestrator |
2026-03-25 04:12:36.805767 | orchestrator | TASK [mariadb : Get MariaDB container facts] ***********************************
2026-03-25 04:12:36.805771 | orchestrator | Wednesday 25 March 2026 04:12:12 +0000 (0:00:00.621) 0:00:02.366 *******
2026-03-25 04:12:36.805776 | orchestrator | ok: [testbed-node-0]
2026-03-25 04:12:36.805780 | orchestrator | ok: [testbed-node-2]
2026-03-25 04:12:36.805784 | orchestrator | ok: [testbed-node-1]
2026-03-25 04:12:36.805789 | orchestrator |
2026-03-25 04:12:36.805793 | orchestrator | TASK [mariadb : Taking full database backup via Mariabackup] *******************
2026-03-25 04:12:36.805797 | orchestrator | Wednesday 25 March 2026 04:12:16 +0000 (0:00:03.716) 0:00:06.083 *******
2026-03-25 04:12:36.805802 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart
2026-03-25 04:12:36.805806 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start
2026-03-25 04:12:36.805812 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2026-03-25 04:12:36.805816 | orchestrator | mariadb_bootstrap_restart
2026-03-25 04:12:36.805821 | orchestrator | skipping: [testbed-node-1]
2026-03-25 04:12:36.805825 | orchestrator | skipping: [testbed-node-2]
2026-03-25 04:12:36.805830 | orchestrator | changed: [testbed-node-0]
2026-03-25 04:12:36.805834 | orchestrator |
2026-03-25 04:12:36.805838 | orchestrator | PLAY [Restart mariadb services] ************************************************
2026-03-25 04:12:36.805843 | orchestrator | skipping: no hosts matched
2026-03-25 04:12:36.805847 | orchestrator |
2026-03-25 04:12:36.805851 | orchestrator | PLAY [Start mariadb services] **************************************************
2026-03-25 04:12:36.805856 | orchestrator | skipping: no hosts matched
2026-03-25 04:12:36.805860 | orchestrator |
2026-03-25 04:12:36.805864 | orchestrator | PLAY [Restart bootstrap mariadb service] ***************************************
2026-03-25 04:12:36.805869 | orchestrator | skipping: no hosts matched
2026-03-25 04:12:36.805873 | orchestrator |
2026-03-25 04:12:36.805877 | orchestrator | PLAY [Apply mariadb post-configuration] ****************************************
2026-03-25 04:12:36.805882 | orchestrator |
2026-03-25 04:12:36.805886 | orchestrator | TASK [Include mariadb post-deploy.yml] *****************************************
2026-03-25 04:12:36.805890 | orchestrator | Wednesday 25 March 2026 04:12:35 +0000 (0:00:18.937) 0:00:25.021 *******
2026-03-25 04:12:36.805895 | orchestrator | skipping: [testbed-node-0]
2026-03-25 04:12:36.805905 | orchestrator | skipping: [testbed-node-1]
2026-03-25 04:12:36.805909 | orchestrator | skipping: [testbed-node-2]
2026-03-25 04:12:36.805914 | orchestrator |
2026-03-25 04:12:36.805918 | orchestrator | TASK [Include mariadb post-upgrade.yml] ****************************************
2026-03-25 04:12:36.805922 | orchestrator | Wednesday 25 March 2026 04:12:35 +0000 (0:00:00.341) 0:00:25.362 *******
2026-03-25 04:12:36.805927 | orchestrator | skipping: [testbed-node-0]
2026-03-25 04:12:36.805931 | orchestrator | skipping: [testbed-node-1]
2026-03-25 04:12:36.805935 | orchestrator | skipping: [testbed-node-2]
2026-03-25 04:12:36.805940 | orchestrator |
2026-03-25 04:12:36.805944 | orchestrator | PLAY RECAP *********************************************************************
2026-03-25 04:12:36.805950 | orchestrator | testbed-node-0 : ok=6  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-25 04:12:36.805955 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-03-25 04:12:36.805960 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-03-25 04:12:36.805965 | orchestrator |
2026-03-25 04:12:36.805982 | orchestrator |
2026-03-25 04:12:36.805987 | orchestrator | TASKS RECAP ********************************************************************
2026-03-25 04:12:36.805991 | orchestrator | Wednesday 25 March 2026 04:12:36 +0000 (0:00:00.490) 0:00:25.852 *******
2026-03-25 04:12:36.806002 | orchestrator | ===============================================================================
2026-03-25 04:12:36.806007 | orchestrator | mariadb : Taking full database backup via Mariabackup ------------------ 18.94s
2026-03-25 04:12:36.806065 | orchestrator | mariadb : Get MariaDB container facts ----------------------------------- 3.72s
2026-03-25 04:12:36.806071 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.68s
2026-03-25 04:12:36.806076 | orchestrator | mariadb : include_tasks ------------------------------------------------- 0.62s
2026-03-25 04:12:36.806082 | orchestrator | Include mariadb post-upgrade.yml ---------------------------------------- 0.49s
2026-03-25 04:12:36.806118 | orchestrator | mariadb : Group MariaDB hosts based on shards --------------------------- 0.48s
2026-03-25 04:12:36.806125 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.38s
2026-03-25 04:12:36.806133 | orchestrator | Include mariadb post-deploy.yml ----------------------------------------- 0.34s
2026-03-25 04:12:37.261435 | orchestrator | + sh -c /opt/configuration/scripts/check/300-openstack.sh
2026-03-25 04:12:37.268335 | orchestrator | + set -e
2026-03-25 04:12:37.268405 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-03-25 04:12:37.269204 | orchestrator | ++ export INTERACTIVE=false
2026-03-25 04:12:37.269237 | orchestrator | ++ INTERACTIVE=false
2026-03-25 04:12:37.269242 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-03-25 04:12:37.269247 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-03-25 04:12:37.269251 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2026-03-25 04:12:37.270895 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2026-03-25 04:12:37.275945 | orchestrator |
2026-03-25 04:12:37.275988 | orchestrator | # OpenStack endpoints
2026-03-25 04:12:37.275994 | orchestrator |
2026-03-25 04:12:37.275999 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-03-25 04:12:37.276003 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-03-25 04:12:37.276007 | orchestrator | + export OS_CLOUD=admin
2026-03-25 04:12:37.276012 | orchestrator | + OS_CLOUD=admin
2026-03-25 04:12:37.276016 | orchestrator | + echo
2026-03-25 04:12:37.276020 | orchestrator | + echo '# OpenStack endpoints'
2026-03-25 04:12:37.276024 | orchestrator | + echo
2026-03-25 04:12:37.276027 | orchestrator | + openstack endpoint list
2026-03-25 04:12:40.750232 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+
2026-03-25 04:12:40.750349 | orchestrator | | ID | Region | Service Name | Service Type | Enabled | Interface | URL |
2026-03-25 04:12:40.750385 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+
2026-03-25 04:12:40.750395 | orchestrator | | 06b0398244b94323b63847d99610106b | RegionOne | keystone | identity | True | internal | https://api-int.testbed.osism.xyz:5000 |
2026-03-25 04:12:40.750404 | orchestrator | | 1f40f26094884e53982032af86fcd5c6 | RegionOne | nova | compute | True | internal | https://api-int.testbed.osism.xyz:8774/v2.1 |
2026-03-25 04:12:40.750413 | orchestrator | | 3755e68d76204d10a921c803f2118cbc | RegionOne | magnum | container-infra | True | public | https://api.testbed.osism.xyz:9511/v1 |
2026-03-25 04:12:40.750421 | orchestrator | | 3d321374c140487a8a4f78fac4a11419 | RegionOne | keystone | identity | True | public | https://api.testbed.osism.xyz:5000 |
2026-03-25 04:12:40.750431 | orchestrator | | 429f5d6d5e6f41c0b3170794e1d11ce2 | RegionOne | skyline | panel | True | internal | https://api-int.testbed.osism.xyz:9998 |
2026-03-25 04:12:40.750469 | orchestrator | | 45709081ac134189a1a12b7d218ea0f0 | RegionOne | cinderv3 | volumev3 | True | internal | https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s |
2026-03-25 04:12:40.750478 | orchestrator | | 46849ecb60ae4cf39c0c74ec2961e415 | RegionOne | cinderv3 | volumev3 | True | public | https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s |
2026-03-25 04:12:40.750486 | orchestrator | | 5030f6295626426d831c7c7c5cdbf613 | RegionOne | aodh | alarming | True | internal | https://api-int.testbed.osism.xyz:8042 |
2026-03-25 04:12:40.750495 | orchestrator | | 531edfcbdf014076b2db1f3f534c31d9 | RegionOne | magnum | container-infra | True | internal | https://api-int.testbed.osism.xyz:9511/v1 |
2026-03-25 04:12:40.750504 | orchestrator | | 543316fb822b4dfab2413908bc974353 | RegionOne | manilav2 | sharev2 | True | public | https://api.testbed.osism.xyz:8786/v2 |
2026-03-25 04:12:40.750539 | orchestrator | | 65db91b96f824fca86d504483fed28a4 | RegionOne | neutron | network | True | internal | https://api-int.testbed.osism.xyz:9696 |
2026-03-25 04:12:40.750548 | orchestrator | | 7da75947db8c46c2af3ce67ca163079d | RegionOne | designate | dns | True | public | https://api.testbed.osism.xyz:9001 |
2026-03-25 04:12:40.750557 | orchestrator | | 807b14b89ec8450fa90f62c33c4c8876 | RegionOne | swift | object-store | True | internal | https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s |
2026-03-25 04:12:40.750566 | orchestrator | | 839f5c14c1f44b08979200346de095ed | RegionOne | barbican | key-manager | True | public | https://api.testbed.osism.xyz:9311 |
2026-03-25 04:12:40.750574 | orchestrator | | 87abe729f4824c4e9578487eb280f08a | RegionOne | aodh | alarming | True | public | https://api.testbed.osism.xyz:8042 |
2026-03-25 04:12:40.750583 | orchestrator | | 9523d9b1017643baa9ea2e94f258e4fb | RegionOne | swift | object-store | True | public | https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s |
2026-03-25 04:12:40.750591 | orchestrator | | 97ec4cd1bc9c478eb8eba5aabb7fa9fd | RegionOne | designate | dns | True | internal | https://api-int.testbed.osism.xyz:9001 |
2026-03-25 04:12:40.750600 | orchestrator | | a105f47b099946ab946ee8aeacb190e3 | RegionOne | neutron | network | True | public | https://api.testbed.osism.xyz:9696 |
2026-03-25 04:12:40.750609 | orchestrator | | a19123409be147c5a8d6c58b29868f11 | RegionOne | glance | image | True | internal | https://api-int.testbed.osism.xyz:9292 |
2026-03-25 04:12:40.750617 | orchestrator | | a9e5a36d6f824ee48f0545d985bef990 | RegionOne | barbican | key-manager | True | internal | https://api-int.testbed.osism.xyz:9311 |
2026-03-25 04:12:40.750650 | orchestrator | | ae3a4471eabb42888f5bc053d72a1e5e | RegionOne | placement | placement | True | internal | https://api-int.testbed.osism.xyz:8780 |
2026-03-25 04:12:40.750665 | orchestrator | | bc44149521874b51aeefc9ca23dc6783 | RegionOne | manila | share | True | internal | https://api-int.testbed.osism.xyz:8786/v1/%(tenant_id)s |
2026-03-25 04:12:40.750676 | orchestrator | | c4775a11fd3e443885ea207ad7edfa6e | RegionOne | nova | compute | True | public | https://api.testbed.osism.xyz:8774/v2.1 |
2026-03-25 04:12:40.750687 | orchestrator | | c87539f924b7497682a1d66583926478 | RegionOne | placement | placement | True | public | https://api.testbed.osism.xyz:8780 |
2026-03-25 04:12:40.750697 | orchestrator | | d3ca8786f731401891fc33f6c3f54387 | RegionOne | manilav2 | sharev2 | True | internal | https://api-int.testbed.osism.xyz:8786/v2 |
2026-03-25 04:12:40.750707 | orchestrator | | d4c6c51ef8b549248bf9054afa371d57 | RegionOne | octavia | load-balancer | True | public | https://api.testbed.osism.xyz:9876 |
2026-03-25 04:12:40.750717 | orchestrator | | d961b1f7bfd74b6db6172aa4ee74e53e | RegionOne | octavia | load-balancer | True | internal | https://api-int.testbed.osism.xyz:9876 |
2026-03-25 04:12:40.750727 | orchestrator | | dfb7a1be6ff442e489eff7c592bbc6df | RegionOne | manila | share | True | public | https://api.testbed.osism.xyz:8786/v1/%(tenant_id)s |
2026-03-25 04:12:40.750737 | orchestrator | | fb126112239c4a82809066c0002689c5 | RegionOne | skyline | panel | True | public | https://api.testbed.osism.xyz:9998 |
2026-03-25 04:12:40.750746 | orchestrator | | fb8223d9204448f7a5ab95a75c34fe3f | RegionOne | glance | image | True | public | https://api.testbed.osism.xyz:9292 |
2026-03-25 04:12:40.750756 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+
2026-03-25 04:12:41.083766 | orchestrator |
2026-03-25 04:12:41.083857 | orchestrator | # Cinder
2026-03-25 04:12:41.083867 | orchestrator |
2026-03-25 04:12:41.083872 | orchestrator | + echo
2026-03-25 04:12:41.083878 | orchestrator | + echo '# Cinder'
2026-03-25 04:12:41.083883 | orchestrator | + echo
2026-03-25 04:12:41.083888 | orchestrator | + openstack volume service list
2026-03-25 04:12:44.044571 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+
2026-03-25 04:12:44.044682 | orchestrator | | Binary | Host | Zone | Status | State | Updated At |
2026-03-25 04:12:44.044694 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+
2026-03-25 04:12:44.044700 | orchestrator | | cinder-scheduler | testbed-node-0 | internal | enabled | up | 2026-03-25T04:12:41.000000 |
2026-03-25 04:12:44.044706 | orchestrator | | cinder-scheduler | testbed-node-1 | internal | enabled | up | 2026-03-25T04:12:42.000000 |
2026-03-25 04:12:44.044713 | orchestrator | | cinder-scheduler | testbed-node-2 | internal | enabled | up | 2026-03-25T04:12:41.000000 |
2026-03-25 04:12:44.044719 | orchestrator | | cinder-volume | testbed-node-0@rbd-volumes | nova | enabled | up | 2026-03-25T04:12:41.000000 |
2026-03-25 04:12:44.044725 | orchestrator | | cinder-volume | testbed-node-2@rbd-volumes | nova | enabled | up | 2026-03-25T04:12:37.000000 |
2026-03-25 04:12:44.044732 | orchestrator | | cinder-volume | testbed-node-1@rbd-volumes | nova | enabled | up | 2026-03-25T04:12:39.000000 |
2026-03-25 04:12:44.044739 | orchestrator | | cinder-backup | testbed-node-0 | nova | enabled | up | 2026-03-25T04:12:41.000000 |
2026-03-25 04:12:44.044745 | orchestrator | | cinder-backup | testbed-node-2 | nova | enabled | up | 2026-03-25T04:12:43.000000 |
2026-03-25 04:12:44.044777 | orchestrator | | cinder-backup | testbed-node-1 | nova | enabled | up | 2026-03-25T04:12:43.000000 |
2026-03-25 04:12:44.044784 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+
2026-03-25 04:12:44.399314 | orchestrator |
2026-03-25 04:12:44.399403 | orchestrator | # Neutron
2026-03-25 04:12:44.399415 | orchestrator |
2026-03-25 04:12:44.399420 | orchestrator | + echo
2026-03-25 04:12:44.399424 | orchestrator | + echo '# Neutron'
2026-03-25 04:12:44.399429 | orchestrator | + echo
2026-03-25 04:12:44.399433 | orchestrator | + openstack network agent list
2026-03-25 04:12:47.285258 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+
2026-03-25 04:12:47.286052 | orchestrator | | ID | Agent Type | Host | Availability Zone | Alive | State | Binary |
2026-03-25 04:12:47.286090 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+
2026-03-25 04:12:47.286098 | orchestrator | | testbed-node-4 | OVN Controller agent | testbed-node-4 | | :-) | UP | ovn-controller |
2026-03-25 04:12:47.286104 | orchestrator | | testbed-node-1 | OVN Controller Gateway agent | testbed-node-1 | nova | :-) | UP | ovn-controller |
2026-03-25 04:12:47.286128 | orchestrator | | testbed-node-0 | OVN Controller Gateway agent | testbed-node-0 | nova | :-) | UP | ovn-controller |
2026-03-25 04:12:47.286134 | orchestrator | | testbed-node-3 | OVN Controller agent | testbed-node-3 | | :-) | UP | ovn-controller |
2026-03-25 04:12:47.286141 | orchestrator | | testbed-node-2 | OVN Controller Gateway agent | testbed-node-2 | nova | :-) | UP | ovn-controller |
2026-03-25 04:12:47.286147 | orchestrator | | testbed-node-5 | OVN Controller agent | testbed-node-5 | | :-) | UP | ovn-controller |
2026-03-25 04:12:47.286154 | orchestrator | | 36b9d21c-9928-5c0a-9b27-73ac7a3e770c | OVN Metadata agent | testbed-node-5 | | :-) | UP | neutron-ovn-metadata-agent |
2026-03-25 04:12:47.286209 | orchestrator | | e645415a-98f5-5758-8cd1-c47af282b5c0 | OVN Metadata agent | testbed-node-3 | | :-) | UP | neutron-ovn-metadata-agent |
2026-03-25 04:12:47.286216 | orchestrator | | 4939696e-6092-5a33-bb73-b850064684df | OVN Metadata agent | testbed-node-4 | | :-) | UP | neutron-ovn-metadata-agent |
2026-03-25 04:12:47.286222 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+
2026-03-25 04:12:47.628759 | orchestrator | + openstack network service provider list
2026-03-25 04:12:50.886056 | orchestrator | +---------------+------+---------+
2026-03-25 04:12:50.886194 | orchestrator | | Service Type | Name | Default |
2026-03-25 04:12:50.886202 | orchestrator | +---------------+------+---------+
2026-03-25 04:12:50.886207 | orchestrator | | L3_ROUTER_NAT | ovn | True |
2026-03-25 04:12:50.886211 | orchestrator | +---------------+------+---------+
2026-03-25 04:12:51.246259 | orchestrator |
2026-03-25 04:12:51.246334 | orchestrator | # Nova
2026-03-25 04:12:51.246342 | orchestrator |
2026-03-25 04:12:51.246348 | orchestrator | + echo
2026-03-25 04:12:51.246354 | orchestrator | + echo '# Nova'
2026-03-25 04:12:51.246360 | orchestrator | + echo
2026-03-25 04:12:51.246365 | orchestrator | + openstack compute service list
2026-03-25 04:12:54.168580 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+
2026-03-25 04:12:54.168671 | orchestrator | | ID | Binary | Host | Zone | Status | State | Updated At |
2026-03-25 04:12:54.168679 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+
2026-03-25 04:12:54.168703 | orchestrator | | d48597e3-f026-4ec9-871a-5083af6d3bb0 | nova-scheduler | testbed-node-0 | internal | enabled | up | 2026-03-25T04:12:51.000000 |
2026-03-25 04:12:54.168709 | orchestrator | | 56e3013a-4d98-417f-a181-f3ff43b23d4f | nova-scheduler | testbed-node-2 | internal | enabled | up | 2026-03-25T04:12:44.000000 |
2026-03-25 04:12:54.168713 | orchestrator | | d5b90069-8fe4-479e-9cc6-d98fef6cdf7a | nova-scheduler | testbed-node-1 | internal | enabled | up | 2026-03-25T04:12:46.000000 |
2026-03-25 04:12:54.168718 | orchestrator | | 294054ae-1e65-43c0-8a8f-428f0250a594 | nova-conductor | testbed-node-0 | internal | enabled | up | 2026-03-25T04:12:43.000000 |
2026-03-25 04:12:54.168723 | orchestrator | | 0c4ed97e-fb3a-4ade-9d0e-dbc29f6376ae | nova-conductor | testbed-node-2 | internal | enabled | up | 2026-03-25T04:12:45.000000 |
2026-03-25 04:12:54.168728 | orchestrator | | 7c8b3b85-01ec-4734-b47c-3f1456725cb3 | nova-conductor | testbed-node-1 | internal | enabled | up | 2026-03-25T04:12:46.000000 |
2026-03-25 04:12:54.168732 | orchestrator | | fdddf76a-a8af-4b13-983d-fbefcf072eb8 | nova-compute | testbed-node-3 | nova | enabled | up | 2026-03-25T04:12:53.000000 |
2026-03-25 04:12:54.168737 | orchestrator | | 8177567b-eadf-4818-ba32-865d9bd338ee | nova-compute | testbed-node-4 | nova | enabled | up | 2026-03-25T04:12:44.000000 |
2026-03-25 04:12:54.168742 | orchestrator | | 5c8966a7-259c-4585-8ad9-8f3ecaac4d65 | nova-compute | testbed-node-5 | nova | enabled | up | 2026-03-25T04:12:45.000000 |
2026-03-25 04:12:54.168746 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+
2026-03-25 04:12:54.522322 | orchestrator | + openstack hypervisor list
2026-03-25 04:12:57.458322 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+
2026-03-25 04:12:57.458424 | orchestrator | | ID | Hypervisor Hostname | Hypervisor Type | Host IP | State |
2026-03-25 04:12:57.458433 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+
2026-03-25 04:12:57.458440 | orchestrator | | e3d9a8cf-9d12-4312-9f3d-74d0c9fa8109 | testbed-node-3 | QEMU | 192.168.16.13 | up |
2026-03-25 04:12:57.458446 | orchestrator | | 7291f68c-9114-4a14-8467-23bb22fff0dd | testbed-node-4 | QEMU | 192.168.16.14 | up |
2026-03-25 04:12:57.458451 | orchestrator | | e15bc988-025e-437d-b3f8-b430148e5ed9 | testbed-node-5 | QEMU | 192.168.16.15 | up |
2026-03-25 04:12:57.458457 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+
2026-03-25 04:12:57.846562 | orchestrator |
2026-03-25 04:12:57.846676 | orchestrator | # Run OpenStack test play
2026-03-25 04:12:57.846699 | orchestrator |
2026-03-25 04:12:57.846723 | orchestrator | + echo
2026-03-25 04:12:57.846741 | orchestrator | + echo '# Run OpenStack test play'
2026-03-25 04:12:57.846758 | orchestrator | + echo
2026-03-25 04:12:57.846768 | orchestrator | + osism apply --environment openstack test
2026-03-25 04:13:00.153235 | orchestrator | 2026-03-25 04:13:00 | INFO  | Trying to run play test in environment openstack
2026-03-25 04:13:10.278254 | orchestrator | 2026-03-25 04:13:10 | INFO  | Task 54c4bdbe-bcf1-488d-bbe4-02da32686216 (test) was prepared for execution.
2026-03-25 04:13:10.278338 | orchestrator | 2026-03-25 04:13:10 | INFO  | It takes a moment until task 54c4bdbe-bcf1-488d-bbe4-02da32686216 (test) has been started and output is visible here.
2026-03-25 04:15:59.154488 | orchestrator |
2026-03-25 04:15:59.154601 | orchestrator | PLAY [Create test project] *****************************************************
2026-03-25 04:15:59.154613 | orchestrator |
2026-03-25 04:15:59.154620 | orchestrator | TASK [Create test domain] ******************************************************
2026-03-25 04:15:59.154628 | orchestrator | Wednesday 25 March 2026 04:13:15 +0000 (0:00:00.091) 0:00:00.091 *******
2026-03-25 04:15:59.154636 | orchestrator | changed: [localhost]
2026-03-25 04:15:59.154644 | orchestrator |
2026-03-25 04:15:59.154651 | orchestrator | TASK [Create test-admin user] **************************************************
2026-03-25 04:15:59.154680 | orchestrator | Wednesday 25 March 2026 04:13:19 +0000 (0:00:04.133) 0:00:04.225 *******
2026-03-25 04:15:59.154686 | orchestrator | changed: [localhost]
2026-03-25 04:15:59.154693 | orchestrator |
2026-03-25 04:15:59.154700 | orchestrator | TASK [Add manager role to user test-admin] *************************************
2026-03-25 04:15:59.154707 | orchestrator | Wednesday 25 March 2026 04:13:24 +0000 (0:00:07.648) 0:00:08.867 *******
2026-03-25 04:15:59.154714 | orchestrator | changed: [localhost]
2026-03-25 04:15:59.154721 | orchestrator |
2026-03-25 04:15:59.154727 | orchestrator | TASK [Create test project] *****************************************************
2026-03-25 04:15:59.154731 | orchestrator | Wednesday 25 March 2026 04:13:31 +0000 (0:00:04.500) 0:00:16.515 *******
2026-03-25 04:15:59.154734 | orchestrator | changed: [localhost]
2026-03-25 04:15:59.154738 | orchestrator |
2026-03-25 04:15:59.154742 | orchestrator | TASK [Create test user] ********************************************************
2026-03-25 04:15:59.154746 | orchestrator | Wednesday 25 March 2026 04:13:36 +0000 (0:00:04.691) 0:00:21.016 *******
2026-03-25 04:15:59.154750 | orchestrator | changed: [localhost]
2026-03-25 04:15:59.154754 | orchestrator |
2026-03-25 04:15:59.154758 | orchestrator | TASK [Add member roles to user test] *******************************************
2026-03-25 04:15:59.154761 | orchestrator | Wednesday 25 March 2026 04:13:40 +0000 (0:00:12.992) 0:00:25.707 *******
2026-03-25 04:15:59.154766 | orchestrator | changed: [localhost] => (item=load-balancer_member)
2026-03-25 04:15:59.154771 | orchestrator | changed: [localhost] => (item=member)
2026-03-25 04:15:59.154775 | orchestrator | changed: [localhost] => (item=creator)
2026-03-25 04:15:59.154779 | orchestrator |
2026-03-25 04:15:59.154783 | orchestrator | TASK [Create test server group] ************************************************
2026-03-25 04:15:59.154787 | orchestrator | Wednesday 25 March 2026 04:13:53 +0000 (0:00:12.992) 0:00:38.700 *******
2026-03-25 04:15:59.154790 | orchestrator | changed: [localhost]
2026-03-25 04:15:59.154794 | orchestrator |
2026-03-25 04:15:59.154798 | orchestrator | TASK [Create ssh security group] ***********************************************
2026-03-25 04:15:59.154801 | orchestrator | Wednesday 25 March 2026 04:13:58 +0000 (0:00:04.897) 0:00:43.598 *******
2026-03-25 04:15:59.154805 | orchestrator | changed: [localhost]
2026-03-25 04:15:59.154809 | orchestrator |
2026-03-25 04:15:59.154812 | orchestrator | TASK [Add rule to ssh security group] ******************************************
2026-03-25 04:15:59.154816 | orchestrator | Wednesday 25 March 2026 04:14:03 +0000 (0:00:05.216) 0:00:48.814 *******
2026-03-25 04:15:59.154820 | orchestrator | changed: [localhost]
2026-03-25 04:15:59.154824 | orchestrator |
2026-03-25 04:15:59.154827 | orchestrator | TASK [Create icmp security group] **********************************************
2026-03-25 04:15:59.154831 | orchestrator | Wednesday 25 March 2026 04:14:08 +0000 (0:00:04.571) 0:00:53.385 *******
2026-03-25 04:15:59.154835 | orchestrator | changed: [localhost]
2026-03-25 04:15:59.154838 | orchestrator |
2026-03-25 04:15:59.154842 | orchestrator | TASK [Add rule to icmp security group] *****************************************
2026-03-25 04:15:59.154846 | orchestrator | Wednesday 25 March 2026 04:14:13 +0000 (0:00:04.444) 0:00:57.830 *******
2026-03-25 04:15:59.154894 | orchestrator | changed: [localhost]
2026-03-25 04:15:59.154904 | orchestrator |
2026-03-25 04:15:59.154910 | orchestrator | TASK [Create test keypair] *****************************************************
2026-03-25 04:15:59.154917 | orchestrator | Wednesday 25 March 2026 04:14:17 +0000 (0:00:04.576) 0:01:02.406 *******
2026-03-25 04:15:59.154924 | orchestrator | changed: [localhost]
2026-03-25 04:15:59.154931 | orchestrator |
2026-03-25 04:15:59.154938 | orchestrator | TASK [Create test network] *****************************************************
2026-03-25 04:15:59.154945 | orchestrator | Wednesday 25 March 2026 04:14:21 +0000 (0:00:04.353) 0:01:06.760 *******
2026-03-25 04:15:59.154952 | orchestrator | changed: [localhost]
2026-03-25 04:15:59.154959 | orchestrator |
2026-03-25 04:15:59.154964 | orchestrator | TASK [Create test subnet] ******************************************************
2026-03-25 04:15:59.154968 | orchestrator | Wednesday 25 March 2026 04:14:26 +0000 (0:00:05.000) 0:01:11.761 *******
2026-03-25 04:15:59.154972 | orchestrator | changed: [localhost]
2026-03-25 04:15:59.154982 | orchestrator |
2026-03-25 04:15:59.154986 | orchestrator | TASK [Create test router] ******************************************************
2026-03-25 04:15:59.154990 | orchestrator | Wednesday 25 March 2026 04:14:32 +0000 (0:00:05.923) 0:01:17.684 *******
2026-03-25 04:15:59.154993 | orchestrator | changed: [localhost]
2026-03-25 04:15:59.154997 | orchestrator |
2026-03-25 04:15:59.155001 | orchestrator | PLAY [Manage test instances and volumes] ***************************************
2026-03-25 04:15:59.155004 | orchestrator |
2026-03-25 04:15:59.155008 | orchestrator | TASK [Get test server group] ***************************************************
2026-03-25 04:15:59.155013 | orchestrator | Wednesday 25 March 2026 04:14:43 +0000 (0:00:10.335) 0:01:28.020 *******
2026-03-25 04:15:59.155017 | orchestrator | ok: [localhost]
2026-03-25 04:15:59.155021 | orchestrator |
2026-03-25 04:15:59.155026 | orchestrator | TASK [Detach test volume] ******************************************************
2026-03-25 04:15:59.155030 | orchestrator | Wednesday 25 March 2026 04:14:47 +0000 (0:00:04.351) 0:01:32.372 *******
2026-03-25 04:15:59.155034 | orchestrator | skipping: [localhost]
2026-03-25 04:15:59.155059 | orchestrator |
2026-03-25 04:15:59.155064 | orchestrator | TASK [Delete test volume] ******************************************************
2026-03-25 04:15:59.155079 | orchestrator | Wednesday 25 March 2026 04:14:47 +0000 (0:00:00.047) 0:01:32.419 *******
2026-03-25 04:15:59.155083 | orchestrator | skipping: [localhost]
2026-03-25 04:15:59.155087 | orchestrator |
2026-03-25 04:15:59.155092 | orchestrator | TASK [Delete test instances] ***************************************************
2026-03-25 04:15:59.155096 | orchestrator | Wednesday 25 March 2026 04:14:47 +0000 (0:00:00.051) 0:01:32.470 *******
2026-03-25 04:15:59.155100 | orchestrator | skipping: [localhost] => (item=test-4)
2026-03-25 04:15:59.155105 | orchestrator | skipping: [localhost] => (item=test-3)
2026-03-25 04:15:59.155124 | orchestrator | skipping: [localhost] => (item=test-2)
2026-03-25 04:15:59.155129 | orchestrator | skipping: [localhost] => (item=test-1)
2026-03-25 04:15:59.155133 | orchestrator | skipping: [localhost] => (item=test)
2026-03-25 04:15:59.155137 | orchestrator | skipping: [localhost]
2026-03-25 04:15:59.155141 | orchestrator |
2026-03-25 04:15:59.155145 | orchestrator | TASK [Wait for instance deletion to complete] **********************************
2026-03-25 04:15:59.155150 | orchestrator | Wednesday 25 March 2026 04:14:47 +0000 (0:00:00.197) 0:01:32.668 *******
2026-03-25 04:15:59.155154 | orchestrator | skipping: [localhost]
2026-03-25 04:15:59.155158 | orchestrator |
2026-03-25 04:15:59.155162 | orchestrator | TASK [Create test instances] ***************************************************
2026-03-25 04:15:59.155166 | orchestrator | Wednesday 25 March 2026 04:14:48 +0000 (0:00:00.173) 0:01:32.841 *******
2026-03-25 04:15:59.155170 | orchestrator | changed: [localhost] => (item=test)
2026-03-25 04:15:59.155174 | orchestrator | changed: [localhost] => (item=test-1)
2026-03-25 04:15:59.155179 | orchestrator | changed: [localhost] => (item=test-2)
2026-03-25 04:15:59.155183 | orchestrator | changed: [localhost] => (item=test-3)
2026-03-25 04:15:59.155187 | orchestrator | changed: [localhost] => (item=test-4)
2026-03-25 04:15:59.155191 | orchestrator |
2026-03-25 04:15:59.155195 | orchestrator | TASK [Wait for instance creation to complete] **********************************
2026-03-25 04:15:59.155199 | orchestrator | Wednesday 25 March 2026 04:14:54 +0000 (0:00:06.041) 0:01:38.882 *******
2026-03-25 04:15:59.155204 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (60 retries left).
2026-03-25 04:15:59.155209 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (59 retries left).
2026-03-25 04:15:59.155213 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (58 retries left).
2026-03-25 04:15:59.155218 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (57 retries left).
2026-03-25 04:15:59.155224 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j334608962598.3733', 'results_file': '/ansible/.ansible_async/j334608962598.3733', 'changed': True, 'item': 'test', 'ansible_loop_var': 'item'})
2026-03-25 04:15:59.155236 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j41026856386.3758', 'results_file': '/ansible/.ansible_async/j41026856386.3758', 'changed': True, 'item': 'test-1', 'ansible_loop_var': 'item'})
2026-03-25 04:15:59.155241 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j124140561791.3783', 'results_file': '/ansible/.ansible_async/j124140561791.3783', 'changed': True, 'item': 'test-2', 'ansible_loop_var': 'item'})
2026-03-25 04:15:59.155245 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j173059657874.3808', 'results_file': '/ansible/.ansible_async/j173059657874.3808', 'changed': True, 'item': 'test-3', 'ansible_loop_var': 'item'})
2026-03-25 04:15:59.155250 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j126094457243.3833', 'results_file': '/ansible/.ansible_async/j126094457243.3833', 'changed': True, 'item': 'test-4', 'ansible_loop_var': 'item'})
2026-03-25 04:15:59.155254 | orchestrator |
2026-03-25 04:15:59.155259 | orchestrator | TASK [Add metadata to instances] ***********************************************
2026-03-25 04:15:59.155263 | orchestrator | Wednesday 25 March 2026 04:15:42 +0000 (0:00:48.455) 0:02:27.338 *******
2026-03-25 04:15:59.155267 | orchestrator | changed: [localhost] => (item=test)
2026-03-25 04:15:59.155272 | orchestrator | changed: [localhost] => (item=test-1)
2026-03-25 04:15:59.155276 | orchestrator | changed: [localhost] => (item=test-2)
2026-03-25 04:15:59.155280 | orchestrator | changed: [localhost] => (item=test-3)
2026-03-25 04:15:59.155285 | orchestrator | changed: [localhost] => (item=test-4)
2026-03-25 04:15:59.155289 | orchestrator |
2026-03-25 04:15:59.155294 | orchestrator | TASK [Wait for metadata to be added] *******************************************
2026-03-25 04:15:59.155300 | orchestrator | Wednesday 25 March 2026 04:15:48 +0000 (0:00:06.174) 0:02:33.512 *******
2026-03-25 04:15:59.155306 | orchestrator | FAILED - RETRYING: [localhost]: Wait for metadata to be added (30 retries left).
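Aside (not part of the job output): the `FAILED - RETRYING` lines come from Ansible polling async jobs with `until`/`retries` until each job reports `finished`. The same poll-until-done pattern, reduced to plain Python; `wait_until` and the simulated job states are hypothetical stand-ins, not OSISM code:

```python
# Sketch of the until/retries polling seen in the log above. A real check
# would query the async job's results file; here a fake job finishes on the
# third poll so the loop is observable.
import itertools

def wait_until(check, retries, attempts_log):
    """Poll `check` until it returns a finished result or retries run out."""
    for _ in range(retries):
        result = check()
        attempts_log.append(result)
        if result.get("finished"):
            return result
    raise TimeoutError("job did not finish in time")

# Simulated async job status: not finished twice, then finished.
states = itertools.chain(
    [{"finished": 0}, {"finished": 0}],
    itertools.repeat({"finished": 1}),
)
log = []
result = wait_until(lambda: next(states), retries=60, attempts_log=log)
assert result["finished"] == 1 and len(log) == 3
```

Each unsuccessful poll is what Ansible prints as a `FAILED - RETRYING` line; the loop stops as soon as the job reports finished, which is why only a few retries are consumed out of the 60 allowed.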
2026-03-25 04:15:59.155317 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j46858120979.3937', 'results_file': '/ansible/.ansible_async/j46858120979.3937', 'changed': True, 'item': 'test', 'ansible_loop_var': 'item'})
2026-03-25 04:15:59.155324 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j151295363294.3962', 'results_file': '/ansible/.ansible_async/j151295363294.3962', 'changed': True, 'item': 'test-1', 'ansible_loop_var': 'item'})
2026-03-25 04:15:59.155331 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j25815767680.3987', 'results_file': '/ansible/.ansible_async/j25815767680.3987', 'changed': True, 'item': 'test-2', 'ansible_loop_var': 'item'})
2026-03-25 04:15:59.155342 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j151326172809.4012', 'results_file': '/ansible/.ansible_async/j151326172809.4012', 'changed': True, 'item': 'test-3', 'ansible_loop_var': 'item'})
2026-03-25 04:16:43.049380 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j77333984830.4037', 'results_file': '/ansible/.ansible_async/j77333984830.4037', 'changed': True, 'item': 'test-4', 'ansible_loop_var': 'item'})
2026-03-25 04:16:43.049498 | orchestrator |
2026-03-25 04:16:43.049512 | orchestrator | TASK [Add tag to instances] ****************************************************
2026-03-25 04:16:43.049522 | orchestrator | Wednesday 25 March 2026 04:15:59 +0000 (0:00:10.460) 0:02:43.972 *******
2026-03-25 04:16:43.049529 | orchestrator | changed: [localhost] => (item=test)
2026-03-25 04:16:43.049538 | orchestrator | changed: [localhost] => (item=test-1)
2026-03-25 04:16:43.049544 | orchestrator | changed: [localhost] => (item=test-2)
2026-03-25 04:16:43.049550 | orchestrator | changed: [localhost] => (item=test-3)
2026-03-25 04:16:43.049556 | orchestrator | changed: [localhost] => (item=test-4)
2026-03-25 04:16:43.049582 | orchestrator |
2026-03-25 04:16:43.049589 | orchestrator | TASK [Wait for tags to be added] ***********************************************
2026-03-25 04:16:43.049595 | orchestrator | Wednesday 25 March 2026 04:16:05 +0000 (0:00:06.518) 0:02:50.490 *******
2026-03-25 04:16:43.049602 | orchestrator | FAILED - RETRYING: [localhost]: Wait for tags to be added (30 retries left).
2026-03-25 04:16:43.049611 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j74489891245.4106', 'results_file': '/ansible/.ansible_async/j74489891245.4106', 'changed': True, 'item': 'test', 'ansible_loop_var': 'item'})
2026-03-25 04:16:43.049620 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j753488799061.4138', 'results_file': '/ansible/.ansible_async/j753488799061.4138', 'changed': True, 'item': 'test-1', 'ansible_loop_var': 'item'})
2026-03-25 04:16:43.049626 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j886135464905.4164', 'results_file': '/ansible/.ansible_async/j886135464905.4164', 'changed': True, 'item': 'test-2', 'ansible_loop_var': 'item'})
2026-03-25 04:16:43.049647 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j546751238031.4190', 'results_file': '/ansible/.ansible_async/j546751238031.4190', 'changed': True, 'item': 'test-3', 'ansible_loop_var': 'item'})
2026-03-25 04:16:43.049653 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j46763312295.4216', 'results_file': '/ansible/.ansible_async/j46763312295.4216', 'changed': True, 'item': 'test-4', 'ansible_loop_var': 'item'})
2026-03-25 04:16:43.049659 | orchestrator |
2026-03-25 04:16:43.049666 | orchestrator | TASK [Create test volume] ******************************************************
2026-03-25 04:16:43.049672 | orchestrator | Wednesday 25 March 2026 04:16:16 +0000 (0:00:10.627) 0:03:01.118 *******
2026-03-25 04:16:43.049678 | orchestrator | changed: [localhost]
2026-03-25 04:16:43.049686 | orchestrator |
2026-03-25 04:16:43.049690 | orchestrator | TASK [Attach test volume] ******************************************************
2026-03-25 04:16:43.049694 | orchestrator | Wednesday 25 March 2026 04:16:23 +0000 (0:00:06.941) 0:03:08.060 *******
2026-03-25 04:16:43.049698 | orchestrator | changed: [localhost]
2026-03-25 04:16:43.049701 | orchestrator |
2026-03-25 04:16:43.049705 | orchestrator | TASK [Create floating ip address] **********************************************
2026-03-25 04:16:43.049709 | orchestrator | Wednesday 25 March 2026 04:16:37 +0000 (0:00:13.928) 0:03:21.988 *******
2026-03-25 04:16:43.049713 | orchestrator | ok: [localhost]
2026-03-25 04:16:43.049717 | orchestrator |
2026-03-25 04:16:43.049721 | orchestrator | TASK [Print floating ip address] ***********************************************
2026-03-25 04:16:43.049725 | orchestrator | Wednesday 25 March 2026 04:16:42 +0000 (0:00:05.497) 0:03:27.486 *******
2026-03-25 04:16:43.049729 | orchestrator | ok: [localhost] => {
2026-03-25 04:16:43.049733 | orchestrator |  "msg": "192.168.112.153"
2026-03-25 04:16:43.049737 | orchestrator | }
2026-03-25 04:16:43.049741 | orchestrator |
2026-03-25 04:16:43.049744 | orchestrator | PLAY RECAP *********************************************************************
2026-03-25 04:16:43.049750 | orchestrator | localhost : ok=26  changed=23  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-03-25 04:16:43.049754 | orchestrator |
2026-03-25 04:16:43.049758 | orchestrator |
2026-03-25 04:16:43.049762 | orchestrator | TASKS RECAP ********************************************************************
2026-03-25 04:16:43.049766 | orchestrator | Wednesday 25 March 2026 04:16:42 +0000 (0:00:00.049) 0:03:27.536 *******
2026-03-25 04:16:43.049769 | orchestrator | ===============================================================================
2026-03-25 04:16:43.049773 | orchestrator | Wait for instance creation to complete --------------------------------- 48.46s
2026-03-25 04:16:43.049777 | orchestrator | Attach test volume ----------------------------------------------------- 13.93s
2026-03-25 04:16:43.049789 | orchestrator | Add member roles to user test ------------------------------------------ 12.99s
2026-03-25 04:16:43.049793 | orchestrator | Wait for tags to be added ---------------------------------------------- 10.63s
2026-03-25 04:16:43.049797 | orchestrator | Wait for metadata to be added ------------------------------------------ 10.46s
2026-03-25 04:16:43.049801 | orchestrator | Create test router ----------------------------------------------------- 10.34s
2026-03-25 04:16:43.049804 | orchestrator | Add manager role to user test-admin ------------------------------------- 7.65s
2026-03-25 04:16:43.049835 | orchestrator | Create test volume ------------------------------------------------------ 6.94s
2026-03-25 04:16:43.049839 | orchestrator | Add tag to instances ---------------------------------------------------- 6.52s
2026-03-25 04:16:43.049843 | orchestrator | Add metadata to instances ----------------------------------------------- 6.17s
2026-03-25 04:16:43.049847 | orchestrator | Create test instances --------------------------------------------------- 6.04s
2026-03-25 04:16:43.049851 | orchestrator | Create test subnet ------------------------------------------------------ 5.92s
2026-03-25 04:16:43.049855 | orchestrator | Create floating ip address ---------------------------------------------- 5.50s
2026-03-25 04:16:43.049858 | orchestrator | Create ssh security group ----------------------------------------------- 5.22s
2026-03-25 04:16:43.049862 | orchestrator | Create test network ----------------------------------------------------- 5.00s
2026-03-25 04:16:43.049866 | orchestrator | Create test server group ------------------------------------------------ 4.90s
2026-03-25 04:16:43.049869 | orchestrator | Create test user -------------------------------------------------------- 4.69s
2026-03-25 04:16:43.049873 | orchestrator | Create test-admin user -------------------------------------------------- 4.64s
2026-03-25 04:16:43.049877 | orchestrator | Add rule to icmp security group ----------------------------------------- 4.58s
2026-03-25 04:16:43.049881 | orchestrator | Add rule to ssh security group ------------------------------------------ 4.57s
2026-03-25 04:16:43.483228 | orchestrator | + server_list
2026-03-25 04:16:43.483315 | orchestrator | + openstack --os-cloud test server list
2026-03-25 04:16:47.348353 | orchestrator | +--------------------------------------+--------+--------+---------------------------------------+--------------------------+----------+
2026-03-25 04:16:47.348425 | orchestrator | | ID | Name | Status | Networks | Image | Flavor |
2026-03-25 04:16:47.348431 | orchestrator | +--------------------------------------+--------+--------+---------------------------------------+--------------------------+----------+
2026-03-25 04:16:47.348435 | orchestrator | | 354814f4-88dd-44d5-b21f-a8cf4a736aee | test-3 | ACTIVE | test=192.168.112.182, 192.168.200.224 | N/A (booted from volume) | SCS-1L-1 |
2026-03-25 04:16:47.348439 | orchestrator | | 354b93df-cdf9-4542-8a52-e30074fbeaea | test-2 | ACTIVE | test=192.168.112.169, 192.168.200.58 | N/A (booted from volume) | SCS-1L-1 |
2026-03-25 04:16:47.348444 | orchestrator | | 542755dd-b5f1-4c2b-b8d7-4cbb3c5c743b | test-4 | ACTIVE | test=192.168.112.128, 192.168.200.98 | N/A (booted from volume) | SCS-1L-1 |
2026-03-25 04:16:47.348448 | orchestrator | | e4f33962-ee9e-4f49-b5fe-65f7eeac5c87 | test-1 | ACTIVE | test=192.168.112.119, 192.168.200.60 | N/A (booted from volume) | SCS-1L-1 |
2026-03-25 04:16:47.348451 | orchestrator | | e9c66d52-063a-4d98-bd4e-35ee0ebbc599 | test | ACTIVE | test=192.168.112.153, 192.168.200.87 | N/A (booted from volume) | SCS-1L-1 |
2026-03-25 04:16:47.348455 | orchestrator | +--------------------------------------+--------+--------+---------------------------------------+--------------------------+----------+
2026-03-25 04:16:47.698726 | orchestrator | + openstack --os-cloud test server show test
2026-03-25 04:16:51.103112 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-03-25 04:16:51.103190 | orchestrator | | Field | Value |
2026-03-25 04:16:51.103215 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-03-25 04:16:51.103226 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2026-03-25 04:16:51.103233 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2026-03-25 04:16:51.103239 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2026-03-25 04:16:51.103245 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test |
2026-03-25 04:16:51.103251 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2026-03-25 04:16:51.103257 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2026-03-25 04:16:51.103277 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2026-03-25 04:16:51.103284 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2026-03-25 04:16:51.103296 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2026-03-25 04:16:51.103302 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2026-03-25 04:16:51.103312 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2026-03-25 04:16:51.103318 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2026-03-25 04:16:51.103324 | orchestrator | | OS-EXT-STS:power_state | Running |
2026-03-25 04:16:51.103329 | orchestrator | | OS-EXT-STS:task_state | None |
2026-03-25 04:16:51.103336 | orchestrator | | OS-EXT-STS:vm_state | active |
2026-03-25 04:16:51.103342 | orchestrator | | OS-SRV-USG:launched_at | 2026-03-25T04:15:22.000000 |
2026-03-25 04:16:51.103354 | orchestrator | | OS-SRV-USG:terminated_at | None |
2026-03-25 04:16:51.103370 | orchestrator | | accessIPv4 | |
2026-03-25 04:16:51.103374 | orchestrator | | accessIPv6 | |
2026-03-25 04:16:51.103379 | orchestrator | | addresses | test=192.168.112.153, 192.168.200.87 |
2026-03-25 04:16:51.103385 | orchestrator | | config_drive | |
2026-03-25 04:16:51.103389 | orchestrator | | created | 2026-03-25T04:14:58Z |
2026-03-25 04:16:51.103393 | orchestrator | | description | None |
2026-03-25 04:16:51.103397 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2026-03-25 04:16:51.103400 | orchestrator | | hostId | 1617614d09198053816a01e3a979b075273f15c39f5a72ede667937e |
2026-03-25 04:16:51.103404 | orchestrator | | host_status | None |
2026-03-25 04:16:51.103415 | orchestrator | | id | e9c66d52-063a-4d98-bd4e-35ee0ebbc599 |
2026-03-25 04:16:51.103419 | orchestrator | | image | N/A (booted from volume) |
2026-03-25 04:16:51.103423 | orchestrator | | key_name | test |
2026-03-25 04:16:51.103429 | orchestrator | | locked | False |
2026-03-25 04:16:51.103452 | orchestrator | | locked_reason | None |
2026-03-25 04:16:51.103456 | orchestrator | | name | test |
2026-03-25 04:16:51.103460 | orchestrator | | pinned_availability_zone | None |
2026-03-25 04:16:51.103464 | orchestrator | | progress | 0 |
2026-03-25 04:16:51.103468 | orchestrator | | project_id | 424a6ccd4783408bb170c8e3d27e31e2 |
2026-03-25 04:16:51.103475 | orchestrator | | properties | hostname='test' |
2026-03-25 04:16:51.103484 | orchestrator | | security_groups | name='ssh' |
2026-03-25 04:16:51.103488 | orchestrator | | | name='icmp' |
2026-03-25 04:16:51.103492 | orchestrator | | server_groups | None |
2026-03-25 04:16:51.103496 | orchestrator | | status | ACTIVE |
2026-03-25 04:16:51.103500 | orchestrator | | tags | test |
2026-03-25 04:16:51.103511 | orchestrator | | trusted_image_certificates | None |
2026-03-25 04:16:51.103515 | orchestrator | | updated | 2026-03-25T04:15:49Z |
2026-03-25 04:16:51.103518 | orchestrator | | user_id | 4b6d2be88b9843f1b211f9a0e90ecfa9 |
2026-03-25 04:16:51.103522 | orchestrator | | volumes_attached | delete_on_termination='True', id='68d1d227-d62f-4110-b748-4383a9ae177f' |
2026-03-25 04:16:51.103530 | orchestrator | | | delete_on_termination='False', id='0cdd695e-5086-47e5-9e84-28d14e895f83' |
2026-03-25 04:16:51.108064 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-03-25 04:16:51.464448 | orchestrator | + openstack --os-cloud test server show test-1
2026-03-25 04:16:54.625690 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-03-25 04:16:54.625771 | orchestrator | | Field | Value |
2026-03-25 04:16:54.625798 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-03-25 04:16:54.625808 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2026-03-25 04:16:54.625815 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2026-03-25 04:16:54.625822 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2026-03-25 04:16:54.625829 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-1 |
2026-03-25 04:16:54.625853 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2026-03-25 04:16:54.625860 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2026-03-25 04:16:54.625881 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2026-03-25 04:16:54.625888 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2026-03-25 04:16:54.625895 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2026-03-25 04:16:54.625906 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2026-03-25 04:16:54.625913 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2026-03-25 04:16:54.625920 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2026-03-25 04:16:54.625926 | orchestrator | | OS-EXT-STS:power_state | Running |
2026-03-25 04:16:54.625940 | orchestrator | | OS-EXT-STS:task_state | None |
2026-03-25 04:16:54.625944 | orchestrator | | OS-EXT-STS:vm_state | active |
2026-03-25 04:16:54.625948 | orchestrator | | OS-SRV-USG:launched_at | 2026-03-25T04:15:24.000000 |
2026-03-25 04:16:54.625957 | orchestrator | | OS-SRV-USG:terminated_at | None |
2026-03-25 04:16:54.625962 | orchestrator | | accessIPv4 | |
2026-03-25 04:16:54.625966 | orchestrator | | accessIPv6 | |
2026-03-25 04:16:54.625973 | orchestrator | | addresses | test=192.168.112.119, 192.168.200.60 |
2026-03-25 04:16:54.625977 | orchestrator | | config_drive | |
2026-03-25 04:16:54.625981 | orchestrator | | created | 2026-03-25T04:14:59Z |
2026-03-25 04:16:54.625989 | orchestrator | | description | None |
2026-03-25 04:16:54.625993 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2026-03-25 04:16:54.625997 | orchestrator | | hostId | 1617614d09198053816a01e3a979b075273f15c39f5a72ede667937e |
2026-03-25 04:16:54.626002 | orchestrator | | host_status | None |
2026-03-25 04:16:54.626011 | orchestrator | | id | e4f33962-ee9e-4f49-b5fe-65f7eeac5c87 |
2026-03-25 04:16:54.626153 | orchestrator | | image | N/A (booted from volume) |
2026-03-25 04:16:54.626162 | orchestrator | | key_name | test |
2026-03-25 04:16:54.626173 | orchestrator | | locked | False |
2026-03-25 04:16:54.626180 | orchestrator | | locked_reason | None |
2026-03-25 04:16:54.626188 | orchestrator | | name | test-1 |
2026-03-25 04:16:54.626201 | orchestrator | | pinned_availability_zone | None |
2026-03-25 04:16:54.626208 |
orchestrator | | progress | 0 | 2026-03-25 04:16:54.626216 | orchestrator | | project_id | 424a6ccd4783408bb170c8e3d27e31e2 | 2026-03-25 04:16:54.626224 | orchestrator | | properties | hostname='test-1' | 2026-03-25 04:16:54.626239 | orchestrator | | security_groups | name='ssh' | 2026-03-25 04:16:54.626246 | orchestrator | | | name='icmp' | 2026-03-25 04:16:54.626253 | orchestrator | | server_groups | None | 2026-03-25 04:16:54.626264 | orchestrator | | status | ACTIVE | 2026-03-25 04:16:54.626271 | orchestrator | | tags | test | 2026-03-25 04:16:54.626291 | orchestrator | | trusted_image_certificates | None | 2026-03-25 04:16:54.626299 | orchestrator | | updated | 2026-03-25T04:15:50Z | 2026-03-25 04:16:54.626306 | orchestrator | | user_id | 4b6d2be88b9843f1b211f9a0e90ecfa9 | 2026-03-25 04:16:54.626313 | orchestrator | | volumes_attached | delete_on_termination='True', id='a6e654c7-900c-4bf5-a768-e8682888e824' | 2026-03-25 04:16:54.630957 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-25 04:16:55.019782 | orchestrator | + openstack --os-cloud test server show test-2 2026-03-25 04:16:58.349358 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-25 04:16:58.349449 | orchestrator | | Field | Value | 
2026-03-25 04:16:58.349458 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-25 04:16:58.349464 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-03-25 04:16:58.349483 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-03-25 04:16:58.349489 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-03-25 04:16:58.349493 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-2 | 2026-03-25 04:16:58.349498 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-03-25 04:16:58.349514 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-03-25 04:16:58.349533 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-03-25 04:16:58.349540 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-03-25 04:16:58.349547 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-03-25 04:16:58.349557 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-03-25 04:16:58.349572 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-03-25 04:16:58.349577 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-03-25 04:16:58.349581 | orchestrator | | OS-EXT-STS:power_state | Running | 2026-03-25 04:16:58.349586 | orchestrator | | OS-EXT-STS:task_state | None | 2026-03-25 04:16:58.349590 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-03-25 04:16:58.349595 | orchestrator | | OS-SRV-USG:launched_at | 2026-03-25T04:15:25.000000 | 2026-03-25 04:16:58.349604 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-03-25 04:16:58.349609 | orchestrator | | accessIPv4 | | 2026-03-25 04:16:58.349613 | orchestrator | | accessIPv6 | | 2026-03-25 
04:16:58.349624 | orchestrator | | addresses | test=192.168.112.169, 192.168.200.58 | 2026-03-25 04:16:58.349629 | orchestrator | | config_drive | | 2026-03-25 04:16:58.349633 | orchestrator | | created | 2026-03-25T04:15:00Z | 2026-03-25 04:16:58.349638 | orchestrator | | description | None | 2026-03-25 04:16:58.349642 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-03-25 04:16:58.349647 | orchestrator | | hostId | e841353f04549ad5bcc513b1d86c57609540176eabeb5897de17cc59 | 2026-03-25 04:16:58.349651 | orchestrator | | host_status | None | 2026-03-25 04:16:58.349660 | orchestrator | | id | 354b93df-cdf9-4542-8a52-e30074fbeaea | 2026-03-25 04:16:58.349665 | orchestrator | | image | N/A (booted from volume) | 2026-03-25 04:16:58.349669 | orchestrator | | key_name | test | 2026-03-25 04:16:58.349680 | orchestrator | | locked | False | 2026-03-25 04:16:58.349685 | orchestrator | | locked_reason | None | 2026-03-25 04:16:58.349689 | orchestrator | | name | test-2 | 2026-03-25 04:16:58.349694 | orchestrator | | pinned_availability_zone | None | 2026-03-25 04:16:58.349698 | orchestrator | | progress | 0 | 2026-03-25 04:16:58.349703 | orchestrator | | project_id | 424a6ccd4783408bb170c8e3d27e31e2 | 2026-03-25 04:16:58.349707 | orchestrator | | properties | hostname='test-2' | 2026-03-25 04:16:58.349715 | orchestrator | | security_groups | name='ssh' | 2026-03-25 04:16:58.349720 | orchestrator | | | name='icmp' | 2026-03-25 04:16:58.349728 | orchestrator | | server_groups | None | 2026-03-25 04:16:58.349735 | orchestrator | | status | ACTIVE | 2026-03-25 04:16:58.349740 | orchestrator | | tags | test | 2026-03-25 
04:16:58.349744 | orchestrator | | trusted_image_certificates | None | 2026-03-25 04:16:58.349749 | orchestrator | | updated | 2026-03-25T04:15:51Z | 2026-03-25 04:16:58.349753 | orchestrator | | user_id | 4b6d2be88b9843f1b211f9a0e90ecfa9 | 2026-03-25 04:16:58.349760 | orchestrator | | volumes_attached | delete_on_termination='True', id='cabc6fc3-6b20-4364-a5a0-6f3604aa5919' | 2026-03-25 04:16:58.349767 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-25 04:16:58.731796 | orchestrator | + openstack --os-cloud test server show test-3 2026-03-25 04:17:02.266656 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-25 04:17:02.266775 | orchestrator | | Field | Value | 2026-03-25 04:17:02.266793 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-25 04:17:02.266814 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-03-25 
04:17:02.266820 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-03-25 04:17:02.266827 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-03-25 04:17:02.266833 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-3 | 2026-03-25 04:17:02.266840 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-03-25 04:17:02.266846 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-03-25 04:17:02.266867 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-03-25 04:17:02.266880 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-03-25 04:17:02.266886 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-03-25 04:17:02.266893 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-03-25 04:17:02.266902 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-03-25 04:17:02.266909 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-03-25 04:17:02.266915 | orchestrator | | OS-EXT-STS:power_state | Running | 2026-03-25 04:17:02.266922 | orchestrator | | OS-EXT-STS:task_state | None | 2026-03-25 04:17:02.266928 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-03-25 04:17:02.266934 | orchestrator | | OS-SRV-USG:launched_at | 2026-03-25T04:15:25.000000 | 2026-03-25 04:17:02.266949 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-03-25 04:17:02.266956 | orchestrator | | accessIPv4 | | 2026-03-25 04:17:02.266963 | orchestrator | | accessIPv6 | | 2026-03-25 04:17:02.266970 | orchestrator | | addresses | test=192.168.112.182, 192.168.200.224 | 2026-03-25 04:17:02.266977 | orchestrator | | config_drive | | 2026-03-25 04:17:02.266983 | orchestrator | | created | 2026-03-25T04:15:00Z | 2026-03-25 04:17:02.266990 | orchestrator | | description | None | 2026-03-25 04:17:02.266996 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', 
extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-03-25 04:17:02.267003 | orchestrator | | hostId | c5510997e390fe873bc136c1e681e7db7ab7826b3d1633cceb9989b0 | 2026-03-25 04:17:02.267071 | orchestrator | | host_status | None | 2026-03-25 04:17:02.267090 | orchestrator | | id | 354814f4-88dd-44d5-b21f-a8cf4a736aee | 2026-03-25 04:17:02.267459 | orchestrator | | image | N/A (booted from volume) | 2026-03-25 04:17:02.267480 | orchestrator | | key_name | test | 2026-03-25 04:17:02.267487 | orchestrator | | locked | False | 2026-03-25 04:17:02.267494 | orchestrator | | locked_reason | None | 2026-03-25 04:17:02.267500 | orchestrator | | name | test-3 | 2026-03-25 04:17:02.267507 | orchestrator | | pinned_availability_zone | None | 2026-03-25 04:17:02.267514 | orchestrator | | progress | 0 | 2026-03-25 04:17:02.267520 | orchestrator | | project_id | 424a6ccd4783408bb170c8e3d27e31e2 | 2026-03-25 04:17:02.267534 | orchestrator | | properties | hostname='test-3' | 2026-03-25 04:17:02.267549 | orchestrator | | security_groups | name='ssh' | 2026-03-25 04:17:02.267561 | orchestrator | | | name='icmp' | 2026-03-25 04:17:02.267568 | orchestrator | | server_groups | None | 2026-03-25 04:17:02.267574 | orchestrator | | status | ACTIVE | 2026-03-25 04:17:02.267580 | orchestrator | | tags | test | 2026-03-25 04:17:02.267587 | orchestrator | | trusted_image_certificates | None | 2026-03-25 04:17:02.267593 | orchestrator | | updated | 2026-03-25T04:15:52Z | 2026-03-25 04:17:02.267600 | orchestrator | | user_id | 4b6d2be88b9843f1b211f9a0e90ecfa9 | 2026-03-25 04:17:02.267610 | orchestrator | | volumes_attached | delete_on_termination='True', id='4e77d57b-b8fb-4952-ade5-254edb26445e' | 2026-03-25 04:17:02.272648 | orchestrator | 
+-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-25 04:17:02.668692 | orchestrator | + openstack --os-cloud test server show test-4 2026-03-25 04:17:06.013872 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-25 04:17:06.013956 | orchestrator | | Field | Value | 2026-03-25 04:17:06.013963 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-25 04:17:06.013967 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-03-25 04:17:06.013971 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-03-25 04:17:06.013975 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-03-25 04:17:06.013979 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-4 | 2026-03-25 04:17:06.013997 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-03-25 04:17:06.014003 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-03-25 
04:17:06.014129 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-03-25 04:17:06.014144 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-03-25 04:17:06.014151 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-03-25 04:17:06.014157 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-03-25 04:17:06.014163 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-03-25 04:17:06.014169 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-03-25 04:17:06.014175 | orchestrator | | OS-EXT-STS:power_state | Running | 2026-03-25 04:17:06.014187 | orchestrator | | OS-EXT-STS:task_state | None | 2026-03-25 04:17:06.014194 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-03-25 04:17:06.014200 | orchestrator | | OS-SRV-USG:launched_at | 2026-03-25T04:15:24.000000 | 2026-03-25 04:17:06.014213 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-03-25 04:17:06.014223 | orchestrator | | accessIPv4 | | 2026-03-25 04:17:06.014230 | orchestrator | | accessIPv6 | | 2026-03-25 04:17:06.014234 | orchestrator | | addresses | test=192.168.112.128, 192.168.200.98 | 2026-03-25 04:17:06.014238 | orchestrator | | config_drive | | 2026-03-25 04:17:06.014242 | orchestrator | | created | 2026-03-25T04:15:00Z | 2026-03-25 04:17:06.014246 | orchestrator | | description | None | 2026-03-25 04:17:06.014254 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-03-25 04:17:06.014257 | orchestrator | | hostId | e841353f04549ad5bcc513b1d86c57609540176eabeb5897de17cc59 | 2026-03-25 04:17:06.014261 | orchestrator | | host_status | None | 2026-03-25 04:17:06.014271 | orchestrator | | id | 
542755dd-b5f1-4c2b-b8d7-4cbb3c5c743b | 2026-03-25 04:17:06.014277 | orchestrator | | image | N/A (booted from volume) | 2026-03-25 04:17:06.014281 | orchestrator | | key_name | test | 2026-03-25 04:17:06.014285 | orchestrator | | locked | False | 2026-03-25 04:17:06.014289 | orchestrator | | locked_reason | None | 2026-03-25 04:17:06.014293 | orchestrator | | name | test-4 | 2026-03-25 04:17:06.014299 | orchestrator | | pinned_availability_zone | None | 2026-03-25 04:17:06.014303 | orchestrator | | progress | 0 | 2026-03-25 04:17:06.014307 | orchestrator | | project_id | 424a6ccd4783408bb170c8e3d27e31e2 | 2026-03-25 04:17:06.014311 | orchestrator | | properties | hostname='test-4' | 2026-03-25 04:17:06.014319 | orchestrator | | security_groups | name='ssh' | 2026-03-25 04:17:06.014325 | orchestrator | | | name='icmp' | 2026-03-25 04:17:06.014330 | orchestrator | | server_groups | None | 2026-03-25 04:17:06.014334 | orchestrator | | status | ACTIVE | 2026-03-25 04:17:06.014338 | orchestrator | | tags | test | 2026-03-25 04:17:06.014348 | orchestrator | | trusted_image_certificates | None | 2026-03-25 04:17:06.014354 | orchestrator | | updated | 2026-03-25T04:15:53Z | 2026-03-25 04:17:06.014383 | orchestrator | | user_id | 4b6d2be88b9843f1b211f9a0e90ecfa9 | 2026-03-25 04:17:06.014390 | orchestrator | | volumes_attached | delete_on_termination='True', id='d093834c-4886-48a4-9b50-940c79a3d3b5' | 2026-03-25 04:17:06.019939 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-25 04:17:06.367206 | orchestrator | + server_ping 2026-03-25 04:17:06.367755 | orchestrator | ++ openstack --os-cloud 
test floating ip list --status ACTIVE -f value -c 'Floating IP Address' 2026-03-25 04:17:06.367773 | orchestrator | ++ tr -d '\r' 2026-03-25 04:17:09.532078 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-03-25 04:17:09.532186 | orchestrator | + ping -c3 192.168.112.169 2026-03-25 04:17:09.545436 | orchestrator | PING 192.168.112.169 (192.168.112.169) 56(84) bytes of data. 2026-03-25 04:17:09.545529 | orchestrator | 64 bytes from 192.168.112.169: icmp_seq=1 ttl=63 time=5.88 ms 2026-03-25 04:17:10.542543 | orchestrator | 64 bytes from 192.168.112.169: icmp_seq=2 ttl=63 time=2.19 ms 2026-03-25 04:17:11.544774 | orchestrator | 64 bytes from 192.168.112.169: icmp_seq=3 ttl=63 time=2.11 ms 2026-03-25 04:17:11.544869 | orchestrator | 2026-03-25 04:17:11.544885 | orchestrator | --- 192.168.112.169 ping statistics --- 2026-03-25 04:17:11.544917 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms 2026-03-25 04:17:11.544929 | orchestrator | rtt min/avg/max/mdev = 2.108/3.393/5.884/1.761 ms 2026-03-25 04:17:11.544940 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-03-25 04:17:11.544953 | orchestrator | + ping -c3 192.168.112.119 2026-03-25 04:17:11.557974 | orchestrator | PING 192.168.112.119 (192.168.112.119) 56(84) bytes of data. 
2026-03-25 04:17:11.558146 | orchestrator | 64 bytes from 192.168.112.119: icmp_seq=1 ttl=63 time=8.79 ms 2026-03-25 04:17:12.553184 | orchestrator | 64 bytes from 192.168.112.119: icmp_seq=2 ttl=63 time=1.98 ms 2026-03-25 04:17:13.554509 | orchestrator | 64 bytes from 192.168.112.119: icmp_seq=3 ttl=63 time=1.64 ms 2026-03-25 04:17:13.554592 | orchestrator | 2026-03-25 04:17:13.554599 | orchestrator | --- 192.168.112.119 ping statistics --- 2026-03-25 04:17:13.554605 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-03-25 04:17:13.554629 | orchestrator | rtt min/avg/max/mdev = 1.644/4.136/8.788/3.291 ms 2026-03-25 04:17:13.555062 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-03-25 04:17:13.555147 | orchestrator | + ping -c3 192.168.112.128 2026-03-25 04:17:13.569132 | orchestrator | PING 192.168.112.128 (192.168.112.128) 56(84) bytes of data. 2026-03-25 04:17:13.569211 | orchestrator | 64 bytes from 192.168.112.128: icmp_seq=1 ttl=63 time=9.33 ms 2026-03-25 04:17:14.563823 | orchestrator | 64 bytes from 192.168.112.128: icmp_seq=2 ttl=63 time=2.39 ms 2026-03-25 04:17:15.565072 | orchestrator | 64 bytes from 192.168.112.128: icmp_seq=3 ttl=63 time=1.80 ms 2026-03-25 04:17:15.565166 | orchestrator | 2026-03-25 04:17:15.565177 | orchestrator | --- 192.168.112.128 ping statistics --- 2026-03-25 04:17:15.565187 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-03-25 04:17:15.565193 | orchestrator | rtt min/avg/max/mdev = 1.795/4.505/9.330/3.420 ms 2026-03-25 04:17:15.565201 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-03-25 04:17:15.565208 | orchestrator | + ping -c3 192.168.112.153 2026-03-25 04:17:15.576445 | orchestrator | PING 192.168.112.153 (192.168.112.153) 56(84) bytes of data. 
2026-03-25 04:17:15.576524 | orchestrator | 64 bytes from 192.168.112.153: icmp_seq=1 ttl=63 time=8.68 ms 2026-03-25 04:17:16.571879 | orchestrator | 64 bytes from 192.168.112.153: icmp_seq=2 ttl=63 time=2.41 ms 2026-03-25 04:17:17.571863 | orchestrator | 64 bytes from 192.168.112.153: icmp_seq=3 ttl=63 time=1.70 ms 2026-03-25 04:17:17.571936 | orchestrator | 2026-03-25 04:17:17.571950 | orchestrator | --- 192.168.112.153 ping statistics --- 2026-03-25 04:17:17.571956 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms 2026-03-25 04:17:17.571961 | orchestrator | rtt min/avg/max/mdev = 1.704/4.262/8.678/3.135 ms 2026-03-25 04:17:17.572420 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-03-25 04:17:17.572441 | orchestrator | + ping -c3 192.168.112.182 2026-03-25 04:17:17.584560 | orchestrator | PING 192.168.112.182 (192.168.112.182) 56(84) bytes of data. 2026-03-25 04:17:17.584663 | orchestrator | 64 bytes from 192.168.112.182: icmp_seq=1 ttl=63 time=7.86 ms 2026-03-25 04:17:18.580817 | orchestrator | 64 bytes from 192.168.112.182: icmp_seq=2 ttl=63 time=2.74 ms 2026-03-25 04:17:19.582331 | orchestrator | 64 bytes from 192.168.112.182: icmp_seq=3 ttl=63 time=1.81 ms 2026-03-25 04:17:19.582422 | orchestrator | 2026-03-25 04:17:19.582431 | orchestrator | --- 192.168.112.182 ping statistics --- 2026-03-25 04:17:19.582437 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-03-25 04:17:19.582441 | orchestrator | rtt min/avg/max/mdev = 1.808/4.136/7.858/2.659 ms 2026-03-25 04:17:19.582859 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]] 2026-03-25 04:17:20.026984 | orchestrator | ok: Runtime: 0:08:32.618185 2026-03-25 04:17:20.079691 | 2026-03-25 04:17:20.079873 | TASK [Run tempest] 2026-03-25 04:17:20.615083 | orchestrator | skipping: Conditional result was False 2026-03-25 04:17:20.633294 | 2026-03-25 
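The `server_ping` step traced above iterates over every ACTIVE floating IP returned by the CLI and pings each three times; the `tr -d '\r'` strips carriage returns from the CLI output before the loop. A minimal runnable sketch of that loop, with the `openstack` command stubbed out (the stub and its sample addresses are assumptions, so the logic can run without a cloud):

```shell
#!/bin/sh
# Hypothetical stub of the openstack CLI: the real command lists active
# floating IPs; output may carry trailing carriage returns, which is why
# the traced script pipes through `tr -d '\r'`.
openstack() {
  printf '192.168.112.169\r\n192.168.112.119\r\n'
}

server_ping() {
  # Mirrors the logged loop; echoes the ping command instead of sending
  # ICMP so the sketch is runnable anywhere.
  for address in $(openstack --os-cloud test floating ip list \
      --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r'); do
    echo "ping -c3 ${address}"
  done
}

server_ping
```

In the real job the loop body is `ping -c3 "$address"`, and a non-zero exit from any ping fails the step because the script runs under `set -e`.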
04:17:20.633459 | TASK [Check prometheus alert status] 2026-03-25 04:17:21.171168 | orchestrator | skipping: Conditional result was False 2026-03-25 04:17:21.191052 | 2026-03-25 04:17:21.191233 | PLAY [Upgrade testbed] 2026-03-25 04:17:21.203751 | 2026-03-25 04:17:21.204005 | TASK [Print next ceph version] 2026-03-25 04:17:21.286376 | orchestrator | ok 2026-03-25 04:17:21.293612 | 2026-03-25 04:17:21.293741 | TASK [Print next openstack version] 2026-03-25 04:17:21.366959 | orchestrator | ok 2026-03-25 04:17:21.376870 | 2026-03-25 04:17:21.377050 | TASK [Print next manager version] 2026-03-25 04:17:21.447085 | orchestrator | ok 2026-03-25 04:17:21.457742 | 2026-03-25 04:17:21.457867 | TASK [Set cloud fact (Zuul deployment)] 2026-03-25 04:17:21.516460 | orchestrator | ok 2026-03-25 04:17:21.527950 | 2026-03-25 04:17:21.528078 | TASK [Set cloud fact (local deployment)] 2026-03-25 04:17:21.564677 | orchestrator | skipping: Conditional result was False 2026-03-25 04:17:21.580275 | 2026-03-25 04:17:21.580411 | TASK [Fetch manager address] 2026-03-25 04:17:21.878236 | orchestrator | ok 2026-03-25 04:17:21.887318 | 2026-03-25 04:17:21.887487 | TASK [Set manager_host address] 2026-03-25 04:17:21.967275 | orchestrator | ok 2026-03-25 04:17:21.979399 | 2026-03-25 04:17:21.979547 | TASK [Run upgrade] 2026-03-25 04:17:22.628758 | orchestrator | + set -e 2026-03-25 04:17:22.628951 | orchestrator | + export MANAGER_VERSION=10.0.0-rc.1 2026-03-25 04:17:22.628966 | orchestrator | + MANAGER_VERSION=10.0.0-rc.1 2026-03-25 04:17:22.628979 | orchestrator | + CEPH_VERSION=reef 2026-03-25 04:17:22.628988 | orchestrator | + OPENSTACK_VERSION=2024.2 2026-03-25 04:17:22.628996 | orchestrator | + KOLLA_NAMESPACE=kolla/release 2026-03-25 04:17:22.629067 | orchestrator | + sh -c '/opt/configuration/scripts/upgrade-manager.sh 10.0.0-rc.1 reef 2024.2 kolla/release' 2026-03-25 04:17:22.634914 | orchestrator | + set -e 2026-03-25 04:17:22.635019 | orchestrator | + source 
/opt/configuration/scripts/include.sh 2026-03-25 04:17:22.635034 | orchestrator | ++ export INTERACTIVE=false 2026-03-25 04:17:22.635045 | orchestrator | ++ INTERACTIVE=false 2026-03-25 04:17:22.635052 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-25 04:17:22.635065 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-03-25 04:17:22.635687 | orchestrator | ++ docker inspect --format '{{ index .Config.Labels "org.opencontainers.image.version"}}' osism-ansible 2026-03-25 04:17:22.664207 | orchestrator | + OLD_MANAGER_VERSION=v0.20251130.0 2026-03-25 04:17:22.664834 | orchestrator | ++ docker inspect --format '{{ index .Config.Labels "de.osism.release.openstack"}}' kolla-ansible 2026-03-25 04:17:22.694778 | orchestrator | 2026-03-25 04:17:22.694871 | orchestrator | # UPGRADE MANAGER 2026-03-25 04:17:22.694888 | orchestrator | 2026-03-25 04:17:22.694898 | orchestrator | + OLD_OPENSTACK_VERSION=2024.2 2026-03-25 04:17:22.694907 | orchestrator | + echo 2026-03-25 04:17:22.694916 | orchestrator | + echo '# UPGRADE MANAGER' 2026-03-25 04:17:22.694934 | orchestrator | + echo 2026-03-25 04:17:22.694944 | orchestrator | + export MANAGER_VERSION=10.0.0-rc.1 2026-03-25 04:17:22.694954 | orchestrator | + MANAGER_VERSION=10.0.0-rc.1 2026-03-25 04:17:22.694963 | orchestrator | + CEPH_VERSION=reef 2026-03-25 04:17:22.694972 | orchestrator | + OPENSTACK_VERSION=2024.2 2026-03-25 04:17:22.694981 | orchestrator | + KOLLA_NAMESPACE=kolla/release 2026-03-25 04:17:22.694990 | orchestrator | + /opt/configuration/scripts/set-manager-version.sh 10.0.0-rc.1 2026-03-25 04:17:22.698727 | orchestrator | + set -e 2026-03-25 04:17:22.698794 | orchestrator | + VERSION=10.0.0-rc.1 2026-03-25 04:17:22.698802 | orchestrator | + sed -i 's/manager_version: .*/manager_version: 10.0.0-rc.1/g' /opt/configuration/environments/manager/configuration.yml 2026-03-25 04:17:22.701489 | orchestrator | + [[ 10.0.0-rc.1 != \l\a\t\e\s\t ]] 2026-03-25 04:17:22.701531 | orchestrator | + sed -i /ceph_version:/d 
/opt/configuration/environments/manager/configuration.yml 2026-03-25 04:17:22.704613 | orchestrator | + sed -i /openstack_version:/d /opt/configuration/environments/manager/configuration.yml 2026-03-25 04:17:22.707609 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh 2026-03-25 04:17:22.713337 | orchestrator | /opt/configuration ~ 2026-03-25 04:17:22.713407 | orchestrator | + set -e 2026-03-25 04:17:22.713414 | orchestrator | + pushd /opt/configuration 2026-03-25 04:17:22.713421 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-03-25 04:17:22.713430 | orchestrator | + source /opt/venv/bin/activate 2026-03-25 04:17:22.714120 | orchestrator | ++ deactivate nondestructive 2026-03-25 04:17:22.714217 | orchestrator | ++ '[' -n '' ']' 2026-03-25 04:17:22.714684 | orchestrator | ++ '[' -n '' ']' 2026-03-25 04:17:22.714695 | orchestrator | ++ hash -r 2026-03-25 04:17:22.714700 | orchestrator | ++ '[' -n '' ']' 2026-03-25 04:17:22.714705 | orchestrator | ++ unset VIRTUAL_ENV 2026-03-25 04:17:22.714710 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2026-03-25 04:17:22.714715 | orchestrator | ++ '[' '!' 
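The `set-manager-version.sh` trace above pins `manager_version` in the manager configuration with an in-place `sed` substitution and, for any concrete (non-`latest`) version, deletes the explicit `ceph_version`/`openstack_version` lines so those follow the release defaults. A sketch of those edits against a temporary copy (the temp file and its sample starting values are assumptions, not the real `/opt/configuration` contents):

```shell
#!/bin/sh
# Sketch of the sed edits seen in the set-manager-version.sh trace,
# applied to a throwaway file instead of the real configuration.yml.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
manager_version: 9.5.0
ceph_version: quincy
openstack_version: 2024.1
EOF

VERSION=10.0.0-rc.1
# Pin the manager version in place (GNU sed -i, as on the Linux node).
sed -i "s/manager_version: .*/manager_version: ${VERSION}/g" "$cfg"
# For non-latest versions, drop the explicit ceph/openstack pins.
if [ "$VERSION" != "latest" ]; then
  sed -i '/ceph_version:/d' "$cfg"
  sed -i '/openstack_version:/d' "$cfg"
fi
cat "$cfg"
```

After these edits only the pinned `manager_version: 10.0.0-rc.1` line remains, matching the three commands logged in the trace.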
nondestructive = nondestructive ']' 2026-03-25 04:17:22.714721 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2026-03-25 04:17:22.714726 | orchestrator | ++ '[' linux-gnu = msys ']' 2026-03-25 04:17:22.714731 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2026-03-25 04:17:22.714735 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2026-03-25 04:17:22.714741 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-03-25 04:17:22.714746 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-03-25 04:17:22.714751 | orchestrator | ++ export PATH 2026-03-25 04:17:22.714756 | orchestrator | ++ '[' -n '' ']' 2026-03-25 04:17:22.714760 | orchestrator | ++ '[' -z '' ']' 2026-03-25 04:17:22.714766 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2026-03-25 04:17:22.714774 | orchestrator | ++ PS1='(venv) ' 2026-03-25 04:17:22.714782 | orchestrator | ++ export PS1 2026-03-25 04:17:22.714788 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2026-03-25 04:17:22.714792 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2026-03-25 04:17:22.714797 | orchestrator | ++ hash -r 2026-03-25 04:17:22.714805 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging 2026-03-25 04:17:23.946956 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3) 2026-03-25 04:17:23.948269 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.32.5) 2026-03-25 04:17:23.950176 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6) 2026-03-25 04:17:23.952202 | orchestrator | Requirement already satisfied: PyYAML in /opt/venv/lib/python3.12/site-packages (6.0.3) 2026-03-25 04:17:23.954249 | orchestrator | Requirement already satisfied: packaging in 
/opt/venv/lib/python3.12/site-packages (26.0) 2026-03-25 04:17:23.973258 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.3.1) 2026-03-25 04:17:23.975467 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6) 2026-03-25 04:17:23.977966 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.20) 2026-03-25 04:17:23.979789 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2) 2026-03-25 04:17:24.036753 | orchestrator | Requirement already satisfied: charset_normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.6) 2026-03-25 04:17:24.039472 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.11) 2026-03-25 04:17:24.041913 | orchestrator | Requirement already satisfied: urllib3<3,>=1.21.1 in /opt/venv/lib/python3.12/site-packages (from requests) (2.6.3) 2026-03-25 04:17:24.043476 | orchestrator | Requirement already satisfied: certifi>=2017.4.17 in /opt/venv/lib/python3.12/site-packages (from requests) (2026.2.25) 2026-03-25 04:17:24.048105 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.3) 2026-03-25 04:17:24.370989 | orchestrator | ++ which gilt 2026-03-25 04:17:24.372838 | orchestrator | + GILT=/opt/venv/bin/gilt 2026-03-25 04:17:24.372881 | orchestrator | + /opt/venv/bin/gilt overlay 2026-03-25 04:17:24.644339 | orchestrator | osism.cfg-generics: 2026-03-25 04:17:24.748404 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/ 2026-03-25 04:17:24.749461 | orchestrator | - copied 
(v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/ 2026-03-25 04:17:24.750635 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/ 2026-03-25 04:17:24.750686 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/ 2026-03-25 04:17:25.911603 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/ 2026-03-25 04:17:25.921843 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/ 2026-03-25 04:17:26.335319 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/ 2026-03-25 04:17:26.400504 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-03-25 04:17:26.400601 | orchestrator | + deactivate 2026-03-25 04:17:26.400608 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2026-03-25 04:17:26.400615 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-03-25 04:17:26.400619 | orchestrator | + export PATH 2026-03-25 04:17:26.400624 | orchestrator | + unset _OLD_VIRTUAL_PATH 2026-03-25 04:17:26.400628 | orchestrator | + '[' -n '' ']' 2026-03-25 04:17:26.400632 | orchestrator | + hash -r 2026-03-25 04:17:26.400636 | orchestrator | + '[' -n '' ']' 2026-03-25 04:17:26.400639 | orchestrator | + unset VIRTUAL_ENV 2026-03-25 04:17:26.400643 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2026-03-25 04:17:26.400647 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2026-03-25 04:17:26.400651 | orchestrator | + unset -f deactivate 2026-03-25 04:17:26.400655 | orchestrator | + popd 2026-03-25 04:17:26.400666 | orchestrator | ~ 2026-03-25 04:17:26.402376 | orchestrator | + [[ 10.0.0-rc.1 == \l\a\t\e\s\t ]] 2026-03-25 04:17:26.402519 | orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh kolla/release 2026-03-25 04:17:26.407104 | orchestrator | + set -e 2026-03-25 04:17:26.407176 | orchestrator | + NAMESPACE=kolla/release 2026-03-25 04:17:26.407190 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla/release#g' /opt/configuration/inventory/group_vars/all/kolla.yml 2026-03-25 04:17:26.413601 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh 2026-03-25 04:17:26.418732 | orchestrator | /opt/configuration ~ 2026-03-25 04:17:26.418858 | orchestrator | + set -e 2026-03-25 04:17:26.418869 | orchestrator | + pushd /opt/configuration 2026-03-25 04:17:26.418877 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-03-25 04:17:26.418885 | orchestrator | + source /opt/venv/bin/activate 2026-03-25 04:17:26.418939 | orchestrator | ++ deactivate nondestructive 2026-03-25 04:17:26.418947 | orchestrator | ++ '[' -n '' ']' 2026-03-25 04:17:26.418953 | orchestrator | ++ '[' -n '' ']' 2026-03-25 04:17:26.418960 | orchestrator | ++ hash -r 2026-03-25 04:17:26.418977 | orchestrator | ++ '[' -n '' ']' 2026-03-25 04:17:26.418983 | orchestrator | ++ unset VIRTUAL_ENV 2026-03-25 04:17:26.418990 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2026-03-25 04:17:26.419019 | orchestrator | ++ '[' '!' 
nondestructive = nondestructive ']' 2026-03-25 04:17:26.419026 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2026-03-25 04:17:26.419033 | orchestrator | ++ '[' linux-gnu = msys ']' 2026-03-25 04:17:26.419039 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2026-03-25 04:17:26.419064 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2026-03-25 04:17:26.419072 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-03-25 04:17:26.419087 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-03-25 04:17:26.419095 | orchestrator | ++ export PATH 2026-03-25 04:17:26.419101 | orchestrator | ++ '[' -n '' ']' 2026-03-25 04:17:26.419111 | orchestrator | ++ '[' -z '' ']' 2026-03-25 04:17:26.419117 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2026-03-25 04:17:26.419123 | orchestrator | ++ PS1='(venv) ' 2026-03-25 04:17:26.419129 | orchestrator | ++ export PS1 2026-03-25 04:17:26.419136 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2026-03-25 04:17:26.419143 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2026-03-25 04:17:26.419148 | orchestrator | ++ hash -r 2026-03-25 04:17:26.419156 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging 2026-03-25 04:17:27.036025 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3) 2026-03-25 04:17:27.038416 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.32.5) 2026-03-25 04:17:27.041080 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6) 2026-03-25 04:17:27.046715 | orchestrator | Requirement already satisfied: PyYAML in /opt/venv/lib/python3.12/site-packages (6.0.3) 2026-03-25 04:17:27.046802 | orchestrator | Requirement already satisfied: packaging in 
/opt/venv/lib/python3.12/site-packages (26.0) 2026-03-25 04:17:27.063476 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.3.1) 2026-03-25 04:17:27.066365 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6) 2026-03-25 04:17:27.068691 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.20) 2026-03-25 04:17:27.071423 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2) 2026-03-25 04:17:27.128280 | orchestrator | Requirement already satisfied: charset_normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.6) 2026-03-25 04:17:27.130810 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.11) 2026-03-25 04:17:27.133208 | orchestrator | Requirement already satisfied: urllib3<3,>=1.21.1 in /opt/venv/lib/python3.12/site-packages (from requests) (2.6.3) 2026-03-25 04:17:27.134961 | orchestrator | Requirement already satisfied: certifi>=2017.4.17 in /opt/venv/lib/python3.12/site-packages (from requests) (2026.2.25) 2026-03-25 04:17:27.139832 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.3) 2026-03-25 04:17:27.428759 | orchestrator | ++ which gilt 2026-03-25 04:17:27.430341 | orchestrator | + GILT=/opt/venv/bin/gilt 2026-03-25 04:17:27.430379 | orchestrator | + /opt/venv/bin/gilt overlay 2026-03-25 04:17:27.618186 | orchestrator | osism.cfg-generics: 2026-03-25 04:17:27.681955 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/ 2026-03-25 04:17:27.682179 | orchestrator | - copied 
(v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/ 2026-03-25 04:17:27.682252 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/ 2026-03-25 04:17:27.682262 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/ 2026-03-25 04:17:28.356129 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/ 2026-03-25 04:17:28.365912 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/ 2026-03-25 04:17:28.757622 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/ 2026-03-25 04:17:28.829378 | orchestrator | ~ 2026-03-25 04:17:28.829446 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-03-25 04:17:28.829454 | orchestrator | + deactivate 2026-03-25 04:17:28.829476 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2026-03-25 04:17:28.829483 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-03-25 04:17:28.829487 | orchestrator | + export PATH 2026-03-25 04:17:28.829492 | orchestrator | + unset _OLD_VIRTUAL_PATH 2026-03-25 04:17:28.829497 | orchestrator | + '[' -n '' ']' 2026-03-25 04:17:28.829501 | orchestrator | + hash -r 2026-03-25 04:17:28.829505 | orchestrator | + '[' -n '' ']' 2026-03-25 04:17:28.829510 | orchestrator | + unset VIRTUAL_ENV 2026-03-25 04:17:28.829515 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2026-03-25 04:17:28.829519 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2026-03-25 04:17:28.829523 | orchestrator | + unset -f deactivate 2026-03-25 04:17:28.829528 | orchestrator | + popd 2026-03-25 04:17:28.830790 | orchestrator | ++ semver v0.20251130.0 6.0.0 2026-03-25 04:17:28.884928 | orchestrator | + [[ -1 -ge 0 ]] 2026-03-25 04:17:28.886045 | orchestrator | ++ semver 10.0.0-rc.1 10.0.0-0 2026-03-25 04:17:28.985076 | orchestrator | + [[ 1 -ge 0 ]] 2026-03-25 04:17:28.985155 | orchestrator | + sed -i '/^om_enable_rabbitmq_high_availability:/d' /opt/configuration/environments/kolla/configuration.yml 2026-03-25 04:17:28.988089 | orchestrator | + sed -i '/^om_enable_rabbitmq_quorum_queues:/d' /opt/configuration/environments/kolla/configuration.yml 2026-03-25 04:17:28.995381 | orchestrator | +++ semver v0.20251130.0 9.5.0 2026-03-25 04:17:29.075577 | orchestrator | ++ '[' -1 -le 0 ']' 2026-03-25 04:17:29.075749 | orchestrator | +++ semver 10.0.0-rc.1 10.0.0-0 2026-03-25 04:17:29.187206 | orchestrator | ++ '[' 1 -ge 0 ']' 2026-03-25 04:17:29.187297 | orchestrator | ++ echo true 2026-03-25 04:17:29.187303 | orchestrator | + MANAGER_UPGRADE_CROSSES_10=true 2026-03-25 04:17:29.188109 | orchestrator | +++ semver 2024.2 2024.2 2026-03-25 04:17:29.239056 | orchestrator | ++ '[' 0 -le 0 ']' 2026-03-25 04:17:29.239177 | orchestrator | +++ semver 2024.2 2025.1 2026-03-25 04:17:29.276456 | orchestrator | ++ '[' -1 -ge 0 ']' 2026-03-25 04:17:29.276533 | orchestrator | ++ echo false 2026-03-25 04:17:29.276606 | orchestrator | + OPENSTACK_UPGRADE_CROSSES_2025=false 2026-03-25 04:17:29.276614 | orchestrator | + [[ true == \t\r\u\e ]] 2026-03-25 04:17:29.276619 | orchestrator | + echo 'om_rpc_vhost: openstack' 2026-03-25 04:17:29.276625 | orchestrator | + echo 'om_notify_vhost: openstack' 2026-03-25 04:17:29.276695 | orchestrator | + sed -i 's#manager_listener_broker_vhost: .*#manager_listener_broker_vhost: /openstack#g' /opt/configuration/environments/manager/configuration.yml 2026-03-25 04:17:29.281669 | orchestrator | + 
echo 'export RABBITMQ3TO4=true' 2026-03-25 04:17:29.281748 | orchestrator | + sudo tee -a /opt/manager-vars.sh 2026-03-25 04:17:29.297428 | orchestrator | export RABBITMQ3TO4=true 2026-03-25 04:17:29.300340 | orchestrator | + osism update manager 2026-03-25 04:17:36.490371 | orchestrator | Collecting uv 2026-03-25 04:17:36.598100 | orchestrator | Downloading uv-0.11.1-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (11 kB) 2026-03-25 04:17:36.617419 | orchestrator | Downloading uv-0.11.1-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (24.5 MB) 2026-03-25 04:17:37.478516 | orchestrator | ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 24.5/24.5 MB 36.0 MB/s eta 0:00:00 2026-03-25 04:17:37.545602 | orchestrator | Installing collected packages: uv 2026-03-25 04:17:38.057807 | orchestrator | Successfully installed uv-0.11.1 2026-03-25 04:17:38.850899 | orchestrator | Resolved 11 packages in 385ms 2026-03-25 04:17:38.884938 | orchestrator | Downloading cryptography (4.3MiB) 2026-03-25 04:17:38.885083 | orchestrator | Downloading netaddr (2.2MiB) 2026-03-25 04:17:38.885093 | orchestrator | Downloading ansible-core (2.1MiB) 2026-03-25 04:17:38.985080 | orchestrator | Downloading ansible (54.5MiB) 2026-03-25 04:17:39.213188 | orchestrator | Downloaded netaddr 2026-03-25 04:17:39.299433 | orchestrator | Downloaded cryptography 2026-03-25 04:17:39.353652 | orchestrator | Downloaded ansible-core 2026-03-25 04:17:47.299872 | orchestrator | Downloaded ansible 2026-03-25 04:17:47.299965 | orchestrator | Prepared 11 packages in 8.44s 2026-03-25 04:17:47.862847 | orchestrator | Installed 11 packages in 562ms 2026-03-25 04:17:47.862937 | orchestrator | + ansible==11.11.0 2026-03-25 04:17:47.862948 | orchestrator | + ansible-core==2.18.15 2026-03-25 04:17:47.862956 | orchestrator | + cffi==2.0.0 2026-03-25 04:17:47.862965 | orchestrator | + cryptography==46.0.5 2026-03-25 04:17:47.862973 | orchestrator | + jinja2==3.1.6 2026-03-25 04:17:47.863042 | orchestrator | 
+ markupsafe==3.0.3 2026-03-25 04:17:47.863053 | orchestrator | + netaddr==1.3.0 2026-03-25 04:17:47.863061 | orchestrator | + packaging==26.0 2026-03-25 04:17:47.863069 | orchestrator | + pycparser==3.0 2026-03-25 04:17:47.863077 | orchestrator | + pyyaml==6.0.3 2026-03-25 04:17:47.863085 | orchestrator | + resolvelib==1.0.1 2026-03-25 04:17:49.158877 | orchestrator | Cloning into '/home/dragon/.ansible/tmp/ansible-local-201792qg6_numb/tmp4evz3qui/ansible-collection-servicesq_9qk2ro'... 2026-03-25 04:17:50.592213 | orchestrator | Your branch is up to date with 'origin/main'. 2026-03-25 04:17:50.592288 | orchestrator | Already on 'main' 2026-03-25 04:17:51.145815 | orchestrator | Starting galaxy collection install process 2026-03-25 04:17:51.145919 | orchestrator | Process install dependency map 2026-03-25 04:17:51.145939 | orchestrator | Starting collection install process 2026-03-25 04:17:51.145950 | orchestrator | Installing 'osism.services:999.0.0' to '/home/dragon/.ansible/collections/ansible_collections/osism/services' 2026-03-25 04:17:51.145961 | orchestrator | Created collection for osism.services:999.0.0 at /home/dragon/.ansible/collections/ansible_collections/osism/services 2026-03-25 04:17:51.145971 | orchestrator | osism.services:999.0.0 was installed successfully 2026-03-25 04:17:51.737742 | orchestrator | Cloning into '/home/dragon/.ansible/tmp/ansible-local-2018196ochlh15/tmp805tti47/ansible-playbooks-managerk0_f0fw2'... 2026-03-25 04:17:52.274303 | orchestrator | Your branch is up to date with 'origin/main'. 
2026-03-25 04:17:52.274387 | orchestrator | Already on 'main' 2026-03-25 04:17:52.576196 | orchestrator | Starting galaxy collection install process 2026-03-25 04:17:52.576305 | orchestrator | Process install dependency map 2026-03-25 04:17:52.576321 | orchestrator | Starting collection install process 2026-03-25 04:17:52.576334 | orchestrator | Installing 'osism.manager:999.0.0' to '/home/dragon/.ansible/collections/ansible_collections/osism/manager' 2026-03-25 04:17:52.576348 | orchestrator | Created collection for osism.manager:999.0.0 at /home/dragon/.ansible/collections/ansible_collections/osism/manager 2026-03-25 04:17:52.576361 | orchestrator | osism.manager:999.0.0 was installed successfully 2026-03-25 04:17:53.301427 | orchestrator | [WARNING]: Invalid characters were found in group names but not replaced, use 2026-03-25 04:17:53.301547 | orchestrator | -vvvv to see details 2026-03-25 04:17:53.790252 | orchestrator | 2026-03-25 04:17:53.790355 | orchestrator | PLAY [Apply role manager] ****************************************************** 2026-03-25 04:17:53.790368 | orchestrator | 2026-03-25 04:17:53.790375 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-25 04:17:57.860104 | orchestrator | ok: [testbed-manager] 2026-03-25 04:17:57.860210 | orchestrator | 2026-03-25 04:17:57.860221 | orchestrator | TASK [osism.services.manager : Include install tasks] ************************** 2026-03-25 04:17:57.923374 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager 2026-03-25 04:17:57.923463 | orchestrator | 2026-03-25 04:17:57.923493 | orchestrator | TASK [osism.services.manager : Install required packages] ********************** 2026-03-25 04:17:59.962246 | orchestrator | ok: [testbed-manager] 2026-03-25 04:17:59.962334 | orchestrator | 2026-03-25 04:17:59.962346 | orchestrator | TASK 
[osism.services.manager : Gather variables for each operating system] ***** 2026-03-25 04:18:00.019733 | orchestrator | ok: [testbed-manager] 2026-03-25 04:18:00.019822 | orchestrator | 2026-03-25 04:18:00.019833 | orchestrator | TASK [osism.services.manager : Include config tasks] *************************** 2026-03-25 04:18:00.094723 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager 2026-03-25 04:18:00.094795 | orchestrator | 2026-03-25 04:18:00.094801 | orchestrator | TASK [osism.services.manager : Create required directories] ******************** 2026-03-25 04:18:04.412939 | orchestrator | ok: [testbed-manager] => (item=/opt/ansible) 2026-03-25 04:18:04.413058 | orchestrator | ok: [testbed-manager] => (item=/opt/archive) 2026-03-25 04:18:04.413069 | orchestrator | ok: [testbed-manager] => (item=/opt/manager/configuration) 2026-03-25 04:18:04.413085 | orchestrator | ok: [testbed-manager] => (item=/opt/manager/data) 2026-03-25 04:18:04.413090 | orchestrator | ok: [testbed-manager] => (item=/opt/manager) 2026-03-25 04:18:04.413095 | orchestrator | ok: [testbed-manager] => (item=/opt/manager/secrets) 2026-03-25 04:18:04.413100 | orchestrator | ok: [testbed-manager] => (item=/opt/ansible/secrets) 2026-03-25 04:18:04.413105 | orchestrator | ok: [testbed-manager] => (item=/opt/state) 2026-03-25 04:18:04.413110 | orchestrator | 2026-03-25 04:18:04.413115 | orchestrator | TASK [osism.services.manager : Copy all environment file] ********************** 2026-03-25 04:18:05.545884 | orchestrator | ok: [testbed-manager] 2026-03-25 04:18:05.546001 | orchestrator | 2026-03-25 04:18:05.546062 | orchestrator | TASK [osism.services.manager : Copy client environment file] ******************* 2026-03-25 04:18:06.524156 | orchestrator | ok: [testbed-manager] 2026-03-25 04:18:06.524264 | orchestrator | 2026-03-25 04:18:06.524281 | orchestrator | TASK [osism.services.manager : Include ara 
config tasks] *********************** 2026-03-25 04:18:06.608073 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager 2026-03-25 04:18:06.608153 | orchestrator | 2026-03-25 04:18:06.608161 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] ********************* 2026-03-25 04:18:08.521937 | orchestrator | ok: [testbed-manager] => (item=ara) 2026-03-25 04:18:08.522169 | orchestrator | ok: [testbed-manager] => (item=ara-server) 2026-03-25 04:18:08.522187 | orchestrator | 2026-03-25 04:18:08.522199 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ****************** 2026-03-25 04:18:09.492855 | orchestrator | ok: [testbed-manager] 2026-03-25 04:18:09.492940 | orchestrator | 2026-03-25 04:18:09.492947 | orchestrator | TASK [osism.services.manager : Include vault config tasks] ********************* 2026-03-25 04:18:09.549249 | orchestrator | skipping: [testbed-manager] 2026-03-25 04:18:09.549354 | orchestrator | 2026-03-25 04:18:09.549366 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ****************** 2026-03-25 04:18:09.630847 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager 2026-03-25 04:18:09.630917 | orchestrator | 2026-03-25 04:18:09.630924 | orchestrator | TASK [osism.services.manager : Copy frontend environment file] ***************** 2026-03-25 04:18:10.624856 | orchestrator | ok: [testbed-manager] 2026-03-25 04:18:10.624943 | orchestrator | 2026-03-25 04:18:10.624953 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] ******************* 2026-03-25 04:18:10.692790 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager 2026-03-25 04:18:10.692879 | 
orchestrator | 2026-03-25 04:18:10.692891 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] ************************** 2026-03-25 04:18:12.616160 | orchestrator | ok: [testbed-manager] => (item=None) 2026-03-25 04:18:12.616254 | orchestrator | ok: [testbed-manager] => (item=None) 2026-03-25 04:18:12.616265 | orchestrator | ok: [testbed-manager] 2026-03-25 04:18:12.616273 | orchestrator | 2026-03-25 04:18:12.616281 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ****************** 2026-03-25 04:18:13.582269 | orchestrator | ok: [testbed-manager] 2026-03-25 04:18:13.582360 | orchestrator | 2026-03-25 04:18:13.582369 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ******************** 2026-03-25 04:18:13.639255 | orchestrator | skipping: [testbed-manager] 2026-03-25 04:18:13.639345 | orchestrator | 2026-03-25 04:18:13.639354 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ******************** 2026-03-25 04:18:13.728607 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager 2026-03-25 04:18:13.728677 | orchestrator | 2026-03-25 04:18:13.728684 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] **************** 2026-03-25 04:18:14.479990 | orchestrator | ok: [testbed-manager] 2026-03-25 04:18:14.480064 | orchestrator | 2026-03-25 04:18:14.480072 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] ************** 2026-03-25 04:18:15.017940 | orchestrator | ok: [testbed-manager] 2026-03-25 04:18:15.018151 | orchestrator | 2026-03-25 04:18:15.018166 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ****************** 2026-03-25 04:18:16.949066 | orchestrator | ok: [testbed-manager] => (item=conductor) 2026-03-25 04:18:16.949182 | orchestrator | ok: [testbed-manager] => 
(item=openstack) 2026-03-25 04:18:16.949206 | orchestrator | 2026-03-25 04:18:16.949223 | orchestrator | TASK [osism.services.manager : Copy listener environment file] ***************** 2026-03-25 04:18:18.089354 | orchestrator | changed: [testbed-manager] 2026-03-25 04:18:18.089428 | orchestrator | 2026-03-25 04:18:18.089435 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************ 2026-03-25 04:18:18.662774 | orchestrator | ok: [testbed-manager] 2026-03-25 04:18:18.662865 | orchestrator | 2026-03-25 04:18:18.662875 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] ************** 2026-03-25 04:18:19.240383 | orchestrator | ok: [testbed-manager] 2026-03-25 04:18:19.240454 | orchestrator | 2026-03-25 04:18:19.240478 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ******** 2026-03-25 04:18:19.289854 | orchestrator | skipping: [testbed-manager] 2026-03-25 04:18:19.289925 | orchestrator | 2026-03-25 04:18:19.289932 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] ******************* 2026-03-25 04:18:19.373040 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager 2026-03-25 04:18:19.373111 | orchestrator | 2026-03-25 04:18:19.373118 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] ********************** 2026-03-25 04:18:19.429015 | orchestrator | ok: [testbed-manager] 2026-03-25 04:18:19.429084 | orchestrator | 2026-03-25 04:18:19.429091 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] *************************** 2026-03-25 04:18:22.379759 | orchestrator | ok: [testbed-manager] => (item=osism) 2026-03-25 04:18:22.379837 | orchestrator | ok: [testbed-manager] => (item=osism-update-docker) 2026-03-25 04:18:22.379845 | orchestrator | ok: [testbed-manager] => (item=osism-update-manager) 
2026-03-25 04:18:22.379850 | orchestrator |
2026-03-25 04:18:22.379855 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] *********************
2026-03-25 04:18:23.392845 | orchestrator | ok: [testbed-manager]
2026-03-25 04:18:23.393039 | orchestrator |
2026-03-25 04:18:23.393071 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] *********************
2026-03-25 04:18:24.471758 | orchestrator | ok: [testbed-manager]
2026-03-25 04:18:24.471845 | orchestrator |
2026-03-25 04:18:24.471855 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] ***********************
2026-03-25 04:18:25.495057 | orchestrator | ok: [testbed-manager]
2026-03-25 04:18:25.495137 | orchestrator |
2026-03-25 04:18:25.495145 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] *******************
2026-03-25 04:18:25.570167 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager
2026-03-25 04:18:25.570236 | orchestrator |
2026-03-25 04:18:25.570243 | orchestrator | TASK [osism.services.manager : Include scripts vars file] **********************
2026-03-25 04:18:25.643429 | orchestrator | ok: [testbed-manager]
2026-03-25 04:18:25.643546 | orchestrator |
2026-03-25 04:18:25.643563 | orchestrator | TASK [osism.services.manager : Copy scripts] ***********************************
2026-03-25 04:18:26.681176 | orchestrator | ok: [testbed-manager] => (item=osism-include)
2026-03-25 04:18:26.681279 | orchestrator |
2026-03-25 04:18:26.681293 | orchestrator | TASK [osism.services.manager : Include service tasks] **************************
2026-03-25 04:18:26.783332 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager
2026-03-25 04:18:26.783435 | orchestrator |
2026-03-25 04:18:26.783452 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] *****************
2026-03-25 04:18:27.873830 | orchestrator | ok: [testbed-manager]
2026-03-25 04:18:27.873919 | orchestrator |
2026-03-25 04:18:27.873929 | orchestrator | TASK [osism.services.manager : Create traefik external network] ****************
2026-03-25 04:18:29.121875 | orchestrator | ok: [testbed-manager]
2026-03-25 04:18:29.122117 | orchestrator |
2026-03-25 04:18:29.122139 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] ***
2026-03-25 04:18:29.195099 | orchestrator | skipping: [testbed-manager]
2026-03-25 04:18:29.195170 | orchestrator |
2026-03-25 04:18:29.195177 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] ***
2026-03-25 04:18:29.279056 | orchestrator | ok: [testbed-manager]
2026-03-25 04:18:29.279146 | orchestrator |
2026-03-25 04:18:29.279157 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] *******************
2026-03-25 04:18:30.729834 | orchestrator | changed: [testbed-manager]
2026-03-25 04:18:30.729905 | orchestrator |
2026-03-25 04:18:30.729911 | orchestrator | TASK [osism.services.manager : Pull container images] **************************
2026-03-25 04:19:36.903530 | orchestrator | changed: [testbed-manager]
2026-03-25 04:19:36.903639 | orchestrator |
2026-03-25 04:19:36.903654 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] ***
2026-03-25 04:19:38.253206 | orchestrator | ok: [testbed-manager]
2026-03-25 04:19:38.253285 | orchestrator |
2026-03-25 04:19:38.253292 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] *******
2026-03-25 04:19:38.303596 | orchestrator | skipping: [testbed-manager]
2026-03-25 04:19:38.303686 | orchestrator |
2026-03-25 04:19:38.303696 | orchestrator | TASK [osism.services.manager : Manage manager service] *************************
2026-03-25 04:19:39.259205 | orchestrator | ok: [testbed-manager]
2026-03-25 04:19:39.259295 | orchestrator |
2026-03-25 04:19:39.259306 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ******
2026-03-25 04:19:39.327470 | orchestrator | skipping: [testbed-manager]
2026-03-25 04:19:39.327565 | orchestrator |
2026-03-25 04:19:39.327578 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2026-03-25 04:19:39.327589 | orchestrator |
2026-03-25 04:19:39.327594 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] *************
2026-03-25 04:19:53.888109 | orchestrator | changed: [testbed-manager]
2026-03-25 04:19:53.888231 | orchestrator |
2026-03-25 04:19:53.888247 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] ***
2026-03-25 04:20:53.961307 | orchestrator | Pausing for 60 seconds
2026-03-25 04:20:53.961440 | orchestrator | changed: [testbed-manager]
2026-03-25 04:20:53.961456 | orchestrator |
2026-03-25 04:20:53.961469 | orchestrator | RUNNING HANDLER [osism.services.manager : Register that manager service was restarted] ***
2026-03-25 04:20:54.011776 | orchestrator | ok: [testbed-manager]
2026-03-25 04:20:54.011913 | orchestrator |
2026-03-25 04:20:54.011937 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] ***
2026-03-25 04:20:58.089585 | orchestrator | changed: [testbed-manager]
2026-03-25 04:20:58.089686 | orchestrator |
2026-03-25 04:20:58.089701 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] ***
2026-03-25 04:22:00.838074 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left).
2026-03-25 04:22:00.838199 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left).
2026-03-25 04:22:00.838215 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (48 retries left).
2026-03-25 04:22:00.838226 | orchestrator | changed: [testbed-manager]
2026-03-25 04:22:00.838236 | orchestrator |
2026-03-25 04:22:00.838245 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] ***
2026-03-25 04:22:14.670720 | orchestrator | changed: [testbed-manager]
2026-03-25 04:22:14.671516 | orchestrator |
2026-03-25 04:22:14.671550 | orchestrator | TASK [osism.services.manager : Include initialize tasks] ***********************
2026-03-25 04:22:14.763609 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager
2026-03-25 04:22:14.763746 | orchestrator |
2026-03-25 04:22:14.763762 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2026-03-25 04:22:14.763773 | orchestrator |
2026-03-25 04:22:14.763783 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] *****************
2026-03-25 04:22:14.834647 | orchestrator | skipping: [testbed-manager]
2026-03-25 04:22:14.834717 | orchestrator |
2026-03-25 04:22:14.834723 | orchestrator | TASK [osism.services.manager : Include version verification tasks] *************
2026-03-25 04:22:14.913121 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/verify-versions.yml for testbed-manager
2026-03-25 04:22:14.913222 | orchestrator |
2026-03-25 04:22:14.913236 | orchestrator | TASK [osism.services.manager : Deploy service manager version check script] ****
2026-03-25 04:22:16.038987 | orchestrator | changed: [testbed-manager]
2026-03-25 04:22:16.039090 | orchestrator |
2026-03-25 04:22:16.039106 | orchestrator | TASK [osism.services.manager : Execute service manager version check] **********
2026-03-25 04:22:19.488138 | orchestrator | ok: [testbed-manager]
2026-03-25 04:22:19.488218 | orchestrator |
2026-03-25 04:22:19.488228 | orchestrator | TASK [osism.services.manager : Display version check results] ******************
2026-03-25 04:22:19.582781 | orchestrator | ok: [testbed-manager] => {
2026-03-25 04:22:19.582910 | orchestrator |     "version_check_result.stdout_lines": [
2026-03-25 04:22:19.582925 | orchestrator |         "=== OSISM Container Version Check ===",
2026-03-25 04:22:19.582938 | orchestrator |         "Checking running containers against expected versions...",
2026-03-25 04:22:19.582952 | orchestrator |         "",
2026-03-25 04:22:19.582965 | orchestrator |         "Checking service: inventory_reconciler (Inventory Reconciler Service)",
2026-03-25 04:22:19.582977 | orchestrator |         "  Expected: registry.osism.tech/osism/inventory-reconciler:0.20251208.0",
2026-03-25 04:22:19.582990 | orchestrator |         "  Enabled: true",
2026-03-25 04:22:19.583002 | orchestrator |         "  Running: registry.osism.tech/osism/inventory-reconciler:0.20251208.0",
2026-03-25 04:22:19.583014 | orchestrator |         "  Status: ✅ MATCH",
2026-03-25 04:22:19.583027 | orchestrator |         "",
2026-03-25 04:22:19.583040 | orchestrator |         "Checking service: osism-ansible (OSISM Ansible Service)",
2026-03-25 04:22:19.583054 | orchestrator |         "  Expected: registry.osism.tech/osism/osism-ansible:0.20251208.0",
2026-03-25 04:22:19.583066 | orchestrator |         "  Enabled: true",
2026-03-25 04:22:19.583079 | orchestrator |         "  Running: registry.osism.tech/osism/osism-ansible:0.20251208.0",
2026-03-25 04:22:19.583093 | orchestrator |         "  Status: ✅ MATCH",
2026-03-25 04:22:19.583107 | orchestrator |         "",
2026-03-25 04:22:19.583120 | orchestrator |         "Checking service: osism-kubernetes (Osism-Kubernetes Service)",
2026-03-25 04:22:19.583128 | orchestrator |         "  Expected: registry.osism.tech/osism/osism-kubernetes:0.20251208.0",
2026-03-25 04:22:19.583135 | orchestrator |         "  Enabled: true",
2026-03-25 04:22:19.583143 | orchestrator |         "  Running: registry.osism.tech/osism/osism-kubernetes:0.20251208.0",
2026-03-25 04:22:19.583150 | orchestrator |         "  Status: ✅ MATCH",
2026-03-25 04:22:19.583157 | orchestrator |         "",
2026-03-25 04:22:19.583164 | orchestrator |         "Checking service: ceph-ansible (Ceph-Ansible Service)",
2026-03-25 04:22:19.583172 | orchestrator |         "  Expected: registry.osism.tech/osism/ceph-ansible:0.20251208.0",
2026-03-25 04:22:19.583179 | orchestrator |         "  Enabled: true",
2026-03-25 04:22:19.583186 | orchestrator |         "  Running: registry.osism.tech/osism/ceph-ansible:0.20251208.0",
2026-03-25 04:22:19.583194 | orchestrator |         "  Status: ✅ MATCH",
2026-03-25 04:22:19.583201 | orchestrator |         "",
2026-03-25 04:22:19.583208 | orchestrator |         "Checking service: kolla-ansible (Kolla-Ansible Service)",
2026-03-25 04:22:19.583215 | orchestrator |         "  Expected: registry.osism.tech/osism/kolla-ansible:0.20251208.0",
2026-03-25 04:22:19.583223 | orchestrator |         "  Enabled: true",
2026-03-25 04:22:19.583230 | orchestrator |         "  Running: registry.osism.tech/osism/kolla-ansible:0.20251208.0",
2026-03-25 04:22:19.583237 | orchestrator |         "  Status: ✅ MATCH",
2026-03-25 04:22:19.583244 | orchestrator |         "",
2026-03-25 04:22:19.583251 | orchestrator |         "Checking service: osismclient (OSISM Client)",
2026-03-25 04:22:19.583277 | orchestrator |         "  Expected: registry.osism.tech/osism/osism:0.20251208.0",
2026-03-25 04:22:19.583285 | orchestrator |         "  Enabled: true",
2026-03-25 04:22:19.583292 | orchestrator |         "  Running: registry.osism.tech/osism/osism:0.20251208.0",
2026-03-25 04:22:19.583299 | orchestrator |         "  Status: ✅ MATCH",
2026-03-25 04:22:19.583307 | orchestrator |         "",
2026-03-25 04:22:19.583315 | orchestrator |         "Checking service: ara-server (ARA Server)",
2026-03-25 04:22:19.583324 | orchestrator |         "  Expected: registry.osism.tech/osism/ara-server:1.7.3",
2026-03-25 04:22:19.583332 | orchestrator |         "  Enabled: true",
2026-03-25 04:22:19.583340 | orchestrator |         "  Running: registry.osism.tech/osism/ara-server:1.7.3",
2026-03-25 04:22:19.583349 | orchestrator |         "  Status: ✅ MATCH",
2026-03-25 04:22:19.583362 | orchestrator |         "",
2026-03-25 04:22:19.583373 | orchestrator |         "Checking service: mariadb (MariaDB for ARA)",
2026-03-25 04:22:19.583387 | orchestrator |         "  Expected: registry.osism.tech/dockerhub/library/mariadb:11.8.4",
2026-03-25 04:22:19.583401 | orchestrator |         "  Enabled: true",
2026-03-25 04:22:19.583424 | orchestrator |         "  Running: registry.osism.tech/dockerhub/library/mariadb:11.8.4",
2026-03-25 04:22:19.583433 | orchestrator |         "  Status: ✅ MATCH",
2026-03-25 04:22:19.583442 | orchestrator |         "",
2026-03-25 04:22:19.583450 | orchestrator |         "Checking service: frontend (OSISM Frontend)",
2026-03-25 04:22:19.583458 | orchestrator |         "  Expected: registry.osism.tech/osism/osism-frontend:0.20251208.0",
2026-03-25 04:22:19.583466 | orchestrator |         "  Enabled: true",
2026-03-25 04:22:19.583474 | orchestrator |         "  Running: registry.osism.tech/osism/osism-frontend:0.20251208.0",
2026-03-25 04:22:19.583482 | orchestrator |         "  Status: ✅ MATCH",
2026-03-25 04:22:19.583513 | orchestrator |         "",
2026-03-25 04:22:19.583526 | orchestrator |         "Checking service: redis (Redis Cache)",
2026-03-25 04:22:19.583534 | orchestrator |         "  Expected: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine",
2026-03-25 04:22:19.583543 | orchestrator |         "  Enabled: true",
2026-03-25 04:22:19.583551 | orchestrator |         "  Running: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine",
2026-03-25 04:22:19.583559 | orchestrator |         "  Status: ✅ MATCH",
2026-03-25 04:22:19.583567 | orchestrator |         "",
2026-03-25 04:22:19.583575 | orchestrator |         "Checking service: api (OSISM API Service)",
2026-03-25 04:22:19.583585 | orchestrator |         "  Expected: registry.osism.tech/osism/osism:0.20251208.0",
2026-03-25 04:22:19.583599 | orchestrator |         "  Enabled: true",
2026-03-25 04:22:19.583618 | orchestrator |         "  Running: registry.osism.tech/osism/osism:0.20251208.0",
2026-03-25 04:22:19.583634 | orchestrator |         "  Status: ✅ MATCH",
2026-03-25 04:22:19.583647 | orchestrator |         "",
2026-03-25 04:22:19.583660 | orchestrator |         "Checking service: listener (OpenStack Event Listener)",
2026-03-25 04:22:19.583672 | orchestrator |         "  Expected: registry.osism.tech/osism/osism:0.20251208.0",
2026-03-25 04:22:19.583683 | orchestrator |         "  Enabled: true",
2026-03-25 04:22:19.583695 | orchestrator |         "  Running: registry.osism.tech/osism/osism:0.20251208.0",
2026-03-25 04:22:19.583707 | orchestrator |         "  Status: ✅ MATCH",
2026-03-25 04:22:19.583719 | orchestrator |         "",
2026-03-25 04:22:19.583732 | orchestrator |         "Checking service: openstack (OpenStack Integration)",
2026-03-25 04:22:19.583743 | orchestrator |         "  Expected: registry.osism.tech/osism/osism:0.20251208.0",
2026-03-25 04:22:19.583755 | orchestrator |         "  Enabled: true",
2026-03-25 04:22:19.583765 | orchestrator |         "  Running: registry.osism.tech/osism/osism:0.20251208.0",
2026-03-25 04:22:19.583777 | orchestrator |         "  Status: ✅ MATCH",
2026-03-25 04:22:19.583789 | orchestrator |         "",
2026-03-25 04:22:19.583800 | orchestrator |         "Checking service: beat (Celery Beat Scheduler)",
2026-03-25 04:22:19.583869 | orchestrator |         "  Expected: registry.osism.tech/osism/osism:0.20251208.0",
2026-03-25 04:22:19.583882 | orchestrator |         "  Enabled: true",
2026-03-25 04:22:19.583892 | orchestrator |         "  Running: registry.osism.tech/osism/osism:0.20251208.0",
2026-03-25 04:22:19.583926 | orchestrator |         "  Status: ✅ MATCH",
2026-03-25 04:22:19.583939 | orchestrator |         "",
2026-03-25 04:22:19.583951 | orchestrator |         "Checking service: flower (Celery Flower Monitor)",
2026-03-25 04:22:19.583963 | orchestrator |         "  Expected: registry.osism.tech/osism/osism:0.20251208.0",
2026-03-25 04:22:19.583987 | orchestrator |         "  Enabled: true",
2026-03-25 04:22:19.583999 | orchestrator |         "  Running: registry.osism.tech/osism/osism:0.20251208.0",
2026-03-25 04:22:19.584009 | orchestrator |         "  Status: ✅ MATCH",
2026-03-25 04:22:19.584020 | orchestrator |         "",
2026-03-25 04:22:19.584033 | orchestrator |         "=== Summary ===",
2026-03-25 04:22:19.584059 | orchestrator |         "Errors (version mismatches): 0",
2026-03-25 04:22:19.584081 | orchestrator |         "Warnings (expected containers not running): 0",
2026-03-25 04:22:19.584093 | orchestrator |         "",
2026-03-25 04:22:19.584104 | orchestrator |         "✅ All running containers match expected versions!"
2026-03-25 04:22:19.584116 | orchestrator |     ]
2026-03-25 04:22:19.584127 | orchestrator | }
2026-03-25 04:22:19.584138 | orchestrator |
2026-03-25 04:22:19.584150 | orchestrator | TASK [osism.services.manager : Skip version check due to service configuration] ***
2026-03-25 04:22:19.646411 | orchestrator | skipping: [testbed-manager]
2026-03-25 04:22:19.646513 | orchestrator |
2026-03-25 04:22:19.646524 | orchestrator | PLAY RECAP *********************************************************************
2026-03-25 04:22:19.646534 | orchestrator | testbed-manager : ok=51 changed=9 unreachable=0 failed=0 skipped=8 rescued=0 ignored=0
2026-03-25 04:22:19.646542 | orchestrator |
2026-03-25 04:22:32.626236 | orchestrator | 2026-03-25 04:22:32 | INFO  | Task ec31fe74-4eb1-4be5-a8bd-f5bbd23602ad (sync inventory) is running in background. Output coming soon.
2026-03-25 04:23:07.484950 | orchestrator | 2026-03-25 04:22:34 | INFO  | Starting group_vars file reorganization
2026-03-25 04:23:07.485042 | orchestrator | 2026-03-25 04:22:34 | INFO  | Moved 0 file(s) to their respective directories
2026-03-25 04:23:07.485051 | orchestrator | 2026-03-25 04:22:34 | INFO  | Group_vars file reorganization completed
2026-03-25 04:23:07.485072 | orchestrator | 2026-03-25 04:22:37 | INFO  | Starting variable preparation from inventory
2026-03-25 04:23:07.485077 | orchestrator | 2026-03-25 04:22:41 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts
2026-03-25 04:23:07.485082 | orchestrator | 2026-03-25 04:22:41 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons
2026-03-25 04:23:07.485086 | orchestrator | 2026-03-25 04:22:41 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid
2026-03-25 04:23:07.485091 | orchestrator | 2026-03-25 04:22:41 | INFO  | 3 file(s) written, 6 host(s) processed
2026-03-25 04:23:07.485095 | orchestrator | 2026-03-25 04:22:41 | INFO  | Variable preparation completed
2026-03-25 04:23:07.485099 | orchestrator | 2026-03-25 04:22:43 | INFO  | Starting inventory overwrite handling
2026-03-25 04:23:07.485103 | orchestrator | 2026-03-25 04:22:43 | INFO  | Handling group overwrites in 99-overwrite
2026-03-25 04:23:07.485107 | orchestrator | 2026-03-25 04:22:43 | INFO  | Removing group frr:children from 60-generic
2026-03-25 04:23:07.485111 | orchestrator | 2026-03-25 04:22:43 | INFO  | Removing group netbird:children from 50-infrastructure
2026-03-25 04:23:07.485115 | orchestrator | 2026-03-25 04:22:43 | INFO  | Removing group ceph-mds from 50-ceph
2026-03-25 04:23:07.485119 | orchestrator | 2026-03-25 04:22:43 | INFO  | Removing group ceph-rgw from 50-ceph
2026-03-25 04:23:07.485123 | orchestrator | 2026-03-25 04:22:43 | INFO  | Handling group overwrites in 20-roles
2026-03-25 04:23:07.485126 | orchestrator | 2026-03-25 04:22:43 | INFO  | Removing group k3s_node from 50-infrastructure
2026-03-25 04:23:07.485130 | orchestrator | 2026-03-25 04:22:43 | INFO  | Removed 5 group(s) in total
2026-03-25 04:23:07.485134 | orchestrator | 2026-03-25 04:22:43 | INFO  | Inventory overwrite handling completed
2026-03-25 04:23:07.485138 | orchestrator | 2026-03-25 04:22:45 | INFO  | Starting merge of inventory files
2026-03-25 04:23:07.485144 | orchestrator | 2026-03-25 04:22:45 | INFO  | Inventory files merged successfully
2026-03-25 04:23:07.485171 | orchestrator | 2026-03-25 04:22:51 | INFO  | Generating ClusterShell configuration from Ansible inventory
2026-03-25 04:23:07.485178 | orchestrator | 2026-03-25 04:23:05 | INFO  | Successfully wrote ClusterShell configuration
2026-03-25 04:23:07.929180 | orchestrator | + [[ '' == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2026-03-25 04:23:07.929261 | orchestrator | + wait_for_container_healthy 60 kolla-ansible
2026-03-25 04:23:07.929271 | orchestrator | + local max_attempts=60
2026-03-25 04:23:07.929279 | orchestrator | + local name=kolla-ansible
2026-03-25 04:23:07.929286 | orchestrator | + local attempt_num=1
2026-03-25 04:23:07.929416 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible
2026-03-25 04:23:07.968131 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-03-25 04:23:07.968220 | orchestrator | + wait_for_container_healthy 60 osism-ansible
2026-03-25 04:23:07.968230 | orchestrator | + local max_attempts=60
2026-03-25 04:23:07.968235 | orchestrator | + local name=osism-ansible
2026-03-25 04:23:07.968239 | orchestrator | + local attempt_num=1
2026-03-25 04:23:07.968243 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible
2026-03-25 04:23:07.996873 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-03-25 04:23:07.996962 | orchestrator | + docker compose --project-directory /opt/manager ps
2026-03-25 04:23:08.231722 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS
2026-03-25 04:23:08.231833 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:0.20251208.0 "/entrypoint.sh osis…" ceph-ansible 3 minutes ago Up 2 minutes (healthy)
2026-03-25 04:23:08.231843 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:0.20251208.0 "/entrypoint.sh osis…" kolla-ansible 3 minutes ago Up 2 minutes (healthy)
2026-03-25 04:23:08.231850 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:0.20251208.0 "/sbin/tini -- osism…" api 3 minutes ago Up 3 minutes (healthy) 192.168.16.5:8000->8000/tcp
2026-03-25 04:23:08.231860 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server 2 hours ago Up 2 minutes (healthy) 8000/tcp
2026-03-25 04:23:08.231866 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:0.20251208.0 "/sbin/tini -- osism…" beat 3 minutes ago Up 3 minutes (healthy)
2026-03-25 04:23:08.231871 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:0.20251208.0 "/sbin/tini -- osism…" flower 3 minutes ago Up 3 minutes (healthy)
2026-03-25 04:23:08.231876 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:0.20251208.0 "/sbin/tini -- /entr…" inventory_reconciler 3 minutes ago Up 2 minutes (healthy)
2026-03-25 04:23:08.231882 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:0.20251208.0 "/sbin/tini -- osism…" listener 3 minutes ago Restarting (0) 5 seconds ago
2026-03-25 04:23:08.231887 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" mariadb 2 hours ago Up 3 minutes (healthy) 3306/tcp
2026-03-25 04:23:08.231892 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:0.20251208.0 "/sbin/tini -- osism…" openstack 3 minutes ago Up 3 minutes (healthy)
2026-03-25 04:23:08.231897 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" redis 2 hours ago Up 3 minutes (healthy) 6379/tcp
2026-03-25 04:23:08.231902 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:0.20251208.0 "/entrypoint.sh osis…" osism-ansible 3 minutes ago Up 2 minutes (healthy)
2026-03-25 04:23:08.231925 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:0.20251208.0 "docker-entrypoint.s…" frontend 3 minutes ago Up 3 minutes 192.168.16.5:3000->3000/tcp
2026-03-25 04:23:08.231931 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:0.20251208.0 "/entrypoint.sh osis…" osism-kubernetes 3 minutes ago Up 2 minutes (healthy)
2026-03-25 04:23:08.231936 | orchestrator | osismclient registry.osism.tech/osism/osism:0.20251208.0 "/sbin/tini -- sleep…" osismclient 3 minutes ago Up 3 minutes (healthy)
2026-03-25 04:23:08.236426 | orchestrator | + [[ '' == \t\r\u\e ]]
2026-03-25 04:23:08.236499 | orchestrator | + [[ '' == \f\a\l\s\e ]]
2026-03-25 04:23:08.236509 | orchestrator | + osism apply facts
2026-03-25 04:23:21.038194 | orchestrator | 2026-03-25 04:23:21 | INFO  | Task 16696da6-a6b8-43a6-ac83-cf3d909d4850 (facts) was prepared for execution.
2026-03-25 04:23:21.038313 | orchestrator | 2026-03-25 04:23:21 | INFO  | It takes a moment until task 16696da6-a6b8-43a6-ac83-cf3d909d4850 (facts) has been started and output is visible here.
2026-03-25 04:23:42.459284 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin
2026-03-25 04:23:42.459405 | orchestrator | (): Expecting value: line 2 column 1 (char 1)
2026-03-25 04:23:42.459426 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin
2026-03-25 04:23:42.459433 | orchestrator | (): 'NoneType' object is not subscriptable
2026-03-25 04:23:42.459447 | orchestrator |
2026-03-25 04:23:42.459483 | orchestrator | PLAY [Apply role facts] ********************************************************
2026-03-25 04:23:42.459492 | orchestrator |
2026-03-25 04:23:42.459499 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-03-25 04:23:42.459507 | orchestrator | Wednesday 25 March 2026 04:23:28 +0000 (0:00:02.045) 0:00:02.045 *******
2026-03-25 04:23:42.459515 | orchestrator | ok: [testbed-manager]
2026-03-25 04:23:42.459523 | orchestrator | ok: [testbed-node-0]
2026-03-25 04:23:42.459530 | orchestrator | ok: [testbed-node-1]
2026-03-25 04:23:42.459537 | orchestrator | ok: [testbed-node-2]
2026-03-25 04:23:42.459543 | orchestrator | ok: [testbed-node-3]
2026-03-25 04:23:42.459549 | orchestrator | ok: [testbed-node-4]
2026-03-25 04:23:42.459555 | orchestrator | ok: [testbed-node-5]
2026-03-25 04:23:42.459562 | orchestrator |
2026-03-25 04:23:42.459569 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-03-25 04:23:42.459575 | orchestrator | Wednesday 25 March 2026 04:23:31 +0000 (0:00:02.792) 0:00:04.838 *******
2026-03-25 04:23:42.459582 | orchestrator | skipping: [testbed-manager]
2026-03-25 04:23:42.459589 | orchestrator | skipping: [testbed-node-0]
2026-03-25 04:23:42.459614 | orchestrator | skipping: [testbed-node-1]
2026-03-25 04:23:42.459621 | orchestrator | skipping: [testbed-node-2]
2026-03-25 04:23:42.459626 | orchestrator | skipping: [testbed-node-3]
2026-03-25 04:23:42.459674 | orchestrator | skipping: [testbed-node-4]
2026-03-25 04:23:42.459691 | orchestrator | skipping: [testbed-node-5]
2026-03-25 04:23:42.459698 | orchestrator |
2026-03-25 04:23:42.459705 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-03-25 04:23:42.459712 | orchestrator |
2026-03-25 04:23:42.459719 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-03-25 04:23:42.459726 | orchestrator | Wednesday 25 March 2026 04:23:33 +0000 (0:00:02.113) 0:00:06.951 *******
2026-03-25 04:23:42.459732 | orchestrator | ok: [testbed-node-2]
2026-03-25 04:23:42.459739 | orchestrator | ok: [testbed-node-1]
2026-03-25 04:23:42.459745 | orchestrator | ok: [testbed-node-0]
2026-03-25 04:23:42.459770 | orchestrator | ok: [testbed-manager]
2026-03-25 04:23:42.459799 | orchestrator | ok: [testbed-node-3]
2026-03-25 04:23:42.459806 | orchestrator | ok: [testbed-node-4]
2026-03-25 04:23:42.459812 | orchestrator | ok: [testbed-node-5]
2026-03-25 04:23:42.459818 | orchestrator |
2026-03-25 04:23:42.459825 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2026-03-25 04:23:42.459831 | orchestrator |
2026-03-25 04:23:42.459838 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2026-03-25 04:23:42.459845 | orchestrator | Wednesday 25 March 2026 04:23:39 +0000 (0:00:06.301) 0:00:13.252 *******
2026-03-25 04:23:42.459852 | orchestrator | skipping: [testbed-manager]
2026-03-25 04:23:42.459859 | orchestrator | skipping: [testbed-node-0]
2026-03-25 04:23:42.459865 | orchestrator | skipping: [testbed-node-1]
2026-03-25 04:23:42.459872 | orchestrator | skipping: [testbed-node-2]
2026-03-25 04:23:42.459888 | orchestrator | skipping: [testbed-node-3]
2026-03-25 04:23:42.459895 | orchestrator | skipping: [testbed-node-4]
2026-03-25 04:23:42.459908 | orchestrator | skipping: [testbed-node-5]
2026-03-25 04:23:42.459914 | orchestrator |
2026-03-25 04:23:42.459921 | orchestrator | PLAY RECAP *********************************************************************
2026-03-25 04:23:42.459928 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-25 04:23:42.459937 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-25 04:23:42.459944 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-25 04:23:42.459951 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-25 04:23:42.459958 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-25 04:23:42.459964 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-25 04:23:42.459970 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-25 04:23:42.459978 | orchestrator |
2026-03-25 04:23:42.459985 | orchestrator |
2026-03-25 04:23:42.459992 | orchestrator | TASKS RECAP ********************************************************************
2026-03-25 04:23:42.459999 | orchestrator | Wednesday 25 March 2026 04:23:41 +0000 (0:00:01.950) 0:00:15.203 *******
2026-03-25 04:23:42.460006 | orchestrator | ===============================================================================
2026-03-25 04:23:42.460013 | orchestrator | Gathers facts about hosts ----------------------------------------------- 6.30s
2026-03-25 04:23:42.460020 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 2.79s
2026-03-25 04:23:42.460026 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 2.11s
2026-03-25 04:23:42.460032 | orchestrator | Gather facts for all hosts ---------------------------------------------- 1.95s
2026-03-25 04:23:42.928209 | orchestrator | ++ semver 10.0.0-rc.1 10.0.0-0
2026-03-25 04:23:43.004411 | orchestrator | + [[ 1 -ge 0 ]]
2026-03-25 04:23:43.004545 | orchestrator | ++ docker inspect --format '{{ index .Config.Labels "de.osism.release.openstack"}}' kolla-ansible
2026-03-25 04:23:43.038055 | orchestrator | + OPENSTACK_VERSION=2025.1
2026-03-25 04:23:43.038129 | orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh kolla/release/2025.1
2026-03-25 04:23:43.043190 | orchestrator | + set -e
2026-03-25 04:23:43.043270 | orchestrator | + NAMESPACE=kolla/release/2025.1
2026-03-25 04:23:43.043280 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla/release/2025.1#g' /opt/configuration/inventory/group_vars/all/kolla.yml
2026-03-25 04:23:43.051862 | orchestrator | + sh -c /opt/configuration/scripts/upgrade-services.sh
2026-03-25 04:23:43.058828 | orchestrator |
2026-03-25 04:23:43.058897 | orchestrator | # UPGRADE SERVICES
2026-03-25 04:23:43.058932 | orchestrator |
2026-03-25 04:23:43.058939 | orchestrator | + set -e
2026-03-25 04:23:43.058945 | orchestrator | + echo
2026-03-25 04:23:43.058952 | orchestrator | + echo '# UPGRADE SERVICES'
2026-03-25 04:23:43.058958 | orchestrator | + echo
2026-03-25 04:23:43.058964 | orchestrator | + source /opt/manager-vars.sh
2026-03-25 04:23:43.060155 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-03-25 04:23:43.060206 | orchestrator | ++ NUMBER_OF_NODES=6
2026-03-25 04:23:43.060211 | orchestrator | ++ export CEPH_VERSION=reef
2026-03-25 04:23:43.060215 | orchestrator | ++ CEPH_VERSION=reef
2026-03-25 04:23:43.060219 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-03-25 04:23:43.060225 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-03-25 04:23:43.060229 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-03-25 04:23:43.060233 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-03-25 04:23:43.060237 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-03-25 04:23:43.060241 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-03-25 04:23:43.060245 | orchestrator | ++ export ARA=false
2026-03-25 04:23:43.060249 | orchestrator | ++ ARA=false
2026-03-25 04:23:43.060253 | orchestrator | ++ export DEPLOY_MODE=manager
2026-03-25 04:23:43.060257 | orchestrator | ++ DEPLOY_MODE=manager
2026-03-25 04:23:43.060261 | orchestrator | ++ export TEMPEST=false
2026-03-25 04:23:43.060265 | orchestrator | ++ TEMPEST=false
2026-03-25 04:23:43.060269 | orchestrator | ++ export IS_ZUUL=true
2026-03-25 04:23:43.060272 | orchestrator | ++ IS_ZUUL=true
2026-03-25 04:23:43.060276 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.44
2026-03-25 04:23:43.060280 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.44
2026-03-25 04:23:43.060283 | orchestrator | ++ export EXTERNAL_API=false
2026-03-25 04:23:43.060287 | orchestrator | ++ EXTERNAL_API=false
2026-03-25 04:23:43.060290 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-03-25 04:23:43.060294 | orchestrator | ++ IMAGE_USER=ubuntu
2026-03-25 04:23:43.060298 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-03-25 04:23:43.060301 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-03-25 04:23:43.060305 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-03-25 04:23:43.060309 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-03-25 04:23:43.060313 | orchestrator | ++ export RABBITMQ3TO4=true
2026-03-25 04:23:43.060317 | orchestrator | ++ RABBITMQ3TO4=true
2026-03-25 04:23:43.060334 | orchestrator | + SKIP_OPENSTACK_UPGRADE=false
2026-03-25 04:23:43.060338 | orchestrator | + SKIP_CEPH_UPGRADE=false
2026-03-25 04:23:43.060341 | orchestrator | + sh -c /opt/configuration/scripts/pull-images.sh
2026-03-25 04:23:43.067505 | orchestrator | + set -e
2026-03-25 04:23:43.067564 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-03-25 04:23:43.068236 | orchestrator | ++ export INTERACTIVE=false
2026-03-25 04:23:43.068247 | orchestrator | ++ INTERACTIVE=false
2026-03-25 04:23:43.068252 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-03-25 04:23:43.068257 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-03-25 04:23:43.068261 | orchestrator | + source /opt/manager-vars.sh
2026-03-25 04:23:43.068266 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-03-25 04:23:43.068271 | orchestrator | ++ NUMBER_OF_NODES=6
2026-03-25 04:23:43.068275 | orchestrator | ++ export CEPH_VERSION=reef
2026-03-25 04:23:43.068280 | orchestrator | ++ CEPH_VERSION=reef
2026-03-25 04:23:43.068285 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-03-25 04:23:43.068289 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-03-25 04:23:43.068294 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-03-25 04:23:43.068298 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-03-25 04:23:43.068303 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-03-25 04:23:43.068318 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-03-25 04:23:43.068323 | orchestrator | ++ export ARA=false
2026-03-25 04:23:43.068327 | orchestrator | ++ ARA=false
2026-03-25 04:23:43.068331 | orchestrator | ++ export DEPLOY_MODE=manager
2026-03-25 04:23:43.068335 | orchestrator | ++ DEPLOY_MODE=manager
2026-03-25 04:23:43.068340 | orchestrator | ++ export TEMPEST=false
2026-03-25 04:23:43.068344 | orchestrator | ++ TEMPEST=false
2026-03-25 04:23:43.068348 | orchestrator | ++ export IS_ZUUL=true
2026-03-25 04:23:43.068353 | orchestrator | ++ IS_ZUUL=true
2026-03-25 04:23:43.068392 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.44
2026-03-25 04:23:43.068398 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.44
2026-03-25 04:23:43.068402 | orchestrator | ++ export EXTERNAL_API=false
2026-03-25 04:23:43.068407 | orchestrator | ++ EXTERNAL_API=false
2026-03-25 04:23:43.068566 | orchestrator |
2026-03-25 04:23:43.068574 | orchestrator | # PULL IMAGES
2026-03-25 04:23:43.068578 | orchestrator |
2026-03-25 04:23:43.068581 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-03-25 04:23:43.068585 | orchestrator | ++ IMAGE_USER=ubuntu
2026-03-25 04:23:43.068589 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-03-25 04:23:43.068593 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-03-25 04:23:43.068615 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-03-25 04:23:43.068619 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-03-25 04:23:43.068623 | orchestrator | ++ export RABBITMQ3TO4=true
2026-03-25 04:23:43.068626 | orchestrator | ++ RABBITMQ3TO4=true
2026-03-25 04:23:43.068630 | orchestrator | + echo
2026-03-25 04:23:43.068634 | orchestrator | + echo '# PULL IMAGES'
2026-03-25 04:23:43.068638 | orchestrator | + echo
2026-03-25 04:23:43.069513 | orchestrator | ++ semver 9.5.0 7.0.0
2026-03-25 04:23:43.114160 | orchestrator | + [[ 1 -ge 0 ]]
2026-03-25 04:23:43.114233 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images
2026-03-25 04:23:45.952851 | orchestrator | 2026-03-25 04:23:45 | INFO  | Trying to run play pull-images in environment custom
2026-03-25 04:23:56.111027 | orchestrator | 2026-03-25 04:23:56 | INFO  | Task fe0ba42c-b021-4913-96ca-5738d7b326a7 (pull-images) was prepared for execution.
2026-03-25 04:23:56.111126 | orchestrator | 2026-03-25 04:23:56 | INFO  | Task fe0ba42c-b021-4913-96ca-5738d7b326a7 is running in background. No more output. Check ARA for logs.
2026-03-25 04:23:56.578685 | orchestrator | + sh -c /opt/configuration/scripts/upgrade/500-kubernetes.sh
2026-03-25 04:23:56.587037 | orchestrator | + set -e
2026-03-25 04:23:56.587122 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-03-25 04:23:56.587131 | orchestrator | ++ export INTERACTIVE=false
2026-03-25 04:23:56.587136 | orchestrator | ++ INTERACTIVE=false
2026-03-25 04:23:56.587140 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-03-25 04:23:56.587145 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-03-25 04:23:56.587149 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2026-03-25 04:23:56.588962 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2026-03-25 04:23:56.598610 | orchestrator | ++ export MANAGER_VERSION=10.0.0-rc.1
2026-03-25 04:23:56.598687 | orchestrator | ++ MANAGER_VERSION=10.0.0-rc.1
2026-03-25 04:23:56.599049 | orchestrator | ++ semver 10.0.0-rc.1 8.0.3
2026-03-25 04:23:56.650202 | orchestrator | + [[ 1 -ge 0 ]]
2026-03-25 04:23:56.650287 | orchestrator | + osism apply frr
2026-03-25 04:24:09.174152 | orchestrator | 2026-03-25 04:24:09 | INFO  | Task ffc6d695-0c6a-4f92-bc5d-44f59d407830 (frr) was prepared for execution.
2026-03-25 04:24:09.174225 | orchestrator | 2026-03-25 04:24:09 | INFO  | It takes a moment until task ffc6d695-0c6a-4f92-bc5d-44f59d407830 (frr) has been started and output is visible here.
2026-03-25 04:24:32.722298 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin
2026-03-25 04:24:32.722404 | orchestrator | (): Expecting value: line 2 column 1 (char 1)
2026-03-25 04:24:32.722427 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin
2026-03-25 04:24:32.722435 | orchestrator | (): 'NoneType' object is not subscriptable
2026-03-25 04:24:32.722451 | orchestrator |
2026-03-25 04:24:32.722459 | orchestrator | PLAY [Apply role frr] **********************************************************
2026-03-25 04:24:32.722466 | orchestrator |
2026-03-25 04:24:32.722474 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ********
2026-03-25 04:24:32.722482 | orchestrator | Wednesday 25 March 2026 04:24:16 +0000 (0:00:01.635) 0:00:01.635 *******
2026-03-25 04:24:32.722490 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager
2026-03-25 04:24:32.722499 | orchestrator |
2026-03-25 04:24:32.722506 | orchestrator | TASK [osism.services.frr : Pin frr package version] ****************************
2026-03-25 04:24:32.722514 | orchestrator | Wednesday 25 March 2026 04:24:17 +0000 (0:00:01.342) 0:00:02.978 *******
2026-03-25 04:24:32.722522 | orchestrator | ok: [testbed-manager]
2026-03-25 04:24:32.722531 | orchestrator |
2026-03-25 04:24:32.722539 | orchestrator | TASK [osism.services.frr : Install frr package] ********************************
2026-03-25 04:24:32.722547 | orchestrator | Wednesday 25 March 2026 04:24:19 +0000 (0:00:02.002) 0:00:04.980 *******
2026-03-25 04:24:32.722580 | orchestrator | ok: [testbed-manager]
2026-03-25 04:24:32.722588 | orchestrator |
2026-03-25 04:24:32.722596 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] *********************
2026-03-25 04:24:32.722604 | orchestrator | Wednesday 25 March 2026 04:24:21 +0000 (0:00:02.287) 0:00:07.268 *******
2026-03-25 04:24:32.722612 | orchestrator | ok: [testbed-manager]
2026-03-25 04:24:32.722620 | orchestrator |
2026-03-25 04:24:32.722628 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/daemons] ************************
2026-03-25 04:24:32.722637 | orchestrator | Wednesday 25 March 2026 04:24:22 +0000 (0:00:01.010) 0:00:08.279 *******
2026-03-25 04:24:32.722644 | orchestrator | ok: [testbed-manager]
2026-03-25 04:24:32.722653 | orchestrator |
2026-03-25 04:24:32.722660 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ******************************
2026-03-25 04:24:32.722668 | orchestrator | Wednesday 25 March 2026 04:24:23 +0000 (0:00:00.995) 0:00:09.274 *******
2026-03-25 04:24:32.722675 | orchestrator | ok: [testbed-manager]
2026-03-25 04:24:32.722683 | orchestrator |
2026-03-25 04:24:32.722690 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] ***
2026-03-25 04:24:32.722698 | orchestrator | Wednesday 25 March 2026 04:24:25 +0000 (0:00:01.740) 0:00:11.014 *******
2026-03-25 04:24:32.722705 | orchestrator | skipping: [testbed-manager]
2026-03-25 04:24:32.722794 | orchestrator |
2026-03-25 04:24:32.722807 | orchestrator | TASK [osism.services.frr : Copy frr.conf file from the configuration repository] ***
2026-03-25 04:24:32.722815 | orchestrator | Wednesday 25 March 2026 04:24:25 +0000 (0:00:00.161) 0:00:11.176 *******
2026-03-25 04:24:32.722824 | orchestrator | skipping: [testbed-manager]
2026-03-25 04:24:32.722832 | orchestrator |
2026-03-25 04:24:32.722859 | orchestrator | TASK [osism.services.frr : Copy default frr.conf file of type k3s_cilium] ******
2026-03-25 04:24:32.722871 | orchestrator | Wednesday 25 March 2026 04:24:26 +0000 (0:00:00.219) 0:00:11.396 *******
2026-03-25 04:24:32.722879 | orchestrator | ok: [testbed-manager]
2026-03-25 04:24:32.722887 | orchestrator |
2026-03-25 04:24:32.722895 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ******************************
2026-03-25 04:24:32.722903 | orchestrator | Wednesday 25 March 2026 04:24:27 +0000 (0:00:01.072) 0:00:12.469 *******
2026-03-25 04:24:32.722911 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1})
2026-03-25 04:24:32.722919 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.send_redirects', 'value': 0})
2026-03-25 04:24:32.722930 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0})
2026-03-25 04:24:32.722939 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1})
2026-03-25 04:24:32.722947 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1})
2026-03-25 04:24:32.722956 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2})
2026-03-25 04:24:32.722964 | orchestrator |
2026-03-25 04:24:32.722972 | orchestrator | TASK [osism.services.frr : Manage frr service] *********************************
2026-03-25 04:24:32.722979 | orchestrator | Wednesday 25 March 2026 04:24:30 +0000 (0:00:03.099) 0:00:15.568 *******
2026-03-25 04:24:32.722988 | orchestrator | ok: [testbed-manager]
2026-03-25 04:24:32.722999 | orchestrator |
2026-03-25 04:24:32.723008 | orchestrator | PLAY RECAP *********************************************************************
2026-03-25 04:24:32.723021 | orchestrator | testbed-manager : ok=9  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-25 04:24:32.723033 | orchestrator |
2026-03-25 04:24:32.723042 | orchestrator |
2026-03-25 04:24:32.723051 | orchestrator | TASKS RECAP ********************************************************************
2026-03-25 04:24:32.723060 | orchestrator | Wednesday 25 March 2026 04:24:32 +0000 (0:00:02.064) 0:00:17.633 *******
2026-03-25 04:24:32.723071 | orchestrator | ===============================================================================
2026-03-25 04:24:32.723081 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 3.10s
2026-03-25 04:24:32.723102 | orchestrator | osism.services.frr : Install frr package -------------------------------- 2.29s
2026-03-25 04:24:32.723132 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 2.06s
2026-03-25 04:24:32.723143 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 2.00s
2026-03-25 04:24:32.723151 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 1.74s
2026-03-25 04:24:32.723161 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 1.34s
2026-03-25 04:24:32.723170 | orchestrator | osism.services.frr : Copy default frr.conf file of type k3s_cilium ------ 1.07s
2026-03-25 04:24:32.723178 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 1.01s
2026-03-25 04:24:32.723186 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 1.00s
2026-03-25 04:24:32.723197 | orchestrator | osism.services.frr : Copy frr.conf file from the configuration repository --- 0.22s
2026-03-25 04:24:32.723207 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 0.16s
2026-03-25 04:24:33.152985 | orchestrator | + osism apply kubernetes
2026-03-25 04:24:35.635947 | orchestrator | 2026-03-25 04:24:35 | INFO  | Task 09770380-9870-4188-ba06-fe8b97624986 (kubernetes) was prepared for execution.
2026-03-25 04:24:35.636018 | orchestrator | 2026-03-25 04:24:35 | INFO  | It takes a moment until task 09770380-9870-4188-ba06-fe8b97624986 (kubernetes) has been started and output is visible here.
2026-03-25 04:25:26.026899 | orchestrator |
2026-03-25 04:25:26.027004 | orchestrator | PLAY [Prepare all k3s nodes] ***************************************************
2026-03-25 04:25:26.027034 | orchestrator |
2026-03-25 04:25:26.027056 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] ***
2026-03-25 04:25:26.027067 | orchestrator | Wednesday 25 March 2026 04:24:44 +0000 (0:00:03.199) 0:00:03.199 *******
2026-03-25 04:25:26.027077 | orchestrator | ok: [testbed-node-3]
2026-03-25 04:25:26.027088 | orchestrator | ok: [testbed-node-4]
2026-03-25 04:25:26.027098 | orchestrator | ok: [testbed-node-5]
2026-03-25 04:25:26.027108 | orchestrator | ok: [testbed-node-0]
2026-03-25 04:25:26.027118 | orchestrator | ok: [testbed-node-1]
2026-03-25 04:25:26.027127 | orchestrator | ok: [testbed-node-2]
2026-03-25 04:25:26.027137 | orchestrator |
2026-03-25 04:25:26.027146 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] **************************
2026-03-25 04:25:26.027156 | orchestrator | Wednesday 25 March 2026 04:24:49 +0000 (0:00:04.634) 0:00:07.834 *******
2026-03-25 04:25:26.027166 | orchestrator | skipping: [testbed-node-3]
2026-03-25 04:25:26.027177 | orchestrator | skipping: [testbed-node-4]
2026-03-25 04:25:26.027187 | orchestrator | skipping: [testbed-node-5]
2026-03-25 04:25:26.027197 | orchestrator | skipping: [testbed-node-0]
2026-03-25 04:25:26.027207 | orchestrator | skipping: [testbed-node-1]
2026-03-25 04:25:26.027217 | orchestrator | skipping: [testbed-node-2]
2026-03-25 04:25:26.027227 | orchestrator |
2026-03-25 04:25:26.027236 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ******************************
2026-03-25 04:25:26.027246 | orchestrator | Wednesday 25 March 2026 04:24:51 +0000 (0:00:02.149) 0:00:09.983 *******
2026-03-25 04:25:26.027256 | orchestrator | skipping: [testbed-node-3]
2026-03-25 04:25:26.027266 | orchestrator | skipping: [testbed-node-4]
2026-03-25 04:25:26.027275 | orchestrator | skipping: [testbed-node-5]
2026-03-25 04:25:26.027285 | orchestrator | skipping: [testbed-node-0]
2026-03-25 04:25:26.027294 | orchestrator | skipping: [testbed-node-1]
2026-03-25 04:25:26.027304 | orchestrator | skipping: [testbed-node-2]
2026-03-25 04:25:26.027313 | orchestrator |
2026-03-25 04:25:26.027323 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] *************************************
2026-03-25 04:25:26.027333 | orchestrator | Wednesday 25 March 2026 04:24:53 +0000 (0:00:02.072) 0:00:12.056 *******
2026-03-25 04:25:26.027343 | orchestrator | ok: [testbed-node-3]
2026-03-25 04:25:26.027361 | orchestrator | ok: [testbed-node-4]
2026-03-25 04:25:26.027378 | orchestrator | ok: [testbed-node-5]
2026-03-25 04:25:26.027396 | orchestrator | ok: [testbed-node-1]
2026-03-25 04:25:26.027498 | orchestrator | ok: [testbed-node-2]
2026-03-25 04:25:26.027518 | orchestrator | ok: [testbed-node-0]
2026-03-25 04:25:26.027529 | orchestrator |
2026-03-25 04:25:26.027540 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] *************************************
2026-03-25 04:25:26.027553 | orchestrator | Wednesday 25 March 2026 04:24:56 +0000 (0:00:03.286) 0:00:15.343 *******
2026-03-25 04:25:26.027564 | orchestrator | ok: [testbed-node-3]
2026-03-25 04:25:26.027575 | orchestrator | ok: [testbed-node-4]
2026-03-25 04:25:26.027586 | orchestrator | ok: [testbed-node-5]
2026-03-25 04:25:26.027598 | orchestrator | ok: [testbed-node-0]
2026-03-25 04:25:26.027609 | orchestrator | ok: [testbed-node-1]
2026-03-25 04:25:26.027620 | orchestrator | ok: [testbed-node-2]
2026-03-25 04:25:26.027631 | orchestrator |
2026-03-25 04:25:26.027642 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] **************************
2026-03-25 04:25:26.027654 | orchestrator | Wednesday 25 March 2026 04:24:59 +0000 (0:00:02.696) 0:00:18.040 *******
2026-03-25 04:25:26.027665 | orchestrator | ok: [testbed-node-3]
2026-03-25 04:25:26.027703 | orchestrator | ok: [testbed-node-4]
2026-03-25 04:25:26.027716 | orchestrator | ok: [testbed-node-5]
2026-03-25 04:25:26.027727 | orchestrator | ok: [testbed-node-0]
2026-03-25 04:25:26.027738 | orchestrator | ok: [testbed-node-1]
2026-03-25 04:25:26.027747 | orchestrator | ok: [testbed-node-2]
2026-03-25 04:25:26.027757 | orchestrator |
2026-03-25 04:25:26.027766 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] *******************
2026-03-25 04:25:26.027776 | orchestrator | Wednesday 25 March 2026 04:25:01 +0000 (0:00:02.425) 0:00:20.465 *******
2026-03-25 04:25:26.027791 | orchestrator | skipping: [testbed-node-3]
2026-03-25 04:25:26.027805 | orchestrator | skipping: [testbed-node-4]
2026-03-25 04:25:26.027821 | orchestrator | skipping: [testbed-node-5]
2026-03-25 04:25:26.027847 | orchestrator | skipping: [testbed-node-0]
2026-03-25 04:25:26.027863 | orchestrator | skipping: [testbed-node-1]
2026-03-25 04:25:26.027877 | orchestrator | skipping: [testbed-node-2]
2026-03-25 04:25:26.027891 | orchestrator |
2026-03-25 04:25:26.027906 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ******************************************
2026-03-25 04:25:26.027922 | orchestrator | Wednesday 25 March 2026 04:25:03 +0000 (0:00:02.198) 0:00:22.663 *******
2026-03-25 04:25:26.027937 | orchestrator | skipping: [testbed-node-3]
2026-03-25 04:25:26.027952 | orchestrator | skipping: [testbed-node-4]
2026-03-25 04:25:26.027968 | orchestrator | skipping: [testbed-node-5]
2026-03-25 04:25:26.027983 | orchestrator | skipping: [testbed-node-0]
2026-03-25 04:25:26.028000 | orchestrator | skipping: [testbed-node-1]
2026-03-25 04:25:26.028017 | orchestrator | skipping: [testbed-node-2]
2026-03-25 04:25:26.028033 | orchestrator |
2026-03-25 04:25:26.028049 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] **************
2026-03-25 04:25:26.028059 | orchestrator | Wednesday 25 March 2026 04:25:05 +0000 (0:00:01.971) 0:00:24.635 *******
2026-03-25 04:25:26.028069 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)
2026-03-25 04:25:26.028078 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-03-25 04:25:26.028088 | orchestrator | skipping: [testbed-node-3]
2026-03-25 04:25:26.028097 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)
2026-03-25 04:25:26.028107 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-03-25 04:25:26.028116 | orchestrator | skipping: [testbed-node-4]
2026-03-25 04:25:26.028125 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)
2026-03-25 04:25:26.028135 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-03-25 04:25:26.028144 | orchestrator | skipping: [testbed-node-5]
2026-03-25 04:25:26.028154 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)
2026-03-25 04:25:26.028163 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-03-25 04:25:26.028173 | orchestrator | skipping: [testbed-node-0]
2026-03-25 04:25:26.028214 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)
2026-03-25 04:25:26.028225 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-03-25 04:25:26.028235 | orchestrator | skipping: [testbed-node-1]
2026-03-25 04:25:26.028244 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)
2026-03-25 04:25:26.028254 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-03-25 04:25:26.028263 | orchestrator | skipping: [testbed-node-2]
2026-03-25 04:25:26.028273 | orchestrator |
2026-03-25 04:25:26.028282 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] *********************
2026-03-25 04:25:26.028292 | orchestrator | Wednesday 25 March 2026 04:25:08 +0000 (0:00:02.253) 0:00:26.889 *******
2026-03-25 04:25:26.028301 | orchestrator | skipping: [testbed-node-3]
2026-03-25 04:25:26.028311 | orchestrator | skipping: [testbed-node-4]
2026-03-25 04:25:26.028320 | orchestrator | skipping: [testbed-node-5]
2026-03-25 04:25:26.028330 | orchestrator | skipping: [testbed-node-0]
2026-03-25 04:25:26.028339 | orchestrator | skipping: [testbed-node-1]
2026-03-25 04:25:26.028349 | orchestrator | skipping: [testbed-node-2]
2026-03-25 04:25:26.028359 | orchestrator |
2026-03-25 04:25:26.028368 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] ***
2026-03-25 04:25:26.028379 | orchestrator | Wednesday 25 March 2026 04:25:10 +0000 (0:00:02.613) 0:00:29.503 *******
2026-03-25 04:25:26.028389 | orchestrator | ok: [testbed-node-3]
2026-03-25 04:25:26.028398 | orchestrator | ok: [testbed-node-4]
2026-03-25 04:25:26.028409 | orchestrator | ok: [testbed-node-5]
2026-03-25 04:25:26.028426 | orchestrator | ok: [testbed-node-0]
2026-03-25 04:25:26.028441 | orchestrator | ok: [testbed-node-1]
2026-03-25 04:25:26.028456 | orchestrator | ok: [testbed-node-2]
2026-03-25 04:25:26.028473 | orchestrator |
2026-03-25 04:25:26.028491 | orchestrator | TASK [k3s_download : Download k3s binary x64] **********************************
2026-03-25 04:25:26.028507 | orchestrator | Wednesday 25 March 2026 04:25:12 +0000 (0:00:02.221) 0:00:31.724 *******
2026-03-25 04:25:26.028524 | orchestrator | ok: [testbed-node-4]
2026-03-25 04:25:26.028540 | orchestrator | ok: [testbed-node-5]
2026-03-25 04:25:26.028555 | orchestrator | ok: [testbed-node-3]
2026-03-25 04:25:26.028564 | orchestrator | ok: [testbed-node-1]
2026-03-25 04:25:26.028573 | orchestrator | ok: [testbed-node-0]
2026-03-25 04:25:26.028583 | orchestrator | ok: [testbed-node-2]
2026-03-25 04:25:26.028592 | orchestrator |
2026-03-25 04:25:26.028602 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ********************************
2026-03-25 04:25:26.028611 | orchestrator | Wednesday 25 March 2026 04:25:16 +0000 (0:00:03.741) 0:00:35.466 *******
2026-03-25 04:25:26.028621 | orchestrator | skipping: [testbed-node-3]
2026-03-25 04:25:26.028630 | orchestrator | skipping: [testbed-node-4]
2026-03-25 04:25:26.028640 | orchestrator | skipping: [testbed-node-5]
2026-03-25 04:25:26.028649 | orchestrator | skipping: [testbed-node-0]
2026-03-25 04:25:26.028659 | orchestrator | skipping: [testbed-node-1]
2026-03-25 04:25:26.028668 | orchestrator | skipping: [testbed-node-2]
2026-03-25 04:25:26.028702 | orchestrator |
2026-03-25 04:25:26.028713 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ********************************
2026-03-25 04:25:26.028722 | orchestrator | Wednesday 25 March 2026 04:25:18 +0000 (0:00:02.238) 0:00:37.704 *******
2026-03-25 04:25:26.028732 | orchestrator | skipping: [testbed-node-3]
2026-03-25 04:25:26.028741 | orchestrator | skipping: [testbed-node-4]
2026-03-25 04:25:26.028751 | orchestrator | skipping: [testbed-node-5]
2026-03-25 04:25:26.028760 | orchestrator | skipping: [testbed-node-0]
2026-03-25 04:25:26.028774 | orchestrator | skipping: [testbed-node-1]
2026-03-25 04:25:26.028783 | orchestrator | skipping: [testbed-node-2]
2026-03-25 04:25:26.028793 | orchestrator |
2026-03-25 04:25:26.028803 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] ***
2026-03-25 04:25:26.028814 | orchestrator | Wednesday 25 March 2026 04:25:21 +0000 (0:00:02.540) 0:00:40.244 *******
2026-03-25 04:25:26.028832 | orchestrator | skipping: [testbed-node-3]
2026-03-25 04:25:26.028842 | orchestrator | skipping: [testbed-node-4]
2026-03-25 04:25:26.028851 | orchestrator | skipping: [testbed-node-5]
2026-03-25 04:25:26.028861 | orchestrator | skipping: [testbed-node-0]
2026-03-25 04:25:26.028871 | orchestrator | skipping: [testbed-node-1]
2026-03-25 04:25:26.028880 | orchestrator | skipping: [testbed-node-2]
2026-03-25 04:25:26.028889 | orchestrator |
2026-03-25 04:25:26.028899 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] ***************
2026-03-25 04:25:26.028908 | orchestrator | Wednesday 25 March 2026 04:25:23 +0000 (0:00:01.871) 0:00:42.116 *******
2026-03-25 04:25:26.028918 | orchestrator | skipping: [testbed-node-3] => (item=rancher)
2026-03-25 04:25:26.028928 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)
2026-03-25 04:25:26.028937 | orchestrator | skipping: [testbed-node-3]
2026-03-25 04:25:26.028947 | orchestrator | skipping: [testbed-node-4] => (item=rancher)
2026-03-25 04:25:26.028956 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)
2026-03-25 04:25:26.028966 | orchestrator | skipping: [testbed-node-4]
2026-03-25 04:25:26.028975 | orchestrator | skipping: [testbed-node-5] => (item=rancher)
2026-03-25 04:25:26.028984 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)
2026-03-25 04:25:26.028994 | orchestrator | skipping: [testbed-node-5]
2026-03-25 04:25:26.029003 | orchestrator | skipping: [testbed-node-0] => (item=rancher)
2026-03-25 04:25:26.029013 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)
2026-03-25 04:25:26.029022 | orchestrator | skipping: [testbed-node-0]
2026-03-25 04:25:26.029032 | orchestrator | skipping: [testbed-node-1] => (item=rancher)
2026-03-25 04:25:26.029041 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)
2026-03-25 04:25:26.029051 | orchestrator | skipping: [testbed-node-1]
2026-03-25 04:25:26.029060 | orchestrator | skipping: [testbed-node-2] => (item=rancher)
2026-03-25 04:25:26.029069 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)
2026-03-25 04:25:26.029079 | orchestrator | skipping: [testbed-node-2]
2026-03-25 04:25:26.029088 | orchestrator |
2026-03-25 04:25:26.029098 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] ***
2026-03-25 04:25:26.029107 | orchestrator | Wednesday 25 March 2026 04:25:25 +0000 (0:00:02.208) 0:00:44.325 *******
2026-03-25 04:25:26.029117 | orchestrator | skipping: [testbed-node-3]
2026-03-25 04:25:26.029127 | orchestrator | skipping: [testbed-node-4]
2026-03-25 04:25:26.029144 | orchestrator | skipping: [testbed-node-5]
2026-03-25 04:27:24.191463 | orchestrator | skipping: [testbed-node-0]
2026-03-25 04:27:24.191567 | orchestrator | skipping: [testbed-node-1]
2026-03-25 04:27:24.191575 | orchestrator | skipping: [testbed-node-2]
2026-03-25 04:27:24.191581 | orchestrator |
2026-03-25 04:27:24.191590 | orchestrator | TASK [k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured] ***
2026-03-25 04:27:24.191674 | orchestrator | Wednesday 25 March 2026 04:25:27 +0000 (0:00:01.960) 0:00:46.285 *******
2026-03-25 04:27:24.191681 | orchestrator | skipping: [testbed-node-3]
2026-03-25 04:27:24.191687 | orchestrator | skipping: [testbed-node-4]
2026-03-25 04:27:24.191694 | orchestrator | skipping: [testbed-node-5]
2026-03-25 04:27:24.191719 | orchestrator | skipping: [testbed-node-0]
2026-03-25 04:27:24.191726 | orchestrator | skipping: [testbed-node-1]
2026-03-25 04:27:24.191735 | orchestrator | skipping: [testbed-node-2]
2026-03-25 04:27:24.191741 | orchestrator |
2026-03-25 04:27:24.191746 | orchestrator | PLAY [Deploy k3s master nodes] *************************************************
2026-03-25 04:27:24.191752 | orchestrator |
2026-03-25 04:27:24.191758 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] ***
2026-03-25 04:27:24.191765 | orchestrator | Wednesday 25 March 2026 04:25:30 +0000 (0:00:03.089) 0:00:49.375 *******
2026-03-25 04:27:24.191771 | orchestrator | ok: [testbed-node-0]
2026-03-25 04:27:24.191778 | orchestrator | ok: [testbed-node-1]
2026-03-25 04:27:24.191784 | orchestrator | ok: [testbed-node-2]
2026-03-25 04:27:24.191790 | orchestrator |
2026-03-25 04:27:24.191796 | orchestrator | TASK [k3s_server : Stop k3s-init] **********************************************
2026-03-25 04:27:24.191823 | orchestrator | Wednesday 25 March 2026 04:25:32 +0000 (0:00:01.984) 0:00:51.359 *******
2026-03-25 04:27:24.191829 | orchestrator | ok: [testbed-node-2]
2026-03-25 04:27:24.191835 | orchestrator | ok: [testbed-node-0]
2026-03-25 04:27:24.191841 | orchestrator | ok: [testbed-node-1]
2026-03-25 04:27:24.191846 | orchestrator |
2026-03-25 04:27:24.191852 | orchestrator | TASK [k3s_server : Stop k3s] ***************************************************
2026-03-25 04:27:24.191857 | orchestrator | Wednesday 25 March 2026 04:25:34 +0000 (0:00:02.223) 0:00:53.582 *******
2026-03-25 04:27:24.191863 | orchestrator | changed: [testbed-node-0]
2026-03-25 04:27:24.191868 | orchestrator | changed: [testbed-node-1]
2026-03-25 04:27:24.191873 | orchestrator | changed: [testbed-node-2]
2026-03-25 04:27:24.191878 | orchestrator |
2026-03-25 04:27:24.191883 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] ****************************
2026-03-25 04:27:24.191890 | orchestrator | Wednesday 25 March 2026 04:25:37 +0000 (0:00:02.298) 0:00:55.881 *******
2026-03-25 04:27:24.191895 | orchestrator | ok: [testbed-node-0]
2026-03-25 04:27:24.191901 | orchestrator | ok: [testbed-node-1]
2026-03-25 04:27:24.191907 | orchestrator | ok: [testbed-node-2]
2026-03-25 04:27:24.191913 | orchestrator |
2026-03-25 04:27:24.191918 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] *********************************
2026-03-25 04:27:24.191924 | orchestrator | Wednesday 25 March 2026 04:25:39 +0000 (0:00:02.240) 0:00:58.122 *******
2026-03-25 04:27:24.191930 | orchestrator | skipping: [testbed-node-0]
2026-03-25 04:27:24.191936 | orchestrator | skipping: [testbed-node-1]
2026-03-25 04:27:24.191943 | orchestrator | skipping: [testbed-node-2]
2026-03-25 04:27:24.191949 | orchestrator |
2026-03-25 04:27:24.191955 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] **************************
2026-03-25 04:27:24.191961 | orchestrator | Wednesday 25 March 2026 04:25:40 +0000 (0:00:01.555) 0:00:59.678 *******
2026-03-25 04:27:24.191966 | orchestrator | ok: [testbed-node-0]
2026-03-25 04:27:24.191971 | orchestrator | ok: [testbed-node-1]
2026-03-25 04:27:24.191977 | orchestrator | ok: [testbed-node-2]
2026-03-25 04:27:24.191983 | orchestrator |
2026-03-25 04:27:24.191989 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] **************************
2026-03-25 04:27:24.191996 | orchestrator | Wednesday 25 March 2026 04:25:42 +0000 (0:00:02.107) 0:01:01.785 *******
2026-03-25 04:27:24.192001 | orchestrator | ok: [testbed-node-1]
2026-03-25 04:27:24.192007 | orchestrator | ok: [testbed-node-0]
2026-03-25 04:27:24.192013 | orchestrator | ok: [testbed-node-2]
2026-03-25 04:27:24.192019 | orchestrator |
2026-03-25 04:27:24.192025 | orchestrator | TASK [k3s_server : Deploy vip manifest] ****************************************
2026-03-25 04:27:24.192031 | orchestrator | Wednesday 25 March 2026 04:25:45 +0000 (0:00:02.321) 0:01:04.106 *******
2026-03-25 04:27:24.192037 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-25 04:27:24.192043 | orchestrator |
2026-03-25 04:27:24.192049 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] *******************************
2026-03-25 04:27:24.192055 | orchestrator | Wednesday 25 March 2026 04:25:47 +0000 (0:00:02.125) 0:01:06.232 *******
2026-03-25 04:27:24.192061 | orchestrator | ok: [testbed-node-0]
2026-03-25 04:27:24.192068 | orchestrator | ok: [testbed-node-1]
2026-03-25 04:27:24.192075 | orchestrator | ok: [testbed-node-2]
2026-03-25 04:27:24.192081 | orchestrator |
2026-03-25 04:27:24.192088 | orchestrator | TASK [k3s_server : Create manifests directory on first master] *****************
2026-03-25 04:27:24.192094 | orchestrator | Wednesday 25 March 2026 04:25:50 +0000 (0:00:02.708) 0:01:08.941 *******
2026-03-25 04:27:24.192101 | orchestrator | skipping: [testbed-node-1]
2026-03-25 04:27:24.192107 | orchestrator | ok: [testbed-node-0]
2026-03-25 04:27:24.192114 | orchestrator | skipping: [testbed-node-2]
2026-03-25 04:27:24.192121 | orchestrator |
2026-03-25 04:27:24.192140 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] *****************
2026-03-25 04:27:24.192147 | orchestrator | Wednesday 25 March 2026 04:25:52 +0000 (0:00:01.937) 0:01:10.878 *******
2026-03-25 04:27:24.192160 | orchestrator | skipping: [testbed-node-1]
2026-03-25 04:27:24.192167 | orchestrator | skipping: [testbed-node-2]
2026-03-25 04:27:24.192182 | orchestrator | changed: [testbed-node-0]
2026-03-25 04:27:24.192189 | orchestrator |
2026-03-25 04:27:24.192195 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] **************************
2026-03-25 04:27:24.192202 | orchestrator | Wednesday 25 March 2026 04:25:53 +0000 (0:00:01.892) 0:01:12.771 *******
2026-03-25 04:27:24.192208 | orchestrator | skipping: [testbed-node-1]
2026-03-25 04:27:24.192215 | orchestrator | skipping: [testbed-node-2]
2026-03-25 04:27:24.192222 | orchestrator | changed: [testbed-node-0]
2026-03-25 04:27:24.192228 | orchestrator |
2026-03-25 04:27:24.192234 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************
2026-03-25 04:27:24.192240 | orchestrator | Wednesday 25 March 2026 04:25:56 +0000 (0:00:02.574) 0:01:15.346 *******
2026-03-25 04:27:24.192246 | orchestrator | skipping: [testbed-node-0]
2026-03-25 04:27:24.192252 | orchestrator | skipping: [testbed-node-1]
2026-03-25 04:27:24.192278 | orchestrator | skipping: [testbed-node-2]
2026-03-25 04:27:24.192284 | orchestrator |
2026-03-25 04:27:24.192290 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] ***********************************
2026-03-25 04:27:24.192295 | orchestrator | Wednesday 25 March 2026 04:25:58 +0000 (0:00:01.555) 0:01:16.901 *******
2026-03-25 04:27:24.192301 | orchestrator | skipping: [testbed-node-0]
2026-03-25 04:27:24.192307 | orchestrator | skipping: [testbed-node-1]
2026-03-25 04:27:24.192313 | orchestrator | skipping: [testbed-node-2]
2026-03-25 04:27:24.192319 | orchestrator |
2026-03-25 04:27:24.192325 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] *********
2026-03-25 04:27:24.192330 | orchestrator | Wednesday 25 March 2026 04:25:59 +0000 (0:00:01.824) 0:01:18.725 *******
2026-03-25 04:27:24.192335 | orchestrator | changed: [testbed-node-0]
2026-03-25 04:27:24.192341 | orchestrator | changed: [testbed-node-1]
2026-03-25 04:27:24.192347 | orchestrator | changed: [testbed-node-2]
2026-03-25 04:27:24.192353 | orchestrator |
2026-03-25 04:27:24.192358 | orchestrator | TASK [k3s_server : Detect Kubernetes version for label compatibility] **********
2026-03-25 04:27:24.192364 | orchestrator | Wednesday 25 March 2026 04:26:02 +0000 (0:00:02.383) 0:01:21.109 *******
2026-03-25 04:27:24.192369 | orchestrator | ok: [testbed-node-0]
2026-03-25 04:27:24.192375 | orchestrator | ok: [testbed-node-1]
2026-03-25 04:27:24.192381 | orchestrator | ok: [testbed-node-2]
2026-03-25 04:27:24.192386 | orchestrator |
2026-03-25 04:27:24.192391 | orchestrator | TASK [k3s_server : Set node role label selector based on Kubernetes version] ***
2026-03-25 04:27:24.192397 | orchestrator | Wednesday 25 March 2026 04:26:04 +0000 (0:00:02.003) 0:01:23.112 *******
2026-03-25 04:27:24.192403 | orchestrator | ok: [testbed-node-0]
2026-03-25 04:27:24.192408 | orchestrator | ok: [testbed-node-1]
2026-03-25 04:27:24.192414 | orchestrator | ok: [testbed-node-2]
2026-03-25 04:27:24.192420 | orchestrator |
2026-03-25 04:27:24.192427
| orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] *** 2026-03-25 04:27:24.192432 | orchestrator | Wednesday 25 March 2026 04:26:05 +0000 (0:00:01.637) 0:01:24.749 ******* 2026-03-25 04:27:24.192438 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-03-25 04:27:24.192445 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-03-25 04:27:24.192451 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-03-25 04:27:24.192457 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-03-25 04:27:24.192463 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-03-25 04:27:24.192469 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 
2026-03-25 04:27:24.192481 | orchestrator | ok: [testbed-node-0] 2026-03-25 04:27:24.192486 | orchestrator | ok: [testbed-node-1] 2026-03-25 04:27:24.192492 | orchestrator | ok: [testbed-node-2] 2026-03-25 04:27:24.192497 | orchestrator | 2026-03-25 04:27:24.192503 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ****************************** 2026-03-25 04:27:24.192508 | orchestrator | Wednesday 25 March 2026 04:26:29 +0000 (0:00:23.519) 0:01:48.268 ******* 2026-03-25 04:27:24.192514 | orchestrator | skipping: [testbed-node-0] 2026-03-25 04:27:24.192520 | orchestrator | skipping: [testbed-node-1] 2026-03-25 04:27:24.192525 | orchestrator | skipping: [testbed-node-2] 2026-03-25 04:27:24.192531 | orchestrator | 2026-03-25 04:27:24.192537 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] ********* 2026-03-25 04:27:24.192542 | orchestrator | Wednesday 25 March 2026 04:26:31 +0000 (0:00:01.525) 0:01:49.794 ******* 2026-03-25 04:27:24.192547 | orchestrator | changed: [testbed-node-0] 2026-03-25 04:27:24.192553 | orchestrator | changed: [testbed-node-1] 2026-03-25 04:27:24.192558 | orchestrator | changed: [testbed-node-2] 2026-03-25 04:27:24.192564 | orchestrator | 2026-03-25 04:27:24.192570 | orchestrator | TASK [k3s_server : Copy K3s service file] ************************************** 2026-03-25 04:27:24.192576 | orchestrator | Wednesday 25 March 2026 04:26:33 +0000 (0:00:02.218) 0:01:52.013 ******* 2026-03-25 04:27:24.192581 | orchestrator | ok: [testbed-node-0] 2026-03-25 04:27:24.192586 | orchestrator | ok: [testbed-node-1] 2026-03-25 04:27:24.192615 | orchestrator | ok: [testbed-node-2] 2026-03-25 04:27:24.192622 | orchestrator | 2026-03-25 04:27:24.192628 | orchestrator | TASK [k3s_server : Enable and check K3s service] ******************************* 2026-03-25 04:27:24.192645 | orchestrator | Wednesday 25 March 2026 04:26:35 +0000 (0:00:02.526) 0:01:54.539 ******* 2026-03-25 04:27:24.192651 | orchestrator 
| changed: [testbed-node-1] 2026-03-25 04:27:24.192657 | orchestrator | changed: [testbed-node-0] 2026-03-25 04:27:24.192663 | orchestrator | changed: [testbed-node-2] 2026-03-25 04:27:24.192669 | orchestrator | 2026-03-25 04:27:24.192675 | orchestrator | TASK [k3s_server : Wait for node-token] **************************************** 2026-03-25 04:27:24.192681 | orchestrator | Wednesday 25 March 2026 04:27:18 +0000 (0:00:42.480) 0:02:37.020 ******* 2026-03-25 04:27:24.192688 | orchestrator | ok: [testbed-node-0] 2026-03-25 04:27:24.192694 | orchestrator | ok: [testbed-node-1] 2026-03-25 04:27:24.192700 | orchestrator | ok: [testbed-node-2] 2026-03-25 04:27:24.192707 | orchestrator | 2026-03-25 04:27:24.192714 | orchestrator | TASK [k3s_server : Register node-token file access mode] *********************** 2026-03-25 04:27:24.192720 | orchestrator | Wednesday 25 March 2026 04:27:20 +0000 (0:00:01.847) 0:02:38.867 ******* 2026-03-25 04:27:24.192726 | orchestrator | ok: [testbed-node-0] 2026-03-25 04:27:24.192732 | orchestrator | ok: [testbed-node-1] 2026-03-25 04:27:24.192738 | orchestrator | ok: [testbed-node-2] 2026-03-25 04:27:24.192743 | orchestrator | 2026-03-25 04:27:24.192749 | orchestrator | TASK [k3s_server : Change file access node-token] ****************************** 2026-03-25 04:27:24.192756 | orchestrator | Wednesday 25 March 2026 04:27:21 +0000 (0:00:01.871) 0:02:40.739 ******* 2026-03-25 04:27:24.192763 | orchestrator | changed: [testbed-node-0] 2026-03-25 04:27:24.192769 | orchestrator | changed: [testbed-node-1] 2026-03-25 04:27:24.192775 | orchestrator | changed: [testbed-node-2] 2026-03-25 04:27:24.192781 | orchestrator | 2026-03-25 04:27:24.192796 | orchestrator | TASK [k3s_server : Read node-token from master] ******************************** 2026-03-25 04:28:18.258968 | orchestrator | Wednesday 25 March 2026 04:27:24 +0000 (0:00:02.221) 0:02:42.960 ******* 2026-03-25 04:28:18.259046 | orchestrator | ok: [testbed-node-1] 2026-03-25 
04:28:18.259053 | orchestrator | ok: [testbed-node-0] 2026-03-25 04:28:18.259058 | orchestrator | ok: [testbed-node-2] 2026-03-25 04:28:18.259062 | orchestrator | 2026-03-25 04:28:18.259067 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************ 2026-03-25 04:28:18.259071 | orchestrator | Wednesday 25 March 2026 04:27:26 +0000 (0:00:01.995) 0:02:44.956 ******* 2026-03-25 04:28:18.259075 | orchestrator | ok: [testbed-node-0] 2026-03-25 04:28:18.259093 | orchestrator | ok: [testbed-node-1] 2026-03-25 04:28:18.259107 | orchestrator | ok: [testbed-node-2] 2026-03-25 04:28:18.259111 | orchestrator | 2026-03-25 04:28:18.259115 | orchestrator | TASK [k3s_server : Restore node-token file access] ***************************** 2026-03-25 04:28:18.259119 | orchestrator | Wednesday 25 March 2026 04:27:27 +0000 (0:00:01.649) 0:02:46.605 ******* 2026-03-25 04:28:18.259123 | orchestrator | changed: [testbed-node-0] 2026-03-25 04:28:18.259128 | orchestrator | changed: [testbed-node-1] 2026-03-25 04:28:18.259131 | orchestrator | changed: [testbed-node-2] 2026-03-25 04:28:18.259135 | orchestrator | 2026-03-25 04:28:18.259139 | orchestrator | TASK [k3s_server : Create directory .kube] ************************************* 2026-03-25 04:28:18.259143 | orchestrator | Wednesday 25 March 2026 04:27:29 +0000 (0:00:01.934) 0:02:48.540 ******* 2026-03-25 04:28:18.259146 | orchestrator | ok: [testbed-node-0] 2026-03-25 04:28:18.259150 | orchestrator | ok: [testbed-node-1] 2026-03-25 04:28:18.259154 | orchestrator | ok: [testbed-node-2] 2026-03-25 04:28:18.259157 | orchestrator | 2026-03-25 04:28:18.259161 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ******************** 2026-03-25 04:28:18.259165 | orchestrator | Wednesday 25 March 2026 04:27:31 +0000 (0:00:02.242) 0:02:50.783 ******* 2026-03-25 04:28:18.259169 | orchestrator | changed: [testbed-node-0] 2026-03-25 04:28:18.259172 | orchestrator | changed: 
[testbed-node-1] 2026-03-25 04:28:18.259176 | orchestrator | changed: [testbed-node-2] 2026-03-25 04:28:18.259180 | orchestrator | 2026-03-25 04:28:18.259184 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] ***** 2026-03-25 04:28:18.259187 | orchestrator | Wednesday 25 March 2026 04:27:33 +0000 (0:00:01.983) 0:02:52.766 ******* 2026-03-25 04:28:18.259191 | orchestrator | changed: [testbed-node-0] 2026-03-25 04:28:18.259195 | orchestrator | changed: [testbed-node-1] 2026-03-25 04:28:18.259198 | orchestrator | changed: [testbed-node-2] 2026-03-25 04:28:18.259202 | orchestrator | 2026-03-25 04:28:18.259206 | orchestrator | TASK [k3s_server : Create kubectl symlink] ************************************* 2026-03-25 04:28:18.259209 | orchestrator | Wednesday 25 March 2026 04:27:36 +0000 (0:00:02.061) 0:02:54.828 ******* 2026-03-25 04:28:18.259213 | orchestrator | skipping: [testbed-node-0] 2026-03-25 04:28:18.259229 | orchestrator | skipping: [testbed-node-1] 2026-03-25 04:28:18.259233 | orchestrator | skipping: [testbed-node-2] 2026-03-25 04:28:18.259243 | orchestrator | 2026-03-25 04:28:18.259247 | orchestrator | TASK [k3s_server : Create crictl symlink] ************************************** 2026-03-25 04:28:18.259251 | orchestrator | Wednesday 25 March 2026 04:27:37 +0000 (0:00:01.483) 0:02:56.312 ******* 2026-03-25 04:28:18.259255 | orchestrator | skipping: [testbed-node-0] 2026-03-25 04:28:18.259258 | orchestrator | skipping: [testbed-node-1] 2026-03-25 04:28:18.259262 | orchestrator | skipping: [testbed-node-2] 2026-03-25 04:28:18.259266 | orchestrator | 2026-03-25 04:28:18.259269 | orchestrator | TASK [k3s_server : Get contents of manifests folder] *************************** 2026-03-25 04:28:18.259273 | orchestrator | Wednesday 25 March 2026 04:27:39 +0000 (0:00:01.567) 0:02:57.880 ******* 2026-03-25 04:28:18.259277 | orchestrator | ok: [testbed-node-1] 2026-03-25 04:28:18.259281 | orchestrator | ok: [testbed-node-0] 
2026-03-25 04:28:18.259284 | orchestrator | ok: [testbed-node-2] 2026-03-25 04:28:18.259288 | orchestrator | 2026-03-25 04:28:18.259292 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] *************************** 2026-03-25 04:28:18.259295 | orchestrator | Wednesday 25 March 2026 04:27:40 +0000 (0:00:01.863) 0:02:59.743 ******* 2026-03-25 04:28:18.259299 | orchestrator | ok: [testbed-node-0] 2026-03-25 04:28:18.259303 | orchestrator | ok: [testbed-node-2] 2026-03-25 04:28:18.259307 | orchestrator | ok: [testbed-node-1] 2026-03-25 04:28:18.259310 | orchestrator | 2026-03-25 04:28:18.259314 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] *** 2026-03-25 04:28:18.259319 | orchestrator | Wednesday 25 March 2026 04:27:42 +0000 (0:00:01.913) 0:03:01.657 ******* 2026-03-25 04:28:18.259323 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-03-25 04:28:18.259330 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-03-25 04:28:18.259334 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-03-25 04:28:18.259338 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-03-25 04:28:18.259342 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-03-25 04:28:18.259345 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-03-25 04:28:18.259350 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-03-25 04:28:18.259353 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-03-25 04:28:18.259357 | 
orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml) 2026-03-25 04:28:18.259361 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-03-25 04:28:18.259365 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-03-25 04:28:18.259368 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml) 2026-03-25 04:28:18.259382 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-03-25 04:28:18.259386 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-03-25 04:28:18.259390 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-03-25 04:28:18.259394 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-03-25 04:28:18.259397 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-03-25 04:28:18.259401 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-03-25 04:28:18.259405 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-03-25 04:28:18.259409 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-03-25 04:28:18.259413 | orchestrator | 2026-03-25 04:28:18.259417 | orchestrator | PLAY [Deploy k3s worker nodes] ************************************************* 2026-03-25 04:28:18.259421 | orchestrator | 2026-03-25 04:28:18.259424 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] *** 2026-03-25 04:28:18.259428 | orchestrator | Wednesday 25 March 2026 04:27:47 +0000 (0:00:04.478) 0:03:06.135 ******* 
2026-03-25 04:28:18.259432 | orchestrator | ok: [testbed-node-3] 2026-03-25 04:28:18.259436 | orchestrator | ok: [testbed-node-4] 2026-03-25 04:28:18.259439 | orchestrator | ok: [testbed-node-5] 2026-03-25 04:28:18.259443 | orchestrator | 2026-03-25 04:28:18.259447 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] ******************************* 2026-03-25 04:28:18.259451 | orchestrator | Wednesday 25 March 2026 04:27:49 +0000 (0:00:01.690) 0:03:07.826 ******* 2026-03-25 04:28:18.259454 | orchestrator | ok: [testbed-node-3] 2026-03-25 04:28:18.259458 | orchestrator | ok: [testbed-node-4] 2026-03-25 04:28:18.259462 | orchestrator | ok: [testbed-node-5] 2026-03-25 04:28:18.259465 | orchestrator | 2026-03-25 04:28:18.259469 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ****************************** 2026-03-25 04:28:18.259473 | orchestrator | Wednesday 25 March 2026 04:27:51 +0000 (0:00:01.975) 0:03:09.801 ******* 2026-03-25 04:28:18.259477 | orchestrator | ok: [testbed-node-3] 2026-03-25 04:28:18.259480 | orchestrator | ok: [testbed-node-4] 2026-03-25 04:28:18.259484 | orchestrator | ok: [testbed-node-5] 2026-03-25 04:28:18.259488 | orchestrator | 2026-03-25 04:28:18.259491 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] ********************** 2026-03-25 04:28:18.259495 | orchestrator | Wednesday 25 March 2026 04:27:52 +0000 (0:00:01.933) 0:03:11.734 ******* 2026-03-25 04:28:18.259503 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-25 04:28:18.259507 | orchestrator | 2026-03-25 04:28:18.259510 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] ************************* 2026-03-25 04:28:18.259514 | orchestrator | Wednesday 25 March 2026 04:27:55 +0000 (0:00:02.148) 0:03:13.883 ******* 2026-03-25 04:28:18.259518 | orchestrator | skipping: [testbed-node-3] 2026-03-25 04:28:18.259521 | orchestrator | 
skipping: [testbed-node-4] 2026-03-25 04:28:18.259525 | orchestrator | skipping: [testbed-node-5] 2026-03-25 04:28:18.259529 | orchestrator | 2026-03-25 04:28:18.259532 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] ******************************* 2026-03-25 04:28:18.259536 | orchestrator | Wednesday 25 March 2026 04:27:56 +0000 (0:00:01.715) 0:03:15.598 ******* 2026-03-25 04:28:18.259540 | orchestrator | skipping: [testbed-node-3] 2026-03-25 04:28:18.259544 | orchestrator | skipping: [testbed-node-4] 2026-03-25 04:28:18.259549 | orchestrator | skipping: [testbed-node-5] 2026-03-25 04:28:18.259553 | orchestrator | 2026-03-25 04:28:18.259591 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] ********************************** 2026-03-25 04:28:18.259596 | orchestrator | Wednesday 25 March 2026 04:27:58 +0000 (0:00:01.773) 0:03:17.372 ******* 2026-03-25 04:28:18.259606 | orchestrator | skipping: [testbed-node-3] 2026-03-25 04:28:18.259610 | orchestrator | skipping: [testbed-node-4] 2026-03-25 04:28:18.259615 | orchestrator | skipping: [testbed-node-5] 2026-03-25 04:28:18.259619 | orchestrator | 2026-03-25 04:28:18.259624 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] *************************** 2026-03-25 04:28:18.259628 | orchestrator | Wednesday 25 March 2026 04:28:00 +0000 (0:00:01.868) 0:03:19.240 ******* 2026-03-25 04:28:18.259633 | orchestrator | ok: [testbed-node-3] 2026-03-25 04:28:18.259637 | orchestrator | ok: [testbed-node-4] 2026-03-25 04:28:18.259642 | orchestrator | ok: [testbed-node-5] 2026-03-25 04:28:18.259646 | orchestrator | 2026-03-25 04:28:18.259651 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] *************************** 2026-03-25 04:28:18.259655 | orchestrator | Wednesday 25 March 2026 04:28:02 +0000 (0:00:01.901) 0:03:21.142 ******* 2026-03-25 04:28:18.259660 | orchestrator | ok: [testbed-node-3] 2026-03-25 04:28:18.259664 | orchestrator | ok: [testbed-node-4] 
2026-03-25 04:28:18.259668 | orchestrator | ok: [testbed-node-5] 2026-03-25 04:28:18.259673 | orchestrator | 2026-03-25 04:28:18.259677 | orchestrator | TASK [k3s_agent : Configure the k3s service] *********************************** 2026-03-25 04:28:18.259682 | orchestrator | Wednesday 25 March 2026 04:28:04 +0000 (0:00:02.534) 0:03:23.676 ******* 2026-03-25 04:28:18.259686 | orchestrator | ok: [testbed-node-3] 2026-03-25 04:28:18.259691 | orchestrator | ok: [testbed-node-4] 2026-03-25 04:28:18.259695 | orchestrator | ok: [testbed-node-5] 2026-03-25 04:28:18.259699 | orchestrator | 2026-03-25 04:28:18.259704 | orchestrator | TASK [k3s_agent : Manage k3s service] ****************************************** 2026-03-25 04:28:18.259708 | orchestrator | Wednesday 25 March 2026 04:28:07 +0000 (0:00:02.492) 0:03:26.168 ******* 2026-03-25 04:28:18.259713 | orchestrator | changed: [testbed-node-3] 2026-03-25 04:28:18.259717 | orchestrator | changed: [testbed-node-4] 2026-03-25 04:28:18.259722 | orchestrator | changed: [testbed-node-5] 2026-03-25 04:28:18.259726 | orchestrator | 2026-03-25 04:28:18.259731 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2026-03-25 04:28:18.259736 | orchestrator | 2026-03-25 04:28:18.259740 | orchestrator | TASK [Get home directory of operator user] ************************************* 2026-03-25 04:28:18.259745 | orchestrator | Wednesday 25 March 2026 04:28:15 +0000 (0:00:08.410) 0:03:34.579 ******* 2026-03-25 04:28:18.259749 | orchestrator | ok: [testbed-manager] 2026-03-25 04:28:18.259753 | orchestrator | 2026-03-25 04:28:18.259758 | orchestrator | TASK [Create .kube directory] ************************************************** 2026-03-25 04:28:18.259765 | orchestrator | Wednesday 25 March 2026 04:28:18 +0000 (0:00:02.456) 0:03:37.036 ******* 2026-03-25 04:29:35.299407 | orchestrator | ok: [testbed-manager] 2026-03-25 04:29:35.299553 | orchestrator | 2026-03-25 04:29:35.299578 | 
orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-03-25 04:29:35.299633 | orchestrator | Wednesday 25 March 2026 04:28:19 +0000 (0:00:01.564) 0:03:38.601 ******* 2026-03-25 04:29:35.299655 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-03-25 04:29:35.299673 | orchestrator | 2026-03-25 04:29:35.299690 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-03-25 04:29:35.299726 | orchestrator | Wednesday 25 March 2026 04:28:21 +0000 (0:00:01.729) 0:03:40.331 ******* 2026-03-25 04:29:35.299744 | orchestrator | changed: [testbed-manager] 2026-03-25 04:29:35.299762 | orchestrator | 2026-03-25 04:29:35.299780 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2026-03-25 04:29:35.299798 | orchestrator | Wednesday 25 March 2026 04:28:23 +0000 (0:00:02.066) 0:03:42.397 ******* 2026-03-25 04:29:35.299815 | orchestrator | changed: [testbed-manager] 2026-03-25 04:29:35.299832 | orchestrator | 2026-03-25 04:29:35.299851 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2026-03-25 04:29:35.299902 | orchestrator | Wednesday 25 March 2026 04:28:25 +0000 (0:00:01.889) 0:03:44.287 ******* 2026-03-25 04:29:35.299920 | orchestrator | changed: [testbed-manager -> localhost] 2026-03-25 04:29:35.299939 | orchestrator | 2026-03-25 04:29:35.299959 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2026-03-25 04:29:35.299978 | orchestrator | Wednesday 25 March 2026 04:28:28 +0000 (0:00:03.500) 0:03:47.788 ******* 2026-03-25 04:29:35.299998 | orchestrator | changed: [testbed-manager -> localhost] 2026-03-25 04:29:35.300019 | orchestrator | 2026-03-25 04:29:35.300039 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2026-03-25 04:29:35.300060 | orchestrator | Wednesday 25 March 
2026 04:28:31 +0000 (0:00:02.203) 0:03:49.991 ******* 2026-03-25 04:29:35.300079 | orchestrator | ok: [testbed-manager] 2026-03-25 04:29:35.300098 | orchestrator | 2026-03-25 04:29:35.300116 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2026-03-25 04:29:35.300154 | orchestrator | Wednesday 25 March 2026 04:28:32 +0000 (0:00:01.571) 0:03:51.562 ******* 2026-03-25 04:29:35.300177 | orchestrator | ok: [testbed-manager] 2026-03-25 04:29:35.300199 | orchestrator | 2026-03-25 04:29:35.300220 | orchestrator | PLAY [Apply role kubectl] ****************************************************** 2026-03-25 04:29:35.300241 | orchestrator | 2026-03-25 04:29:35.300263 | orchestrator | TASK [kubectl : Gather variables for each operating system] ******************** 2026-03-25 04:29:35.300285 | orchestrator | Wednesday 25 March 2026 04:28:34 +0000 (0:00:01.723) 0:03:53.286 ******* 2026-03-25 04:29:35.300304 | orchestrator | ok: [testbed-manager] 2026-03-25 04:29:35.300324 | orchestrator | 2026-03-25 04:29:35.300343 | orchestrator | TASK [kubectl : Include distribution specific install tasks] ******************* 2026-03-25 04:29:35.300362 | orchestrator | Wednesday 25 March 2026 04:28:35 +0000 (0:00:01.263) 0:03:54.549 ******* 2026-03-25 04:29:35.300382 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager 2026-03-25 04:29:35.300402 | orchestrator | 2026-03-25 04:29:35.300422 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ****************** 2026-03-25 04:29:35.300440 | orchestrator | Wednesday 25 March 2026 04:28:37 +0000 (0:00:01.619) 0:03:56.168 ******* 2026-03-25 04:29:35.300457 | orchestrator | ok: [testbed-manager] 2026-03-25 04:29:35.300474 | orchestrator | 2026-03-25 04:29:35.300490 | orchestrator | TASK [kubectl : Install apt-transport-https package] *************************** 2026-03-25 04:29:35.300508 | orchestrator | Wednesday 25 March 2026 
04:28:39 +0000 (0:00:02.067) 0:03:58.236 ******* 2026-03-25 04:29:35.300529 | orchestrator | ok: [testbed-manager] 2026-03-25 04:29:35.300546 | orchestrator | 2026-03-25 04:29:35.300562 | orchestrator | TASK [kubectl : Add repository gpg key] **************************************** 2026-03-25 04:29:35.300577 | orchestrator | Wednesday 25 March 2026 04:28:42 +0000 (0:00:03.251) 0:04:01.488 ******* 2026-03-25 04:29:35.300593 | orchestrator | ok: [testbed-manager] 2026-03-25 04:29:35.300609 | orchestrator | 2026-03-25 04:29:35.300628 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************ 2026-03-25 04:29:35.300668 | orchestrator | Wednesday 25 March 2026 04:28:44 +0000 (0:00:01.599) 0:04:03.088 ******* 2026-03-25 04:29:35.300685 | orchestrator | ok: [testbed-manager] 2026-03-25 04:29:35.300701 | orchestrator | 2026-03-25 04:29:35.300716 | orchestrator | TASK [kubectl : Add repository Debian] ***************************************** 2026-03-25 04:29:35.300732 | orchestrator | Wednesday 25 March 2026 04:28:45 +0000 (0:00:01.630) 0:04:04.718 ******* 2026-03-25 04:29:35.300747 | orchestrator | ok: [testbed-manager] 2026-03-25 04:29:35.300763 | orchestrator | 2026-03-25 04:29:35.300780 | orchestrator | TASK [kubectl : Install required packages] ************************************* 2026-03-25 04:29:35.300796 | orchestrator | Wednesday 25 March 2026 04:28:47 +0000 (0:00:01.834) 0:04:06.553 ******* 2026-03-25 04:29:35.300812 | orchestrator | ok: [testbed-manager] 2026-03-25 04:29:35.300828 | orchestrator | 2026-03-25 04:29:35.300844 | orchestrator | TASK [kubectl : Remove kubectl symlink] **************************************** 2026-03-25 04:29:35.300910 | orchestrator | Wednesday 25 March 2026 04:28:50 +0000 (0:00:02.866) 0:04:09.419 ******* 2026-03-25 04:29:35.300932 | orchestrator | ok: [testbed-manager] 2026-03-25 04:29:35.300949 | orchestrator | 2026-03-25 04:29:35.300965 | orchestrator | PLAY [Run post actions on master 
nodes] **************************************** 2026-03-25 04:29:35.300981 | orchestrator | 2026-03-25 04:29:35.300996 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] *** 2026-03-25 04:29:35.301013 | orchestrator | Wednesday 25 March 2026 04:28:52 +0000 (0:00:01.779) 0:04:11.199 ******* 2026-03-25 04:29:35.301028 | orchestrator | ok: [testbed-node-0] 2026-03-25 04:29:35.301044 | orchestrator | ok: [testbed-node-1] 2026-03-25 04:29:35.301060 | orchestrator | ok: [testbed-node-2] 2026-03-25 04:29:35.301076 | orchestrator | 2026-03-25 04:29:35.301093 | orchestrator | TASK [k3s_server_post : Deploy calico] ***************************************** 2026-03-25 04:29:35.301111 | orchestrator | Wednesday 25 March 2026 04:28:53 +0000 (0:00:01.482) 0:04:12.681 ******* 2026-03-25 04:29:35.301130 | orchestrator | skipping: [testbed-node-0] 2026-03-25 04:29:35.301148 | orchestrator | skipping: [testbed-node-1] 2026-03-25 04:29:35.301164 | orchestrator | skipping: [testbed-node-2] 2026-03-25 04:29:35.301179 | orchestrator | 2026-03-25 04:29:35.301231 | orchestrator | TASK [k3s_server_post : Deploy cilium] ***************************************** 2026-03-25 04:29:35.301249 | orchestrator | Wednesday 25 March 2026 04:28:55 +0000 (0:00:01.827) 0:04:14.509 ******* 2026-03-25 04:29:35.301265 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-25 04:29:35.301281 | orchestrator | 2026-03-25 04:29:35.301298 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ****************** 2026-03-25 04:29:35.301315 | orchestrator | Wednesday 25 March 2026 04:28:57 +0000 (0:00:01.961) 0:04:16.470 ******* 2026-03-25 04:29:35.301331 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-03-25 04:29:35.301347 | orchestrator | 2026-03-25 04:29:35.301364 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] 
********************* 2026-03-25 04:29:35.301383 | orchestrator | Wednesday 25 March 2026 04:28:59 +0000 (0:00:02.111) 0:04:18.582 ******* 2026-03-25 04:29:35.301400 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-25 04:29:35.301415 | orchestrator | 2026-03-25 04:29:35.301431 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************ 2026-03-25 04:29:35.301446 | orchestrator | Wednesday 25 March 2026 04:29:02 +0000 (0:00:03.170) 0:04:21.753 ******* 2026-03-25 04:29:35.301461 | orchestrator | skipping: [testbed-node-0] 2026-03-25 04:29:35.301478 | orchestrator | 2026-03-25 04:29:35.301494 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] ********************** 2026-03-25 04:29:35.301511 | orchestrator | Wednesday 25 March 2026 04:29:04 +0000 (0:00:01.273) 0:04:23.027 ******* 2026-03-25 04:29:35.301526 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-25 04:29:35.301542 | orchestrator | 2026-03-25 04:29:35.301557 | orchestrator | TASK [k3s_server_post : Check Cilium version] ********************************** 2026-03-25 04:29:35.301573 | orchestrator | Wednesday 25 March 2026 04:29:06 +0000 (0:00:02.133) 0:04:25.160 ******* 2026-03-25 04:29:35.301605 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-25 04:29:35.301622 | orchestrator | 2026-03-25 04:29:35.301638 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************ 2026-03-25 04:29:35.301653 | orchestrator | Wednesday 25 March 2026 04:29:08 +0000 (0:00:02.388) 0:04:27.549 ******* 2026-03-25 04:29:35.301669 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-25 04:29:35.301686 | orchestrator | 2026-03-25 04:29:35.301702 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] ********************** 2026-03-25 04:29:35.301718 | orchestrator | Wednesday 25 March 2026 04:29:09 +0000 (0:00:01.232) 0:04:28.782 ******* 2026-03-25 04:29:35.301733 | orchestrator | ok: 
[testbed-node-0 -> localhost]
2026-03-25 04:29:35.301748 | orchestrator | 
2026-03-25 04:29:35.301765 | orchestrator | TASK [k3s_server_post : Log result] ********************************************
2026-03-25 04:29:35.301782 | orchestrator | Wednesday 25 March 2026 04:29:11 +0000 (0:00:01.192) 0:04:29.975 *******
2026-03-25 04:29:35.301798 | orchestrator | ok: [testbed-node-0 -> localhost] => {
2026-03-25 04:29:35.301814 | orchestrator |  "msg": "Installed Cilium version: 1.18.2, Target Cilium version: v1.18.2, Update needed: False\n"
2026-03-25 04:29:35.301834 | orchestrator | }
2026-03-25 04:29:35.301851 | orchestrator | 
2026-03-25 04:29:35.301949 | orchestrator | TASK [k3s_server_post : Install Cilium] ****************************************
2026-03-25 04:29:35.301970 | orchestrator | Wednesday 25 March 2026 04:29:12 +0000 (0:00:01.215) 0:04:31.190 *******
2026-03-25 04:29:35.301988 | orchestrator | skipping: [testbed-node-0]
2026-03-25 04:29:35.302006 | orchestrator | 
2026-03-25 04:29:35.302093 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] *****************************
2026-03-25 04:29:35.302111 | orchestrator | Wednesday 25 March 2026 04:29:13 +0000 (0:00:01.202) 0:04:32.393 *******
2026-03-25 04:29:35.302129 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator)
2026-03-25 04:29:35.302147 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium)
2026-03-25 04:29:35.302164 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay)
2026-03-25 04:29:35.302182 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui)
2026-03-25 04:29:35.302200 | orchestrator | 
2026-03-25 04:29:35.302219 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************
2026-03-25 04:29:35.302238 | orchestrator | Wednesday 25 March 2026 04:29:19 +0000 (0:00:05.836) 0:04:38.229 *******
2026-03-25 04:29:35.302255 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-25 04:29:35.302273 | orchestrator | 
2026-03-25 04:29:35.302309 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ********************
2026-03-25 04:29:35.302328 | orchestrator | Wednesday 25 March 2026 04:29:22 +0000 (0:00:02.613) 0:04:40.843 *******
2026-03-25 04:29:35.302346 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-03-25 04:29:35.302364 | orchestrator | 
2026-03-25 04:29:35.302381 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] ***********************************
2026-03-25 04:29:35.302398 | orchestrator | Wednesday 25 March 2026 04:29:24 +0000 (0:00:02.707) 0:04:43.551 *******
2026-03-25 04:29:35.302415 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-03-25 04:29:35.302433 | orchestrator | 
2026-03-25 04:29:35.302451 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] ***
2026-03-25 04:29:35.302469 | orchestrator | Wednesday 25 March 2026 04:29:28 +0000 (0:00:04.227) 0:04:47.778 *******
2026-03-25 04:29:35.302488 | orchestrator | skipping: [testbed-node-0]
2026-03-25 04:29:35.302506 | orchestrator | 
2026-03-25 04:29:35.302525 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] *************************
2026-03-25 04:29:35.302541 | orchestrator | Wednesday 25 March 2026 04:29:30 +0000 (0:00:01.329) 0:04:49.107 *******
2026-03-25 04:29:35.302559 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io)
2026-03-25 04:29:35.302579 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io)
2026-03-25 04:29:35.302599 | orchestrator | 
2026-03-25 04:29:35.302635 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] ***********************************
2026-03-25 04:29:35.302654 | orchestrator | Wednesday 25 March 2026 04:29:33 +0000 (0:00:03.489) 0:04:52.597 *******
2026-03-25 04:29:35.302673 | orchestrator | skipping: [testbed-node-0]
2026-03-25 04:29:35.302714 | orchestrator | skipping: [testbed-node-1]
2026-03-25 04:30:03.125298 | orchestrator | skipping: [testbed-node-2]
2026-03-25 04:30:03.125385 | orchestrator | 
2026-03-25 04:30:03.125393 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] ***************
2026-03-25 04:30:03.125400 | orchestrator | Wednesday 25 March 2026 04:29:35 +0000 (0:00:01.478) 0:04:54.076 *******
2026-03-25 04:30:03.125405 | orchestrator | ok: [testbed-node-0]
2026-03-25 04:30:03.125410 | orchestrator | ok: [testbed-node-1]
2026-03-25 04:30:03.125415 | orchestrator | ok: [testbed-node-2]
2026-03-25 04:30:03.125420 | orchestrator | 
2026-03-25 04:30:03.125438 | orchestrator | PLAY [Apply role k9s] **********************************************************
2026-03-25 04:30:03.125443 | orchestrator | 
2026-03-25 04:30:03.125448 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************
2026-03-25 04:30:03.125452 | orchestrator | Wednesday 25 March 2026 04:29:37 +0000 (0:00:02.236) 0:04:56.313 *******
2026-03-25 04:30:03.125457 | orchestrator | ok: [testbed-manager]
2026-03-25 04:30:03.125462 | orchestrator | 
2026-03-25 04:30:03.125466 | orchestrator | TASK [k9s : Include distribution specific install tasks] ***********************
2026-03-25 04:30:03.125471 | orchestrator | Wednesday 25 March 2026 04:29:38 +0000 (0:00:01.149) 0:04:57.463 *******
2026-03-25 04:30:03.125476 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager
2026-03-25 04:30:03.125481 | orchestrator | 
2026-03-25 04:30:03.125486 | orchestrator | TASK [k9s : Install k9s packages] **********************************************
2026-03-25 04:30:03.125491 | orchestrator | Wednesday 25 March 2026 04:29:40 +0000 (0:00:01.491) 0:04:58.955 *******
2026-03-25 04:30:03.125495 | orchestrator | ok: [testbed-manager]
2026-03-25 04:30:03.125500 | orchestrator | 
2026-03-25 04:30:03.125504 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] *****************
2026-03-25 04:30:03.125509 | orchestrator | 
2026-03-25 04:30:03.125513 | orchestrator | TASK [Merge labels, annotations, and taints] ***********************************
2026-03-25 04:30:03.125518 | orchestrator | Wednesday 25 March 2026 04:29:45 +0000 (0:00:05.523) 0:05:04.478 *******
2026-03-25 04:30:03.125522 | orchestrator | ok: [testbed-node-3]
2026-03-25 04:30:03.125527 | orchestrator | ok: [testbed-node-4]
2026-03-25 04:30:03.125532 | orchestrator | ok: [testbed-node-5]
2026-03-25 04:30:03.125536 | orchestrator | ok: [testbed-node-0]
2026-03-25 04:30:03.125540 | orchestrator | ok: [testbed-node-1]
2026-03-25 04:30:03.125545 | orchestrator | ok: [testbed-node-2]
2026-03-25 04:30:03.125550 | orchestrator | 
2026-03-25 04:30:03.125555 | orchestrator | TASK [Manage labels] ***********************************************************
2026-03-25 04:30:03.125559 | orchestrator | Wednesday 25 March 2026 04:29:47 +0000 (0:00:02.054) 0:05:06.534 *******
2026-03-25 04:30:03.125564 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-03-25 04:30:03.125569 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-03-25 04:30:03.125573 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-03-25 04:30:03.125578 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-03-25 04:30:03.125582 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-03-25 04:30:03.125587 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-03-25 04:30:03.125591 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-03-25 04:30:03.125596 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-03-25 04:30:03.125600 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled)
2026-03-25 04:30:03.125619 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-03-25 04:30:03.125624 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled)
2026-03-25 04:30:03.125629 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2026-03-25 04:30:03.125633 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2026-03-25 04:30:03.125638 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled)
2026-03-25 04:30:03.125642 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2026-03-25 04:30:03.125647 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2026-03-25 04:30:03.125651 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2026-03-25 04:30:03.125656 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2026-03-25 04:30:03.125660 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2026-03-25 04:30:03.125665 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2026-03-25 04:30:03.125669 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2026-03-25 04:30:03.125673 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2026-03-25 04:30:03.125678 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2026-03-25 04:30:03.125682 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2026-03-25 04:30:03.125687 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2026-03-25 04:30:03.125691 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2026-03-25 04:30:03.125706 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2026-03-25 04:30:03.125711 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2026-03-25 04:30:03.125716 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2026-03-25 04:30:03.125721 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2026-03-25 04:30:03.125725 | orchestrator | 
2026-03-25 04:30:03.125733 | orchestrator | TASK [Manage annotations] ******************************************************
2026-03-25 04:30:03.125738 | orchestrator | Wednesday 25 March 2026 04:29:58 +0000 (0:00:10.439) 0:05:16.974 *******
2026-03-25 04:30:03.125742 | orchestrator | skipping: [testbed-node-3]
2026-03-25 04:30:03.125747 | orchestrator | skipping: [testbed-node-4]
2026-03-25 04:30:03.125751 | orchestrator | skipping: [testbed-node-5]
2026-03-25 04:30:03.125756 | orchestrator | skipping: [testbed-node-0]
2026-03-25 04:30:03.125760 | orchestrator | skipping: [testbed-node-1]
2026-03-25 04:30:03.125765 | orchestrator | skipping: [testbed-node-2]
2026-03-25 04:30:03.125769 | orchestrator | 
2026-03-25 04:30:03.125774 | orchestrator | TASK [Manage taints] ***********************************************************
2026-03-25 04:30:03.125778 | orchestrator | Wednesday 25 March 2026 04:30:00 +0000 (0:00:02.366) 0:05:19.340 *******
2026-03-25 04:30:03.125783 | orchestrator | skipping: [testbed-node-3]
2026-03-25 04:30:03.125787 | orchestrator | skipping: [testbed-node-4]
2026-03-25 04:30:03.125792 | orchestrator | skipping: [testbed-node-5]
2026-03-25 04:30:03.125796 | orchestrator | skipping: [testbed-node-0]
2026-03-25 04:30:03.125801 | orchestrator | skipping: [testbed-node-1]
2026-03-25 04:30:03.125805 | orchestrator | skipping: [testbed-node-2]
2026-03-25 04:30:03.125810 | orchestrator | 
2026-03-25 04:30:03.125814 | orchestrator | PLAY RECAP *********************************************************************
2026-03-25 04:30:03.125819 | orchestrator | testbed-manager : ok=21  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-25 04:30:03.125830 | orchestrator | testbed-node-0 : ok=53  changed=14  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2026-03-25 04:30:03.125835 | orchestrator | testbed-node-1 : ok=38  changed=9  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2026-03-25 04:30:03.125839 | orchestrator | testbed-node-2 : ok=38  changed=9  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2026-03-25 04:30:03.125844 | orchestrator | testbed-node-3 : ok=16  changed=1  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-03-25 04:30:03.125848 | orchestrator | testbed-node-4 : ok=16  changed=1  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-03-25 04:30:03.125853 | orchestrator | testbed-node-5 : ok=16  changed=1  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-03-25 04:30:03.125857 | orchestrator | 
2026-03-25 04:30:03.125862 | orchestrator | 
2026-03-25 04:30:03.125867 | orchestrator | TASKS RECAP ********************************************************************
2026-03-25 04:30:03.125871 | orchestrator | Wednesday 25 March 2026 04:30:03 +0000 (0:00:02.535) 0:05:21.875 *******
2026-03-25 04:30:03.125876 | orchestrator | ===============================================================================
2026-03-25 04:30:03.125880 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 42.48s
2026-03-25 04:30:03.125885 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 23.52s
2026-03-25 04:30:03.125890 | orchestrator | Manage labels ---------------------------------------------------------- 10.44s
2026-03-25 04:30:03.125895 | orchestrator | k3s_agent : Manage k3s service ------------------------------------------ 8.41s
2026-03-25 04:30:03.125899 | orchestrator | k3s_server_post : Wait for Cilium resources ----------------------------- 5.84s
2026-03-25 04:30:03.125904 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 5.52s
2026-03-25 04:30:03.125908 | orchestrator | k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites --- 4.63s
2026-03-25 04:30:03.125913 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 4.48s
2026-03-25 04:30:03.125917 | orchestrator | k3s_server_post : Apply BGP manifests ----------------------------------- 4.23s
2026-03-25 04:30:03.125922 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 3.74s
2026-03-25 04:30:03.125926 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 3.50s
2026-03-25 04:30:03.125931 | orchestrator | k3s_server_post : Test for BGP config resources ------------------------- 3.49s
2026-03-25 04:30:03.125935 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 3.29s
2026-03-25 04:30:03.125940 | orchestrator | kubectl : Install apt-transport-https package --------------------------- 3.25s
2026-03-25 04:30:03.125944 | orchestrator | k3s_server_post : Wait for connectivity to kube VIP --------------------- 3.17s
2026-03-25 04:30:03.125949 | orchestrator | k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured --- 3.09s
2026-03-25 04:30:03.125953 | orchestrator | kubectl : Install required packages ------------------------------------- 2.87s
2026-03-25 04:30:03.125958 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 2.71s
2026-03-25 04:30:03.125965 | orchestrator | k3s_server_post : Copy BGP manifests to first master -------------------- 2.71s
2026-03-25 04:30:03.633279 | orchestrator | k3s_prereq : Enable IPv6 forwarding ------------------------------------- 2.70s
2026-03-25 04:30:04.016687 | orchestrator | + [[ false == \f\a\l\s\e ]]
2026-03-25 04:30:04.016783 | orchestrator | + sh -c /opt/configuration/scripts/upgrade/200-infrastructure.sh
2026-03-25 04:30:04.023962 | orchestrator | + set -e
2026-03-25 04:30:04.024067 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-03-25 04:30:04.024099 | orchestrator | ++ export INTERACTIVE=false
2026-03-25 04:30:04.024107 | orchestrator | ++ INTERACTIVE=false
2026-03-25 04:30:04.024129 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-03-25 04:30:04.024136 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-03-25 04:30:04.024142 | orchestrator | + osism apply openstackclient
2026-03-25 04:30:16.297318 | orchestrator | 2026-03-25 04:30:16 | INFO  | Task 865d2682-c8ae-4247-ae5d-ccb3a35c937e (openstackclient) was prepared for execution.
2026-03-25 04:30:16.297424 | orchestrator | 2026-03-25 04:30:16 | INFO  | It takes a moment until task 865d2682-c8ae-4247-ae5d-ccb3a35c937e (openstackclient) has been started and output is visible here.
2026-03-25 04:30:54.194160 | orchestrator | 
2026-03-25 04:30:54.194293 | orchestrator | PLAY [Apply role openstackclient] **********************************************
2026-03-25 04:30:54.194305 | orchestrator | 
2026-03-25 04:30:54.194312 | orchestrator | TASK [osism.services.openstackclient : Include tasks] **************************
2026-03-25 04:30:54.194319 | orchestrator | Wednesday 25 March 2026 04:30:23 +0000 (0:00:01.841) 0:00:01.841 *******
2026-03-25 04:30:54.194326 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager
2026-03-25 04:30:54.194335 | orchestrator | 
2026-03-25 04:30:54.194341 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************
2026-03-25 04:30:54.194347 | orchestrator | Wednesday 25 March 2026 04:30:24 +0000 (0:00:01.866) 0:00:03.707 *******
2026-03-25 04:30:54.194354 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/openstack)
2026-03-25 04:30:54.194362 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient/data)
2026-03-25 04:30:54.194368 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient)
2026-03-25 04:30:54.194375 | orchestrator | 
2026-03-25 04:30:54.194381 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] ***********
2026-03-25 04:30:54.194387 | orchestrator | Wednesday 25 March 2026 04:30:27 +0000 (0:00:02.325) 0:00:06.033 *******
2026-03-25 04:30:54.194394 | orchestrator | changed: [testbed-manager]
2026-03-25 04:30:54.194400 | orchestrator | 
2026-03-25 04:30:54.194406 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] *********
2026-03-25 04:30:54.194412 | orchestrator | Wednesday 25 March 2026 04:30:29 +0000 (0:00:02.417) 0:00:08.450 *******
2026-03-25 04:30:54.194419 | orchestrator | ok: [testbed-manager]
2026-03-25 04:30:54.194427 | orchestrator | 
2026-03-25 04:30:54.194433 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] **********
2026-03-25 04:30:54.194439 | orchestrator | Wednesday 25 March 2026 04:30:31 +0000 (0:00:02.095) 0:00:10.546 *******
2026-03-25 04:30:54.194445 | orchestrator | ok: [testbed-manager]
2026-03-25 04:30:54.194451 | orchestrator | 
2026-03-25 04:30:54.194457 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] **********
2026-03-25 04:30:54.194463 | orchestrator | Wednesday 25 March 2026 04:30:33 +0000 (0:00:02.025) 0:00:12.571 *******
2026-03-25 04:30:54.194470 | orchestrator | ok: [testbed-manager]
2026-03-25 04:30:54.194476 | orchestrator | 
2026-03-25 04:30:54.194482 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] ***
2026-03-25 04:30:54.194488 | orchestrator | Wednesday 25 March 2026 04:30:35 +0000 (0:00:01.451) 0:00:14.023 *******
2026-03-25 04:30:54.194494 | orchestrator | changed: [testbed-manager]
2026-03-25 04:30:54.194500 | orchestrator | 
2026-03-25 04:30:54.194507 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] ***
2026-03-25 04:30:54.194513 | orchestrator | Wednesday 25 March 2026 04:30:47 +0000 (0:00:12.722) 0:00:26.745 *******
2026-03-25 04:30:54.194519 | orchestrator | changed: [testbed-manager]
2026-03-25 04:30:54.194525 | orchestrator | 
2026-03-25 04:30:54.194531 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] ***
2026-03-25 04:30:54.194538 | orchestrator | Wednesday 25 March 2026 04:30:50 +0000 (0:00:02.178) 0:00:28.923 *******
2026-03-25 04:30:54.194544 | orchestrator | changed: [testbed-manager]
2026-03-25 04:30:54.194550 | orchestrator | 
2026-03-25 04:30:54.194556 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] ***
2026-03-25 04:30:54.194583 | orchestrator | Wednesday 25 March 2026 04:30:51 +0000 (0:00:01.653) 0:00:30.577 *******
2026-03-25 04:30:54.194590 | orchestrator | ok: [testbed-manager]
2026-03-25 04:30:54.194600 | orchestrator | 
2026-03-25 04:30:54.194611 | orchestrator | PLAY RECAP *********************************************************************
2026-03-25 04:30:54.194621 | orchestrator | testbed-manager : ok=10  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-25 04:30:54.194632 | orchestrator | 
2026-03-25 04:30:54.194641 | orchestrator | 
2026-03-25 04:30:54.194650 | orchestrator | TASKS RECAP ********************************************************************
2026-03-25 04:30:54.194660 | orchestrator | Wednesday 25 March 2026 04:30:53 +0000 (0:00:02.004) 0:00:32.581 *******
2026-03-25 04:30:54.194670 | orchestrator | ===============================================================================
2026-03-25 04:30:54.194679 | orchestrator | osism.services.openstackclient : Restart openstackclient service ------- 12.72s
2026-03-25 04:30:54.194688 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 2.42s
2026-03-25 04:30:54.194698 | orchestrator | osism.services.openstackclient : Create required directories ------------ 2.33s
2026-03-25 04:30:54.194707 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 2.18s
2026-03-25 04:30:54.194717 | orchestrator | osism.services.openstackclient : Manage openstackclient service --------- 2.10s
2026-03-25 04:30:54.194726 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 2.03s
2026-03-25 04:30:54.194735 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 2.00s
2026-03-25 04:30:54.194745 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 1.87s
2026-03-25 04:30:54.194754 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 1.65s
2026-03-25 04:30:54.194765 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 1.45s
2026-03-25 04:30:54.589483 | orchestrator | + osism apply -a upgrade common
2026-03-25 04:30:56.981465 | orchestrator | 2026-03-25 04:30:56 | INFO  | Task d2e7a400-b7b5-46a6-a713-04d4109102c8 (common) was prepared for execution.
2026-03-25 04:30:56.981603 | orchestrator | 2026-03-25 04:30:56 | INFO  | It takes a moment until task d2e7a400-b7b5-46a6-a713-04d4109102c8 (common) has been started and output is visible here.
2026-03-25 04:31:19.629980 | orchestrator | 
2026-03-25 04:31:19.630125 | orchestrator | PLAY [Apply role common] *******************************************************
2026-03-25 04:31:19.630137 | orchestrator | 
2026-03-25 04:31:19.630143 | orchestrator | TASK [common : include_tasks] **************************************************
2026-03-25 04:31:19.630150 | orchestrator | Wednesday 25 March 2026 04:31:04 +0000 (0:00:02.325) 0:00:02.325 *******
2026-03-25 04:31:19.630179 | orchestrator | included: /ansible/roles/common/tasks/upgrade.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-25 04:31:19.630188 | orchestrator | 
2026-03-25 04:31:19.630195 | orchestrator | TASK [common : Ensuring config directories exist] ******************************
2026-03-25 04:31:19.630201 | orchestrator | Wednesday 25 March 2026 04:31:08 +0000 (0:00:04.822) 0:00:07.147 *******
2026-03-25 04:31:19.630208 | orchestrator | ok: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron'])
2026-03-25 04:31:19.630215 | orchestrator | ok: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron'])
2026-03-25 04:31:19.630221 | orchestrator | ok: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron'])
2026-03-25 04:31:19.630228 | orchestrator | ok: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-03-25 04:31:19.630234 | orchestrator | ok: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron'])
2026-03-25 04:31:19.630241 | orchestrator | ok: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron'])
2026-03-25 04:31:19.630247 | orchestrator | ok: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-03-25 04:31:19.630273 | orchestrator | ok: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron'])
2026-03-25 04:31:19.630280 | orchestrator | ok: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-03-25 04:31:19.630287 | orchestrator | ok: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-03-25 04:31:19.630293 | orchestrator | ok: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-03-25 04:31:19.630300 | orchestrator | ok: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-03-25 04:31:19.630306 | orchestrator | ok: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron'])
2026-03-25 04:31:19.630312 | orchestrator | ok: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-03-25 04:31:19.630318 | orchestrator | ok: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-03-25 04:31:19.630325 | orchestrator | ok: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-03-25 04:31:19.630374 | orchestrator | ok: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-03-25 04:31:19.630381 | orchestrator | ok: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-03-25 04:31:19.630387 | orchestrator | ok: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-03-25 04:31:19.630393 | orchestrator | ok: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-03-25 04:31:19.630399 | orchestrator | ok: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-03-25 04:31:19.630406 | orchestrator | 
2026-03-25 04:31:19.630412 | orchestrator | TASK [common : include_tasks] **************************************************
2026-03-25 04:31:19.630418 | orchestrator | Wednesday 25 March 2026 04:31:13 +0000 (0:00:04.478) 0:00:11.626 *******
2026-03-25 04:31:19.630424 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-25 04:31:19.630432 | orchestrator | 
2026-03-25 04:31:19.630438 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] *********
2026-03-25 04:31:19.630444 | orchestrator | Wednesday 25 March 2026 04:31:16 +0000 (0:00:03.388) 0:00:15.014 *******
2026-03-25 04:31:19.630454 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-25 04:31:19.630475 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-25 04:31:19.630504 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-25 04:31:19.630513 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-25 04:31:19.630527 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-25 04:31:19.630535 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-25 04:31:19.630681 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-25 04:31:19.630690 | orchestrator | ok: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 04:31:19.630698 | orchestrator | ok: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 04:31:19.630717 | orchestrator | ok: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 04:31:22.533396 | orchestrator | ok: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 04:31:22.533495 | orchestrator | ok: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 04:31:22.533507 | orchestrator | ok: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 04:31:22.533531 | orchestrator | ok: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 04:31:22.533547 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 04:31:22.533556 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 04:31:22.533564 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 04:31:22.533587 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 04:31:22.533616 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 04:31:22.533625 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes':
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 04:31:22.533632 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 04:31:22.533640 | orchestrator | 2026-03-25 04:31:22.533649 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2026-03-25 04:31:22.533657 | orchestrator | Wednesday 25 March 2026 04:31:21 +0000 (0:00:04.725) 0:00:19.740 ******* 2026-03-25 04:31:22.533671 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-25 04:31:22.533681 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-25 04:31:22.533688 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 04:31:22.533697 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-25 04:31:22.533720 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  
2026-03-25 04:31:24.792983 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 04:31:24.793105 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 04:31:24.793131 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 04:31:24.793146 | orchestrator | skipping: [testbed-node-0] 2026-03-25 04:31:24.793204 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 04:31:24.793219 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-25 04:31:24.793232 | orchestrator | skipping: [testbed-node-1] 2026-03-25 04:31:24.793245 | orchestrator | skipping: [testbed-manager] 2026-03-25 04:31:24.793257 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-25 04:31:24.793293 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 04:31:24.793327 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-25 04:31:24.793341 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 04:31:24.793415 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}})  2026-03-25 04:31:24.793435 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-25 04:31:24.793449 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 04:31:24.793461 | orchestrator | skipping: [testbed-node-5] 2026-03-25 04:31:24.793474 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 04:31:24.793497 | orchestrator | skipping: [testbed-node-2] 2026-03-25 04:31:24.793511 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': 
'/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 04:31:24.793526 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 04:31:24.793540 | orchestrator | skipping: [testbed-node-3] 2026-03-25 04:31:24.793570 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 04:31:28.222748 | orchestrator | skipping: [testbed-node-4] 2026-03-25 04:31:28.222834 | orchestrator | 2026-03-25 04:31:28.222845 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2026-03-25 04:31:28.222853 | orchestrator | Wednesday 25 March 2026 04:31:24 +0000 (0:00:03.201) 0:00:22.942 ******* 2026-03-25 04:31:28.222862 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-25 04:31:28.222885 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-25 04:31:28.222893 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-25 04:31:28.222900 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 04:31:28.222926 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 04:31:28.222934 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 04:31:28.222942 | orchestrator | skipping: [testbed-node-0] 2026-03-25 04:31:28.222962 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 04:31:28.222970 | orchestrator 
| skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-25 04:31:28.222976 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 04:31:28.222983 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 04:31:28.222990 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', 
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-25 04:31:28.223002 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 04:31:28.223014 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 04:31:28.223021 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 04:31:28.223027 | orchestrator | skipping: [testbed-manager] 2026-03-25 04:31:28.223033 | orchestrator | skipping: 
[testbed-node-1] 2026-03-25 04:31:28.223045 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 04:31:41.771969 | orchestrator | skipping: [testbed-node-2] 2026-03-25 04:31:41.772067 | orchestrator | skipping: [testbed-node-3] 2026-03-25 04:31:41.772086 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-25 04:31:41.772117 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 04:31:41.772151 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 
'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-25 04:31:41.772163 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 04:31:41.772175 | orchestrator | skipping: [testbed-node-4] 2026-03-25 04:31:41.772185 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 04:31:41.772196 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 04:31:41.772209 | orchestrator | skipping: [testbed-node-5] 2026-03-25 04:31:41.772226 | orchestrator | 2026-03-25 04:31:41.772243 | orchestrator | TASK [common : Ensure /var/log/journal exists on EL10 systems] ***************** 2026-03-25 04:31:41.772262 | orchestrator | Wednesday 25 March 2026 04:31:28 +0000 (0:00:03.439) 0:00:26.382 ******* 2026-03-25 04:31:41.772277 | orchestrator | skipping: [testbed-manager] 2026-03-25 04:31:41.772293 | orchestrator | skipping: [testbed-node-0] 2026-03-25 04:31:41.772309 | orchestrator | skipping: [testbed-node-1] 2026-03-25 04:31:41.772326 | orchestrator | skipping: [testbed-node-2] 2026-03-25 04:31:41.772342 | orchestrator | skipping: [testbed-node-3] 2026-03-25 04:31:41.772359 | orchestrator | skipping: [testbed-node-4] 2026-03-25 04:31:41.772376 | orchestrator | skipping: [testbed-node-5] 2026-03-25 04:31:41.772392 | orchestrator | 2026-03-25 04:31:41.772405 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2026-03-25 04:31:41.772414 | orchestrator | Wednesday 25 March 2026 04:31:31 +0000 (0:00:02.860) 0:00:29.242 ******* 2026-03-25 04:31:41.772454 | orchestrator | skipping: [testbed-manager] 2026-03-25 04:31:41.772464 | orchestrator | skipping: [testbed-node-0] 2026-03-25 04:31:41.772473 | orchestrator | skipping: [testbed-node-1] 2026-03-25 04:31:41.772502 | orchestrator | skipping: [testbed-node-2] 2026-03-25 04:31:41.772513 | orchestrator | skipping: [testbed-node-3] 2026-03-25 04:31:41.772524 | orchestrator | skipping: [testbed-node-4] 2026-03-25 04:31:41.772534 | orchestrator | skipping: [testbed-node-5] 2026-03-25 04:31:41.772545 | orchestrator | 2026-03-25 04:31:41.772556 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2026-03-25 04:31:41.772566 | orchestrator | Wednesday 25 March 2026 
04:31:33 +0000 (0:00:02.250) 0:00:31.493 ******* 2026-03-25 04:31:41.772577 | orchestrator | skipping: [testbed-manager] 2026-03-25 04:31:41.772587 | orchestrator | skipping: [testbed-node-0] 2026-03-25 04:31:41.772606 | orchestrator | skipping: [testbed-node-1] 2026-03-25 04:31:41.772616 | orchestrator | skipping: [testbed-node-2] 2026-03-25 04:31:41.772626 | orchestrator | skipping: [testbed-node-3] 2026-03-25 04:31:41.772635 | orchestrator | skipping: [testbed-node-4] 2026-03-25 04:31:41.772645 | orchestrator | skipping: [testbed-node-5] 2026-03-25 04:31:41.772721 | orchestrator | 2026-03-25 04:31:41.772731 | orchestrator | TASK [common : Copying over kolla.target] ************************************** 2026-03-25 04:31:41.772741 | orchestrator | Wednesday 25 March 2026 04:31:35 +0000 (0:00:02.144) 0:00:33.637 ******* 2026-03-25 04:31:41.772751 | orchestrator | changed: [testbed-manager] 2026-03-25 04:31:41.772761 | orchestrator | changed: [testbed-node-0] 2026-03-25 04:31:41.772770 | orchestrator | changed: [testbed-node-1] 2026-03-25 04:31:41.772779 | orchestrator | changed: [testbed-node-2] 2026-03-25 04:31:41.772789 | orchestrator | changed: [testbed-node-3] 2026-03-25 04:31:41.772798 | orchestrator | changed: [testbed-node-4] 2026-03-25 04:31:41.772808 | orchestrator | changed: [testbed-node-5] 2026-03-25 04:31:41.772817 | orchestrator | 2026-03-25 04:31:41.772827 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2026-03-25 04:31:41.772843 | orchestrator | Wednesday 25 March 2026 04:31:38 +0000 (0:00:03.080) 0:00:36.718 ******* 2026-03-25 04:31:41.772855 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-25 04:31:41.772866 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-25 04:31:41.772876 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-25 04:31:41.772886 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-25 04:31:41.772901 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 
'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-25 04:31:41.772942 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 04:31:44.866839 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 04:31:44.866944 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-25 04:31:44.866954 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-25 04:31:44.866960 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 04:31:44.866966 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': 
'/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 04:31:44.866971 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 04:31:44.866995 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 04:31:44.867015 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 04:31:44.867022 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 
'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 04:31:44.867027 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 04:31:44.867037 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 04:31:44.867043 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 04:31:44.867048 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 04:31:44.867053 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 04:31:44.867063 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 04:31:44.867069 | orchestrator | 2026-03-25 04:31:44.867075 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2026-03-25 04:31:44.867081 | orchestrator | Wednesday 25 March 2026 04:31:43 +0000 (0:00:05.284) 0:00:42.002 ******* 2026-03-25 04:31:44.867086 | orchestrator | [WARNING]: Skipped 2026-03-25 04:31:44.867092 | orchestrator | 
'/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2026-03-25 04:31:44.867101 | orchestrator | to this access issue: 2026-03-25 04:32:06.240366 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2026-03-25 04:32:06.240452 | orchestrator | directory 2026-03-25 04:32:06.240463 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-25 04:32:06.240472 | orchestrator | 2026-03-25 04:32:06.240479 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2026-03-25 04:32:06.240488 | orchestrator | Wednesday 25 March 2026 04:31:46 +0000 (0:00:02.575) 0:00:44.578 ******* 2026-03-25 04:32:06.240494 | orchestrator | [WARNING]: Skipped 2026-03-25 04:32:06.240501 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2026-03-25 04:32:06.240507 | orchestrator | to this access issue: 2026-03-25 04:32:06.240564 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2026-03-25 04:32:06.240571 | orchestrator | directory 2026-03-25 04:32:06.240577 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-25 04:32:06.240583 | orchestrator | 2026-03-25 04:32:06.240590 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2026-03-25 04:32:06.240609 | orchestrator | Wednesday 25 March 2026 04:31:48 +0000 (0:00:02.025) 0:00:46.603 ******* 2026-03-25 04:32:06.240616 | orchestrator | [WARNING]: Skipped 2026-03-25 04:32:06.240622 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2026-03-25 04:32:06.240629 | orchestrator | to this access issue: 2026-03-25 04:32:06.240635 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2026-03-25 04:32:06.240642 | orchestrator | directory 2026-03-25 04:32:06.240648 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-25 
04:32:06.240654 | orchestrator | 2026-03-25 04:32:06.240660 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2026-03-25 04:32:06.240666 | orchestrator | Wednesday 25 March 2026 04:31:50 +0000 (0:00:01.982) 0:00:48.586 ******* 2026-03-25 04:32:06.240673 | orchestrator | [WARNING]: Skipped 2026-03-25 04:32:06.240679 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2026-03-25 04:32:06.240685 | orchestrator | to this access issue: 2026-03-25 04:32:06.240691 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2026-03-25 04:32:06.240697 | orchestrator | directory 2026-03-25 04:32:06.240704 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-25 04:32:06.240710 | orchestrator | 2026-03-25 04:32:06.240716 | orchestrator | TASK [common : Copying over fluentd.conf] ************************************** 2026-03-25 04:32:06.240722 | orchestrator | Wednesday 25 March 2026 04:31:52 +0000 (0:00:02.030) 0:00:50.617 ******* 2026-03-25 04:32:06.240729 | orchestrator | changed: [testbed-manager] 2026-03-25 04:32:06.240735 | orchestrator | changed: [testbed-node-1] 2026-03-25 04:32:06.240758 | orchestrator | changed: [testbed-node-0] 2026-03-25 04:32:06.240765 | orchestrator | changed: [testbed-node-2] 2026-03-25 04:32:06.240771 | orchestrator | changed: [testbed-node-3] 2026-03-25 04:32:06.240777 | orchestrator | changed: [testbed-node-4] 2026-03-25 04:32:06.240783 | orchestrator | changed: [testbed-node-5] 2026-03-25 04:32:06.240789 | orchestrator | 2026-03-25 04:32:06.240795 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2026-03-25 04:32:06.240802 | orchestrator | Wednesday 25 March 2026 04:31:56 +0000 (0:00:04.184) 0:00:54.801 ******* 2026-03-25 04:32:06.240808 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-25 
04:32:06.240815 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-25 04:32:06.240821 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-25 04:32:06.240827 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-25 04:32:06.240833 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-25 04:32:06.240840 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-25 04:32:06.240846 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-25 04:32:06.240853 | orchestrator | 2026-03-25 04:32:06.240863 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2026-03-25 04:32:06.240873 | orchestrator | Wednesday 25 March 2026 04:32:00 +0000 (0:00:03.626) 0:00:58.428 ******* 2026-03-25 04:32:06.240883 | orchestrator | ok: [testbed-manager] 2026-03-25 04:32:06.240893 | orchestrator | ok: [testbed-node-0] 2026-03-25 04:32:06.240903 | orchestrator | ok: [testbed-node-1] 2026-03-25 04:32:06.240912 | orchestrator | ok: [testbed-node-2] 2026-03-25 04:32:06.240922 | orchestrator | ok: [testbed-node-3] 2026-03-25 04:32:06.240930 | orchestrator | ok: [testbed-node-4] 2026-03-25 04:32:06.240940 | orchestrator | ok: [testbed-node-5] 2026-03-25 04:32:06.240949 | orchestrator | 2026-03-25 04:32:06.240958 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2026-03-25 04:32:06.240967 | orchestrator | Wednesday 25 March 2026 04:32:03 +0000 (0:00:02.875) 0:01:01.303 ******* 2026-03-25 04:32:06.240980 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-25 04:32:06.241014 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 04:32:06.241033 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-25 04:32:06.241052 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 
'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 04:32:06.241065 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 04:32:06.241079 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 04:32:06.241091 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-25 04:32:06.241103 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 04:32:06.241122 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-25 04:32:15.754531 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 04:32:15.754822 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 
'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-25 04:32:15.754858 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 04:32:15.754879 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-25 04:32:15.754902 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 04:32:15.754928 | orchestrator | 
skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 04:32:15.754949 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 04:32:15.755000 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-25 04:32:15.755023 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 
'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 04:32:15.755059 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 04:32:15.755080 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 04:32:15.755096 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 04:32:15.755108 | orchestrator | 2026-03-25 04:32:15.755120 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2026-03-25 04:32:15.755133 | orchestrator | Wednesday 25 March 2026 04:32:06 +0000 (0:00:03.090) 0:01:04.394 ******* 2026-03-25 
04:32:15.755144 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-03-25 04:32:15.755156 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-03-25 04:32:15.755166 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-03-25 04:32:15.755177 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-03-25 04:32:15.755187 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-03-25 04:32:15.755198 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-03-25 04:32:15.755208 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-03-25 04:32:15.755219 | orchestrator |
2026-03-25 04:32:15.755240 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] **********************
2026-03-25 04:32:15.755251 | orchestrator | Wednesday 25 March 2026 04:32:09 +0000 (0:00:03.181) 0:01:07.576 *******
2026-03-25 04:32:15.755262 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-03-25 04:32:15.755272 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-03-25 04:32:15.755283 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-03-25 04:32:15.755294 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-03-25 04:32:15.755305 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-03-25 04:32:15.755316 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-03-25 04:32:15.755326 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-03-25 04:32:15.755345 | orchestrator |
2026-03-25 04:32:15.755355 | orchestrator | TASK [service-check-containers : common | Check containers] ********************
2026-03-25 04:32:15.755366 | orchestrator | Wednesday 25 March 2026 04:32:12 +0000 (0:00:03.458) 0:01:11.034 *******
2026-03-25 04:32:15.755387 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-25 04:32:17.514102 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-25 04:32:17.514241 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-25 04:32:17.514268 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-25 04:32:17.514289 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-25 04:32:17.514301 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-25 04:32:17.514313 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-25 04:32:17.514348 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 04:32:17.514389 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 04:32:17.514403 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 04:32:17.514414 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 04:32:17.514425 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 04:32:17.514437 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 04:32:17.514453 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 04:32:17.514473 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 04:32:17.514499 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 04:32:20.614114 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 04:32:20.614200 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 04:32:20.614212 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 04:32:20.614221 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 04:32:20.614230 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 04:32:20.614238 | orchestrator |
2026-03-25 04:32:20.614248 | orchestrator | TASK [service-check-containers : common | Notify handlers to restart containers] ***
2026-03-25 04:32:20.614256 | orchestrator | Wednesday 25 March 2026 04:32:17 +0000 (0:00:04.643) 0:01:15.678 *******
2026-03-25 04:32:20.614297 | orchestrator | changed: [testbed-manager] => {
2026-03-25 04:32:20.614307 | orchestrator |  "msg": "Notifying handlers"
2026-03-25 04:32:20.614315 | orchestrator | }
2026-03-25 04:32:20.614323 | orchestrator | changed: [testbed-node-0] => {
2026-03-25 04:32:20.614331 | orchestrator |  "msg": "Notifying handlers"
2026-03-25 04:32:20.614339 | orchestrator | }
2026-03-25 04:32:20.614347 | orchestrator | changed: [testbed-node-1] => {
2026-03-25 04:32:20.614354 | orchestrator |  "msg": "Notifying handlers"
2026-03-25 04:32:20.614362 | orchestrator | }
2026-03-25 04:32:20.614370 | orchestrator | changed: [testbed-node-2] => {
2026-03-25 04:32:20.614377 | orchestrator |  "msg": "Notifying handlers"
2026-03-25 04:32:20.614385 | orchestrator | }
2026-03-25 04:32:20.614393 | orchestrator | changed: [testbed-node-3] => {
2026-03-25 04:32:20.614400 | orchestrator |  "msg": "Notifying handlers"
2026-03-25 04:32:20.614408 | orchestrator | }
2026-03-25 04:32:20.614416 | orchestrator | changed: [testbed-node-4] => {
2026-03-25 04:32:20.614423 | orchestrator |  "msg": "Notifying handlers"
2026-03-25 04:32:20.614431 | orchestrator | }
2026-03-25 04:32:20.614439 | orchestrator | changed: [testbed-node-5] => {
2026-03-25 04:32:20.614447 | orchestrator |  "msg": "Notifying handlers"
2026-03-25 04:32:20.614454 | orchestrator | }
2026-03-25 04:32:20.614462 | orchestrator |
2026-03-25 04:32:20.614471 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-03-25 04:32:20.614479 | orchestrator | Wednesday 25 March 2026 04:32:19 +0000 (0:00:02.334) 0:01:18.012 *******
2026-03-25 04:32:20.614489 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-25 04:32:20.614525 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 04:32:20.614535 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 04:32:20.614543 | orchestrator | skipping: [testbed-manager]
2026-03-25 04:32:20.614551 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-25 04:32:20.614560 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 04:32:20.614624 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 04:32:20.614635 | orchestrator | skipping: [testbed-node-0]
2026-03-25 04:32:20.614645 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-25 04:32:20.614655 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 04:32:20.614665 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 04:32:20.614674 | orchestrator | skipping: [testbed-node-1]
2026-03-25 04:32:20.614690 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-25 04:32:30.248469 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 04:32:30.248551 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 04:32:30.248575 | orchestrator | skipping: [testbed-node-2]
2026-03-25 04:32:30.248588 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-25 04:32:30.248593 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 04:32:30.248719 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 04:32:30.248725 | orchestrator | skipping: [testbed-node-3]
2026-03-25 04:32:30.248730 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-25 04:32:30.248737 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 04:32:30.248754 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 04:32:30.248759 | orchestrator | skipping: [testbed-node-4]
2026-03-25 04:32:30.248763 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-25 04:32:30.248772 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 04:32:30.248777 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 04:32:30.248781 | orchestrator | skipping: [testbed-node-5]
2026-03-25 04:32:30.248785 | orchestrator |
2026-03-25 04:32:30.248790 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-03-25 04:32:30.248796 | orchestrator | Wednesday 25 March 2026 04:32:23 +0000 (0:00:03.222) 0:01:21.235 *******
2026-03-25 04:32:30.248800 | orchestrator |
2026-03-25 04:32:30.248804 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-03-25 04:32:30.248808 | orchestrator | Wednesday 25 March 2026 04:32:23 +0000 (0:00:00.524) 0:01:21.760 *******
2026-03-25 04:32:30.248812 | orchestrator |
2026-03-25 04:32:30.248816 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-03-25 04:32:30.248820 | orchestrator | Wednesday 25 March 2026 04:32:24 +0000 (0:00:00.578) 0:01:22.339 *******
2026-03-25 04:32:30.248825 | orchestrator |
2026-03-25 04:32:30.248829 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-03-25 04:32:30.248833 | orchestrator | Wednesday 25 March 2026 04:32:24 +0000 (0:00:00.472) 0:01:22.811 *******
2026-03-25 04:32:30.248837 | orchestrator |
2026-03-25 04:32:30.248841 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-03-25 04:32:30.248845 | orchestrator | Wednesday 25 March 2026 04:32:25 +0000 (0:00:00.468) 0:01:23.279 *******
2026-03-25 04:32:30.248849 | orchestrator |
2026-03-25 04:32:30.248853 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-03-25 04:32:30.248857 | orchestrator | Wednesday 25 March 2026 04:32:25 +0000 (0:00:00.794) 0:01:24.074 *******
2026-03-25 04:32:30.248861 | orchestrator |
2026-03-25 04:32:30.248865 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-03-25 04:32:30.248869 | orchestrator | Wednesday 25 March 2026 04:32:26 +0000 (0:00:00.460) 0:01:24.534 *******
2026-03-25 04:32:30.248873 | orchestrator |
2026-03-25 04:32:30.248877 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] ***************************
2026-03-25 04:32:30.248883 | orchestrator | Wednesday 25 March 2026 04:32:27 +0000 (0:00:00.871) 0:01:25.406 *******
2026-03-25 04:32:30.248905 | orchestrator | fatal: [testbed-manager]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_fx0vj3r6/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_fx0vj3r6/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 361, in recreate_or_restart_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_fx0vj3r6/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd: Internal Server Error (\"unknown: artifact kolla/release/fluentd:5.0.8.20251208 not found\")\\n'"}
2026-03-25 04:32:33.929895 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_thjb6ffo/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_thjb6ffo/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 361, in recreate_or_restart_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_thjb6ffo/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd: Internal Server Error (\"unknown: artifact kolla/release/fluentd:5.0.8.20251208 not found\")\\n'"}
2026-03-25 04:32:33.930112 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_8mu2rw8y/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_8mu2rw8y/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 361, in recreate_or_restart_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_8mu2rw8y/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd: Internal Server Error (\"unknown: artifact kolla/release/fluentd:5.0.8.20251208 not found\")\\n'"}
2026-03-25 04:32:33.930140 | orchestrator | fatal: [testbed-node-3]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_fqzpu9hh/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_fqzpu9hh/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 361, in recreate_or_restart_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_fqzpu9hh/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd: Internal Server Error (\"unknown: artifact kolla/release/fluentd:5.0.8.20251208 not found\")\\n'"}
2026-03-25 04:32:33.930169 | orchestrator | fatal: [testbed-node-4]: FAILED!
=> {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_2z_sqist/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_2z_sqist/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 361, in recreate_or_restart_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_2z_sqist/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for 
http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd: Internal Server Error (\"unknown: artifact kolla/release/fluentd:5.0.8.20251208 not found\")\\n'"} 2026-03-25 04:32:34.512458 | orchestrator | 2026-03-25 04:32:34 | INFO  | Task 92609589-7f1e-4c8b-ab5f-1f318536f1b8 (common) was prepared for execution. 2026-03-25 04:32:34.513970 | orchestrator | 2026-03-25 04:32:34 | INFO  | It takes a moment until task 92609589-7f1e-4c8b-ab5f-1f318536f1b8 (common) has been started and output is visible here. 2026-03-25 04:32:44.863497 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_pa4pzoma/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_pa4pzoma/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 361, in recreate_or_restart_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_pa4pzoma/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n 
^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd: Internal Server Error (\"unknown: artifact kolla/release/fluentd:5.0.8.20251208 not found\")\\n'"} 2026-03-25 04:32:44.863622 | orchestrator | fatal: [testbed-node-5]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_kmjtmldm/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_kmjtmldm/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 361, in recreate_or_restart_container\\n self.pull_image()\\n File 
\"/tmp/ansible_kolla_container_payload_kmjtmldm/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd: Internal Server Error (\"unknown: artifact kolla/release/fluentd:5.0.8.20251208 not found\")\\n'"} 2026-03-25 04:32:44.863631 | orchestrator | 2026-03-25 04:32:44.863638 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-25 04:32:44.863696 | orchestrator | testbed-manager : ok=18  changed=5  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0 2026-03-25 04:32:44.863713 | orchestrator | testbed-node-0 : ok=14  changed=5  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0 2026-03-25 04:32:44.863721 | orchestrator | testbed-node-1 : ok=14  changed=5  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0 2026-03-25 04:32:44.863726 | orchestrator | testbed-node-2 : ok=14  changed=5  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0 2026-03-25 04:32:44.863730 | orchestrator | testbed-node-3 : ok=14  changed=5  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0 2026-03-25 04:32:44.863735 | orchestrator | testbed-node-4 : ok=14  changed=5  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0 2026-03-25 
04:32:44.863739 | orchestrator | testbed-node-5 : ok=14  changed=5  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0 2026-03-25 04:32:44.863744 | orchestrator | 2026-03-25 04:32:44.863749 | orchestrator | 2026-03-25 04:32:44.863753 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-25 04:32:44.863758 | orchestrator | Wednesday 25 March 2026 04:32:33 +0000 (0:00:06.694) 0:01:32.101 ******* 2026-03-25 04:32:44.863763 | orchestrator | =============================================================================== 2026-03-25 04:32:44.863767 | orchestrator | common : Restart fluentd container -------------------------------------- 6.69s 2026-03-25 04:32:44.863772 | orchestrator | common : Copying over config.json files for services -------------------- 5.28s 2026-03-25 04:32:44.863777 | orchestrator | common : include_tasks -------------------------------------------------- 4.82s 2026-03-25 04:32:44.863781 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 4.73s 2026-03-25 04:32:44.863786 | orchestrator | service-check-containers : common | Check containers -------------------- 4.64s 2026-03-25 04:32:44.863790 | orchestrator | common : Ensuring config directories exist ------------------------------ 4.48s 2026-03-25 04:32:44.863795 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 4.18s 2026-03-25 04:32:44.863800 | orchestrator | common : Flush handlers ------------------------------------------------- 4.17s 2026-03-25 04:32:44.863804 | orchestrator | common : Copying over cron logrotate config file ------------------------ 3.63s 2026-03-25 04:32:44.863809 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 3.46s 2026-03-25 04:32:44.863813 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 3.44s 2026-03-25 04:32:44.863818 | orchestrator | common : 
include_tasks -------------------------------------------------- 3.39s 2026-03-25 04:32:44.863822 | orchestrator | service-check-containers : Include tasks -------------------------------- 3.22s 2026-03-25 04:32:44.863827 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 3.20s 2026-03-25 04:32:44.863832 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 3.18s 2026-03-25 04:32:44.863837 | orchestrator | common : Ensuring config directories have correct owner and permission --- 3.09s 2026-03-25 04:32:44.863842 | orchestrator | common : Copying over kolla.target -------------------------------------- 3.08s 2026-03-25 04:32:44.863846 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 2.88s 2026-03-25 04:32:44.863851 | orchestrator | common : Ensure /var/log/journal exists on EL10 systems ----------------- 2.86s 2026-03-25 04:32:44.863855 | orchestrator | common : Find custom fluentd input config files ------------------------- 2.58s 2026-03-25 04:32:44.863860 | orchestrator | 2026-03-25 04:32:44.863864 | orchestrator | PLAY [Apply role common] ******************************************************* 2026-03-25 04:32:44.863869 | orchestrator | 2026-03-25 04:32:44.863873 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-03-25 04:32:44.863882 | orchestrator | Wednesday 25 March 2026 04:32:40 +0000 (0:00:01.860) 0:00:01.860 ******* 2026-03-25 04:32:44.863887 | orchestrator | included: /ansible/roles/common/tasks/upgrade.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-25 04:32:44.863892 | orchestrator | 2026-03-25 04:32:44.863901 | orchestrator | TASK [common : Ensuring config directories exist] ****************************** 2026-03-25 04:32:54.602246 | orchestrator | Wednesday 25 March 2026 04:32:44 +0000 (0:00:03.878) 
0:00:05.738 ******* 2026-03-25 04:32:54.602348 | orchestrator | ok: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-25 04:32:54.602361 | orchestrator | ok: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-25 04:32:54.602371 | orchestrator | ok: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-25 04:32:54.602380 | orchestrator | ok: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-25 04:32:54.602389 | orchestrator | ok: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-25 04:32:54.602399 | orchestrator | ok: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-25 04:32:54.602408 | orchestrator | ok: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-25 04:32:54.602417 | orchestrator | ok: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-25 04:32:54.602425 | orchestrator | ok: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-25 04:32:54.602435 | orchestrator | ok: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-25 04:32:54.602459 | orchestrator | ok: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-25 04:32:54.602469 | orchestrator | ok: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-25 04:32:54.602477 | orchestrator | ok: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-25 04:32:54.602486 | orchestrator | ok: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-25 04:32:54.602495 | orchestrator | ok: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-25 04:32:54.602504 | orchestrator | ok: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-25 04:32:54.602512 | orchestrator | ok: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 
'kolla-toolbox']) 2026-03-25 04:32:54.602521 | orchestrator | ok: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-25 04:32:54.602529 | orchestrator | ok: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-25 04:32:54.602538 | orchestrator | ok: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-25 04:32:54.602547 | orchestrator | ok: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-25 04:32:54.602556 | orchestrator | 2026-03-25 04:32:54.602565 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-03-25 04:32:54.602574 | orchestrator | Wednesday 25 March 2026 04:32:48 +0000 (0:00:03.825) 0:00:09.563 ******* 2026-03-25 04:32:54.602584 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-25 04:32:54.602594 | orchestrator | 2026-03-25 04:32:54.602603 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2026-03-25 04:32:54.602612 | orchestrator | Wednesday 25 March 2026 04:32:51 +0000 (0:00:03.124) 0:00:12.688 ******* 2026-03-25 04:32:54.602623 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-25 04:32:54.602658 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 
'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-25 04:32:54.602668 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-25 04:32:54.602744 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-25 04:32:54.602761 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-25 04:32:54.602770 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-25 04:32:54.602780 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-25 04:32:54.602791 | orchestrator | ok: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 04:32:54.602810 | 
orchestrator | ok: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 04:32:54.602821 | orchestrator | ok: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 04:32:54.602840 | orchestrator | ok: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 04:32:57.378683 | 
orchestrator | ok: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 04:32:57.378842 | orchestrator | ok: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 04:32:57.378862 | orchestrator | ok: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 04:32:57.378877 | 
orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 04:32:57.378934 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 04:32:57.378957 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 04:32:57.378977 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 04:32:57.378989 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 04:32:57.379025 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 04:32:57.379055 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 04:32:57.379077 | orchestrator | 2026-03-25 04:32:57.379100 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2026-03-25 04:32:57.379120 | orchestrator | Wednesday 25 March 2026 04:32:56 +0000 (0:00:04.594) 0:00:17.283 ******* 2026-03-25 04:32:57.379148 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-25 04:32:57.379178 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-25 04:32:57.379213 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 04:32:57.379234 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 04:32:57.379252 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 04:32:57.379286 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-25 04:32:59.720905 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 04:32:59.721011 | orchestrator | skipping: [testbed-node-0] 2026-03-25 04:32:59.721029 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': 
{'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 04:32:59.721075 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 04:32:59.721096 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-25 04:32:59.721114 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 
 2026-03-25 04:32:59.721132 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 04:32:59.721149 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 04:32:59.721166 | orchestrator | skipping: [testbed-manager] 2026-03-25 04:32:59.721184 | orchestrator | skipping: [testbed-node-1] 2026-03-25 04:32:59.721224 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 04:32:59.721244 | 
orchestrator | skipping: [testbed-node-3] 2026-03-25 04:32:59.721305 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 04:32:59.721327 | orchestrator | skipping: [testbed-node-2] 2026-03-25 04:32:59.721338 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-25 04:32:59.721349 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-25 04:32:59.721359 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 04:32:59.721370 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 04:32:59.721380 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 04:32:59.721390 | orchestrator | skipping: [testbed-node-4] 2026-03-25 04:32:59.721410 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 04:33:03.239950 | orchestrator | skipping: [testbed-node-5] 2026-03-25 04:33:03.240068 | orchestrator | 2026-03-25 04:33:03.240089 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2026-03-25 04:33:03.240124 | orchestrator | Wednesday 25 March 2026 04:32:59 +0000 (0:00:03.296) 0:00:20.579 ******* 2026-03-25 04:33:03.240145 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-25 04:33:03.240201 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-25 04:33:03.240220 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': 
'1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 04:33:03.240236 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-25 04:33:03.240252 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 04:33:03.240268 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 04:33:03.240306 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 04:33:03.240324 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 04:33:03.240350 | orchestrator | skipping: [testbed-manager] 2026-03-25 04:33:03.240366 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-25 04:33:03.240382 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': 
{'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 04:33:03.240398 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-25 04:33:03.240414 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 04:33:03.240429 | orchestrator | skipping: [testbed-node-0] 2026-03-25 04:33:03.240444 | orchestrator | skipping: [testbed-node-1] 2026-03-25 04:33:03.240460 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 04:33:03.240477 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 04:33:03.240496 | orchestrator | skipping: [testbed-node-2] 2026-03-25 04:33:03.240531 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-25 04:33:15.634005 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 04:33:15.634107 | orchestrator | skipping: [testbed-node-3] 2026-03-25 04:33:15.634115 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-25 04:33:15.634138 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 04:33:15.634143 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 04:33:15.634148 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 04:33:15.634152 | orchestrator | skipping: [testbed-node-5] 2026-03-25 04:33:15.634156 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 04:33:15.634160 | orchestrator | skipping: [testbed-node-4] 2026-03-25 04:33:15.634164 | orchestrator | 2026-03-25 04:33:15.634168 | orchestrator | TASK [common : Ensure /var/log/journal exists on EL10 systems] ***************** 2026-03-25 04:33:15.634188 | orchestrator | Wednesday 25 March 2026 04:33:03 +0000 (0:00:03.529) 0:00:24.108 ******* 2026-03-25 04:33:15.634192 | orchestrator | skipping: [testbed-manager] 2026-03-25 04:33:15.634196 | orchestrator | skipping: [testbed-node-0] 2026-03-25 04:33:15.634200 | orchestrator | skipping: [testbed-node-1] 2026-03-25 04:33:15.634204 | orchestrator | skipping: [testbed-node-2] 2026-03-25 04:33:15.634207 | orchestrator | skipping: [testbed-node-3] 2026-03-25 04:33:15.634211 | orchestrator | skipping: [testbed-node-4] 2026-03-25 04:33:15.634215 | orchestrator | skipping: [testbed-node-5] 2026-03-25 04:33:15.634218 | orchestrator | 
2026-03-25 04:33:15.634222 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2026-03-25 04:33:15.634236 | orchestrator | Wednesday 25 March 2026 04:33:05 +0000 (0:00:02.376) 0:00:26.485 ******* 2026-03-25 04:33:15.634240 | orchestrator | skipping: [testbed-manager] 2026-03-25 04:33:15.634244 | orchestrator | skipping: [testbed-node-0] 2026-03-25 04:33:15.634247 | orchestrator | skipping: [testbed-node-1] 2026-03-25 04:33:15.634251 | orchestrator | skipping: [testbed-node-2] 2026-03-25 04:33:15.634255 | orchestrator | skipping: [testbed-node-3] 2026-03-25 04:33:15.634258 | orchestrator | skipping: [testbed-node-4] 2026-03-25 04:33:15.634272 | orchestrator | skipping: [testbed-node-5] 2026-03-25 04:33:15.634276 | orchestrator | 2026-03-25 04:33:15.634280 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2026-03-25 04:33:15.634284 | orchestrator | Wednesday 25 March 2026 04:33:07 +0000 (0:00:02.148) 0:00:28.633 ******* 2026-03-25 04:33:15.634288 | orchestrator | skipping: [testbed-manager] 2026-03-25 04:33:15.634291 | orchestrator | skipping: [testbed-node-0] 2026-03-25 04:33:15.634295 | orchestrator | skipping: [testbed-node-1] 2026-03-25 04:33:15.634299 | orchestrator | skipping: [testbed-node-2] 2026-03-25 04:33:15.634302 | orchestrator | skipping: [testbed-node-3] 2026-03-25 04:33:15.634306 | orchestrator | skipping: [testbed-node-4] 2026-03-25 04:33:15.634310 | orchestrator | skipping: [testbed-node-5] 2026-03-25 04:33:15.634313 | orchestrator | 2026-03-25 04:33:15.634317 | orchestrator | TASK [common : Copying over kolla.target] ************************************** 2026-03-25 04:33:15.634321 | orchestrator | Wednesday 25 March 2026 04:33:09 +0000 (0:00:02.047) 0:00:30.680 ******* 2026-03-25 04:33:15.634324 | orchestrator | ok: [testbed-manager] 2026-03-25 04:33:15.634329 | orchestrator | ok: [testbed-node-0] 2026-03-25 04:33:15.634333 | orchestrator | ok: 
[testbed-node-1] 2026-03-25 04:33:15.634337 | orchestrator | ok: [testbed-node-2] 2026-03-25 04:33:15.634340 | orchestrator | ok: [testbed-node-3] 2026-03-25 04:33:15.634344 | orchestrator | ok: [testbed-node-4] 2026-03-25 04:33:15.634348 | orchestrator | ok: [testbed-node-5] 2026-03-25 04:33:15.634351 | orchestrator | 2026-03-25 04:33:15.634355 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2026-03-25 04:33:15.634359 | orchestrator | Wednesday 25 March 2026 04:33:12 +0000 (0:00:03.112) 0:00:33.793 ******* 2026-03-25 04:33:15.634363 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-25 04:33:15.634368 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-25 04:33:15.634376 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-25 04:33:15.634380 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-25 04:33:15.634384 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-25 04:33:15.634394 | orchestrator | ok: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 04:33:18.566351 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-25 04:33:18.566425 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-25 04:33:18.566431 | orchestrator | ok: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 04:33:18.566453 | orchestrator | ok: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': 
{'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 04:33:18.566458 | orchestrator | ok: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 04:33:18.566462 | orchestrator | ok: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 04:33:18.566477 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 
'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 04:33:18.566494 | orchestrator | ok: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 04:33:18.566499 | orchestrator | ok: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 04:33:18.566503 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 04:33:18.566510 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 04:33:18.566515 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 04:33:18.566519 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 04:33:18.566522 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 04:33:18.566530 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 04:33:18.566535 | orchestrator | 2026-03-25 04:33:18.566540 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2026-03-25 04:33:18.566546 | orchestrator | Wednesday 25 March 2026 04:33:17 +0000 (0:00:04.679) 0:00:38.472 ******* 2026-03-25 04:33:18.566550 | orchestrator | [WARNING]: Skipped 2026-03-25 04:33:18.566557 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2026-03-25 04:33:38.629271 | orchestrator | to this access issue: 2026-03-25 04:33:38.629377 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2026-03-25 04:33:38.629392 | orchestrator | directory 2026-03-25 04:33:38.629405 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-25 04:33:38.629417 | orchestrator | 2026-03-25 04:33:38.629429 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2026-03-25 04:33:38.629441 | orchestrator | Wednesday 25 March 2026 04:33:20 +0000 (0:00:02.479) 0:00:40.952 ******* 2026-03-25 04:33:38.629451 | orchestrator | [WARNING]: Skipped 2026-03-25 04:33:38.629462 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2026-03-25 04:33:38.629473 | orchestrator | to this access issue: 2026-03-25 04:33:38.629484 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' 
is not a 2026-03-25 04:33:38.629495 | orchestrator | directory 2026-03-25 04:33:38.629505 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-25 04:33:38.629516 | orchestrator | 2026-03-25 04:33:38.629527 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2026-03-25 04:33:38.629560 | orchestrator | Wednesday 25 March 2026 04:33:22 +0000 (0:00:01.988) 0:00:42.941 ******* 2026-03-25 04:33:38.629572 | orchestrator | [WARNING]: Skipped 2026-03-25 04:33:38.629583 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2026-03-25 04:33:38.629593 | orchestrator | to this access issue: 2026-03-25 04:33:38.629604 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2026-03-25 04:33:38.629615 | orchestrator | directory 2026-03-25 04:33:38.629626 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-25 04:33:38.629636 | orchestrator | 2026-03-25 04:33:38.629647 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2026-03-25 04:33:38.629658 | orchestrator | Wednesday 25 March 2026 04:33:23 +0000 (0:00:01.919) 0:00:44.861 ******* 2026-03-25 04:33:38.629668 | orchestrator | [WARNING]: Skipped 2026-03-25 04:33:38.629679 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2026-03-25 04:33:38.629689 | orchestrator | to this access issue: 2026-03-25 04:33:38.629700 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2026-03-25 04:33:38.629710 | orchestrator | directory 2026-03-25 04:33:38.629721 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-25 04:33:38.629732 | orchestrator | 2026-03-25 04:33:38.629743 | orchestrator | TASK [common : Copying over fluentd.conf] ************************************** 2026-03-25 04:33:38.629753 | orchestrator | Wednesday 25 March 2026 04:33:25 +0000 (0:00:01.897) 
0:00:46.758 ******* 2026-03-25 04:33:38.629764 | orchestrator | ok: [testbed-manager] 2026-03-25 04:33:38.629775 | orchestrator | ok: [testbed-node-0] 2026-03-25 04:33:38.629785 | orchestrator | ok: [testbed-node-1] 2026-03-25 04:33:38.629796 | orchestrator | ok: [testbed-node-3] 2026-03-25 04:33:38.629809 | orchestrator | ok: [testbed-node-2] 2026-03-25 04:33:38.629864 | orchestrator | ok: [testbed-node-4] 2026-03-25 04:33:38.629881 | orchestrator | ok: [testbed-node-5] 2026-03-25 04:33:38.629893 | orchestrator | 2026-03-25 04:33:38.629906 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2026-03-25 04:33:38.629918 | orchestrator | Wednesday 25 March 2026 04:33:29 +0000 (0:00:03.854) 0:00:50.613 ******* 2026-03-25 04:33:38.629930 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-25 04:33:38.629944 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-25 04:33:38.629956 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-25 04:33:38.629968 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-25 04:33:38.629980 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-25 04:33:38.629993 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-25 04:33:38.630005 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-25 04:33:38.630066 | orchestrator | 2026-03-25 04:33:38.630088 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2026-03-25 04:33:38.630107 | orchestrator | Wednesday 25 March 2026 04:33:32 +0000 (0:00:03.204) 0:00:53.817 
******* 2026-03-25 04:33:38.630124 | orchestrator | ok: [testbed-manager] 2026-03-25 04:33:38.630136 | orchestrator | ok: [testbed-node-0] 2026-03-25 04:33:38.630149 | orchestrator | ok: [testbed-node-1] 2026-03-25 04:33:38.630161 | orchestrator | ok: [testbed-node-2] 2026-03-25 04:33:38.630172 | orchestrator | ok: [testbed-node-3] 2026-03-25 04:33:38.630183 | orchestrator | ok: [testbed-node-4] 2026-03-25 04:33:38.630194 | orchestrator | ok: [testbed-node-5] 2026-03-25 04:33:38.630205 | orchestrator | 2026-03-25 04:33:38.630216 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2026-03-25 04:33:38.630237 | orchestrator | Wednesday 25 March 2026 04:33:35 +0000 (0:00:02.784) 0:00:56.602 ******* 2026-03-25 04:33:38.630263 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-25 04:33:38.630297 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  
2026-03-25 04:33:38.630310 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-25 04:33:38.630322 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 04:33:38.630334 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 04:33:38.630348 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-25 04:33:38.630359 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 04:33:38.630384 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 04:33:38.630404 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 
2026-03-25 04:33:47.381722 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 04:33:47.381904 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-25 04:33:47.381937 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 04:33:47.381960 | orchestrator | ok: [testbed-node-4] => (item={'key': 
'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-25 04:33:47.381982 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 04:33:47.382068 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 04:33:47.382104 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 04:33:47.382152 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-25 04:33:47.382177 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 04:33:47.382193 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 04:33:47.382206 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 04:33:47.382219 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 04:33:47.382234 | orchestrator | 2026-03-25 04:33:47.382249 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2026-03-25 04:33:47.382262 | orchestrator | Wednesday 25 March 2026 04:33:38 +0000 (0:00:02.896) 0:00:59.498 ******* 2026-03-25 04:33:47.382275 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-25 04:33:47.382298 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-25 04:33:47.382309 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-25 04:33:47.382319 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-25 04:33:47.382330 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-25 04:33:47.382340 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-25 04:33:47.382350 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-25 04:33:47.382361 | orchestrator | 
2026-03-25 04:33:47.382381 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2026-03-25 04:33:47.382398 | orchestrator | Wednesday 25 March 2026 04:33:41 +0000 (0:00:03.144) 0:01:02.643 ******* 2026-03-25 04:33:47.382417 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-25 04:33:47.382435 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-25 04:33:47.382452 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-25 04:33:47.382463 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-25 04:33:47.382474 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-25 04:33:47.382484 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-25 04:33:47.382497 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-25 04:33:47.382515 | orchestrator | 2026-03-25 04:33:47.382533 | orchestrator | TASK [service-check-containers : common | Check containers] ******************** 2026-03-25 04:33:47.382551 | orchestrator | Wednesday 25 March 2026 04:33:44 +0000 (0:00:03.193) 0:01:05.837 ******* 2026-03-25 04:33:47.382582 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-25 04:33:49.559234 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-25 04:33:49.559336 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-25 04:33:49.559352 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-25 04:33:49.559392 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-25 04:33:49.559405 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-25 04:33:49.559430 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 04:33:49.559442 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-25 04:33:49.559473 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 04:33:49.559486 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 04:33:49.559497 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 04:33:49.559516 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 04:33:49.559527 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 04:33:49.559543 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 04:33:49.559556 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 04:33:49.559577 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 04:33:52.650162 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 04:33:52.650267 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 04:33:52.650305 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 04:33:52.650319 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 04:33:52.650330 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 04:33:52.650343 | orchestrator | 2026-03-25 04:33:52.650355 | orchestrator | TASK [service-check-containers : common | Notify handlers to restart containers] *** 2026-03-25 04:33:52.650367 | orchestrator | Wednesday 25 March 2026 04:33:49 +0000 (0:00:04.590) 0:01:10.428 ******* 2026-03-25 04:33:52.650379 | orchestrator | changed: [testbed-manager] => { 2026-03-25 04:33:52.650392 | orchestrator |  "msg": "Notifying handlers" 2026-03-25 04:33:52.650403 | orchestrator | } 
2026-03-25 04:33:52.650414 | orchestrator | changed: [testbed-node-0] => { 2026-03-25 04:33:52.650425 | orchestrator |  "msg": "Notifying handlers" 2026-03-25 04:33:52.650436 | orchestrator | } 2026-03-25 04:33:52.650447 | orchestrator | changed: [testbed-node-1] => { 2026-03-25 04:33:52.650458 | orchestrator |  "msg": "Notifying handlers" 2026-03-25 04:33:52.650469 | orchestrator | } 2026-03-25 04:33:52.650479 | orchestrator | changed: [testbed-node-2] => { 2026-03-25 04:33:52.650490 | orchestrator |  "msg": "Notifying handlers" 2026-03-25 04:33:52.650501 | orchestrator | } 2026-03-25 04:33:52.650512 | orchestrator | changed: [testbed-node-3] => { 2026-03-25 04:33:52.650523 | orchestrator |  "msg": "Notifying handlers" 2026-03-25 04:33:52.650534 | orchestrator | } 2026-03-25 04:33:52.650544 | orchestrator | changed: [testbed-node-4] => { 2026-03-25 04:33:52.650555 | orchestrator |  "msg": "Notifying handlers" 2026-03-25 04:33:52.650566 | orchestrator | } 2026-03-25 04:33:52.650577 | orchestrator | changed: [testbed-node-5] => { 2026-03-25 04:33:52.650587 | orchestrator |  "msg": "Notifying handlers" 2026-03-25 04:33:52.650598 | orchestrator | } 2026-03-25 04:33:52.650609 | orchestrator | 2026-03-25 04:33:52.650620 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-03-25 04:33:52.650631 | orchestrator | Wednesday 25 March 2026 04:33:51 +0000 (0:00:02.381) 0:01:12.810 ******* 2026-03-25 04:33:52.650644 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', 
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-25 04:33:52.650701 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 04:33:52.650716 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 04:33:52.650730 | orchestrator | skipping: [testbed-manager] 2026-03-25 04:33:52.650743 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-25 04:33:52.650767 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 
'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 04:33:52.650782 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 04:33:52.650795 | orchestrator | skipping: [testbed-node-0] 2026-03-25 04:33:52.650813 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-25 04:33:52.650827 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': 
'/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 04:33:52.650847 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 04:33:52.650894 | orchestrator | skipping: [testbed-node-1] 2026-03-25 04:34:38.402207 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-25 04:34:38.402330 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 04:34:38.402346 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 04:34:38.402360 | orchestrator | skipping: [testbed-node-2] 2026-03-25 04:34:38.402372 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-25 04:34:38.402403 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 04:34:38.402415 | orchestrator | 
skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 04:34:38.402445 | orchestrator | skipping: [testbed-node-3] 2026-03-25 04:34:38.402457 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-25 04:34:38.402486 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 04:34:38.402497 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 
'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 04:34:38.402507 | orchestrator | skipping: [testbed-node-4] 2026-03-25 04:34:38.402517 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-25 04:34:38.402527 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 04:34:38.402537 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 04:34:38.402547 | orchestrator | skipping: [testbed-node-5]
2026-03-25 04:34:38.402557 | orchestrator |
2026-03-25 04:34:38.402572 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-03-25 04:34:38.402584 | orchestrator | Wednesday 25 March 2026 04:33:55 +0000 (0:00:03.102) 0:01:15.912 *******
2026-03-25 04:34:38.402593 | orchestrator |
2026-03-25 04:34:38.402603 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-03-25 04:34:38.402620 | orchestrator | Wednesday 25 March 2026 04:33:55 +0000 (0:00:00.501) 0:01:16.413 *******
2026-03-25 04:34:38.402629 | orchestrator |
2026-03-25 04:34:38.402639 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-03-25 04:34:38.402648 | orchestrator | Wednesday 25 March 2026 04:33:55 +0000 (0:00:00.455) 0:01:16.869 *******
2026-03-25 04:34:38.402658 | orchestrator |
2026-03-25 04:34:38.402667 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-03-25 04:34:38.402677 | orchestrator | Wednesday 25 March 2026 04:33:56 +0000 (0:00:00.547) 0:01:17.416 *******
2026-03-25 04:34:38.402686 | orchestrator |
2026-03-25 04:34:38.402695 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-03-25 04:34:38.402705 | orchestrator | Wednesday 25 March 2026 04:33:56 +0000 (0:00:00.453) 0:01:17.869 *******
2026-03-25 04:34:38.402715 | orchestrator |
2026-03-25 04:34:38.402724 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-03-25 04:34:38.402733 | orchestrator | Wednesday 25 March 2026 04:33:57 +0000 (0:00:00.738) 0:01:18.608 *******
2026-03-25 04:34:38.402743 | orchestrator |
2026-03-25 04:34:38.402752 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-03-25 04:34:38.402761 | orchestrator | Wednesday 25 March 2026 04:33:58 +0000 (0:00:00.449) 0:01:19.057 *******
2026-03-25 04:34:38.402771 | orchestrator |
2026-03-25 04:34:38.402780 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] ***************************
2026-03-25 04:34:38.402789 | orchestrator | Wednesday 25 March 2026 04:33:59 +0000 (0:00:00.885) 0:01:19.943 *******
2026-03-25 04:34:38.402799 | orchestrator | changed: [testbed-manager]
2026-03-25 04:34:38.402808 | orchestrator | changed: [testbed-node-0]
2026-03-25 04:34:38.402818 | orchestrator | changed: [testbed-node-2]
2026-03-25 04:34:38.402827 | orchestrator | changed: [testbed-node-5]
2026-03-25 04:34:38.402837 | orchestrator | changed: [testbed-node-4]
2026-03-25 04:34:38.402846 | orchestrator | changed: [testbed-node-3]
2026-03-25 04:34:38.402861 | orchestrator | changed: [testbed-node-1]
2026-03-25 04:35:33.265032 | orchestrator |
2026-03-25 04:35:33.265199 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] *********************
2026-03-25 04:35:33.265231 | orchestrator | Wednesday 25 March 2026 04:34:38 +0000 (0:00:39.322) 0:01:59.266 *******
2026-03-25 04:35:33.265251 | orchestrator | changed: [testbed-manager]
2026-03-25 04:35:33.265271 | orchestrator | changed: [testbed-node-3]
2026-03-25 04:35:33.265289 | orchestrator | changed: [testbed-node-5]
2026-03-25 04:35:33.265307 | orchestrator | changed: [testbed-node-4]
2026-03-25 04:35:33.265326 | orchestrator | changed: [testbed-node-2]
2026-03-25 04:35:33.265344 | orchestrator | changed: [testbed-node-0]
2026-03-25 04:35:33.265362 | orchestrator | changed: [testbed-node-1]
2026-03-25 04:35:33.265381 | orchestrator |
2026-03-25 04:35:33.265398 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] ****
2026-03-25 04:35:33.265416 | orchestrator | Wednesday 25 March 2026 04:35:17 +0000 (0:00:39.118) 0:02:38.385 *******
2026-03-25 04:35:33.265435 | orchestrator | ok: [testbed-manager]
2026-03-25 04:35:33.265456 | orchestrator | ok: [testbed-node-0]
2026-03-25 04:35:33.265474 | orchestrator | ok: [testbed-node-1]
2026-03-25 04:35:33.265492 | orchestrator | ok: [testbed-node-2]
2026-03-25 04:35:33.265510 | orchestrator | ok: [testbed-node-3]
2026-03-25 04:35:33.265528 | orchestrator | ok: [testbed-node-4]
2026-03-25 04:35:33.265546 | orchestrator | ok: [testbed-node-5]
2026-03-25 04:35:33.265567 | orchestrator |
2026-03-25 04:35:33.265589 | orchestrator | RUNNING HANDLER [common : Restart cron container] ******************************
2026-03-25 04:35:33.265610 | orchestrator | Wednesday 25 March 2026 04:35:20 +0000 (0:00:03.031) 0:02:41.417 *******
2026-03-25 04:35:33.265630 | orchestrator | changed: [testbed-manager]
2026-03-25 04:35:33.265650 | orchestrator | changed: [testbed-node-3]
2026-03-25 04:35:33.265671 | orchestrator | changed: [testbed-node-4]
2026-03-25 04:35:33.265691 | orchestrator | changed: [testbed-node-0]
2026-03-25 04:35:33.265713 | orchestrator | changed: [testbed-node-5]
2026-03-25 04:35:33.265771 | orchestrator | changed: [testbed-node-1]
2026-03-25 04:35:33.265794 | orchestrator | changed: [testbed-node-2]
2026-03-25 04:35:33.265816 | orchestrator |
2026-03-25 04:35:33.265838 | orchestrator | PLAY RECAP *********************************************************************
2026-03-25 04:35:33.265861 | orchestrator | testbed-manager : ok=22  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-25 04:35:33.265885 | orchestrator | testbed-node-0 : ok=18  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-25 04:35:33.265907 | orchestrator | testbed-node-1 : ok=18  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-25 04:35:33.265928 | orchestrator | testbed-node-2 : ok=18  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-25 04:35:33.265947 | orchestrator | testbed-node-3 : ok=18  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-25 04:35:33.265966 | orchestrator | testbed-node-4 : ok=18  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-25 04:35:33.265984 | orchestrator | testbed-node-5 : ok=18  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-25 04:35:33.266002 | orchestrator |
2026-03-25 04:35:33.266108 | orchestrator |
2026-03-25 04:35:33.266131 | orchestrator | TASKS RECAP ********************************************************************
2026-03-25 04:35:33.266257 | orchestrator | Wednesday 25 March 2026 04:35:32 +0000 (0:00:12.190) 0:02:53.607 *******
2026-03-25 04:35:33.266281 | orchestrator | ===============================================================================
2026-03-25 04:35:33.266387 | orchestrator | common : Restart fluentd container ------------------------------------- 39.32s
2026-03-25 04:35:33.266411 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 39.12s
2026-03-25 04:35:33.266428 | orchestrator | common : Restart cron container ---------------------------------------- 12.19s
2026-03-25 04:35:33.266439 | orchestrator | common : Copying over config.json files for services -------------------- 4.68s
2026-03-25 04:35:33.266449 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 4.59s
2026-03-25 04:35:33.266460 | orchestrator | service-check-containers : common | Check containers -------------------- 4.59s
2026-03-25 04:35:33.266471 | orchestrator | common : Flush handlers ------------------------------------------------- 4.03s
2026-03-25 04:35:33.266481 | orchestrator | common : include_tasks -------------------------------------------------- 3.88s
2026-03-25 04:35:33.266492 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 3.85s
2026-03-25 04:35:33.266502 | orchestrator | common : Ensuring config directories exist ------------------------------ 3.83s
2026-03-25 04:35:33.266512 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 3.53s
2026-03-25 04:35:33.266523 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 3.30s
2026-03-25 04:35:33.266535 | orchestrator | common : Copying over cron logrotate config file ------------------------ 3.20s
2026-03-25 04:35:33.266553 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 3.19s
2026-03-25 04:35:33.266569 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 3.14s
2026-03-25 04:35:33.266588 | orchestrator | common : include_tasks -------------------------------------------------- 3.12s
2026-03-25 04:35:33.266605 | orchestrator | common : Copying over kolla.target -------------------------------------- 3.11s
2026-03-25 04:35:33.266650 | orchestrator | service-check-containers : Include tasks -------------------------------- 3.10s
2026-03-25 04:35:33.266670 | orchestrator | common : Initializing toolbox container using normal user --------------- 3.03s
2026-03-25 04:35:33.266690 | orchestrator | common : Ensuring config directories have correct owner and permission --- 2.90s
2026-03-25 04:35:33.597486 | orchestrator | + osism apply -a upgrade loadbalancer
2026-03-25 04:35:35.639491 | orchestrator | 2026-03-25 04:35:35 | INFO  | Task 2f3d5dcc-9b61-4f7e-b006-c62c604d7cc9 (loadbalancer) was prepared for execution.
2026-03-25 04:35:35.639557 | orchestrator | 2026-03-25 04:35:35 | INFO  | It takes a moment until task 2f3d5dcc-9b61-4f7e-b006-c62c604d7cc9 (loadbalancer) has been started and output is visible here.
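The PLAY RECAP lines above report per-host counters (`ok`, `changed`, `failed`, `unreachable`, ...). When triaging a log like this one, those lines can be parsed mechanically to flag hosts that need attention. The following is an illustrative helper, not part of the job itself; `parse_recap_line` and `failed_hosts` are hypothetical names.

```python
import re

# Matches Ansible PLAY RECAP lines such as:
# "testbed-node-0 : ok=18  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0"
RECAP_RE = re.compile(r"^(?P<host>\S+)\s*:\s*(?P<counters>(?:\w+=\d+\s*)+)$")


def parse_recap_line(line):
    """Return (host, counters_dict) for a recap line, or None if it does not match."""
    m = RECAP_RE.match(line.strip())
    if not m:
        return None
    counters = {k: int(v) for k, v in (pair.split("=") for pair in m.group("counters").split())}
    return m.group("host"), counters


def failed_hosts(lines):
    """Hosts whose recap shows failed or unreachable tasks."""
    bad = []
    for line in lines:
        parsed = parse_recap_line(line)
        if parsed and (parsed[1].get("failed", 0) or parsed[1].get("unreachable", 0)):
            bad.append(parsed[0])
    return bad
```

In the recap above every host reports `failed=0 unreachable=0`, so a helper like this would return an empty list and the run can be considered clean.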
2026-03-25 04:36:10.427503 | orchestrator |
2026-03-25 04:36:10.427654 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-25 04:36:10.427671 | orchestrator |
2026-03-25 04:36:10.427684 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-25 04:36:10.427695 | orchestrator | Wednesday 25 March 2026 04:35:41 +0000 (0:00:01.677) 0:00:01.677 *******
2026-03-25 04:36:10.427706 | orchestrator | ok: [testbed-node-0]
2026-03-25 04:36:10.427719 | orchestrator | ok: [testbed-node-1]
2026-03-25 04:36:10.427730 | orchestrator | ok: [testbed-node-2]
2026-03-25 04:36:10.427742 | orchestrator |
2026-03-25 04:36:10.427753 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-25 04:36:10.427764 | orchestrator | Wednesday 25 March 2026 04:35:43 +0000 (0:00:01.796) 0:00:03.473 *******
2026-03-25 04:36:10.427776 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True)
2026-03-25 04:36:10.427787 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True)
2026-03-25 04:36:10.427798 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True)
2026-03-25 04:36:10.427809 | orchestrator |
2026-03-25 04:36:10.427820 | orchestrator | PLAY [Apply role loadbalancer] *************************************************
2026-03-25 04:36:10.427835 | orchestrator |
2026-03-25 04:36:10.427855 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2026-03-25 04:36:10.427877 | orchestrator | Wednesday 25 March 2026 04:35:45 +0000 (0:00:02.084) 0:00:05.557 *******
2026-03-25 04:36:10.427899 | orchestrator | included: /ansible/roles/loadbalancer/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-25 04:36:10.427922 | orchestrator |
2026-03-25 04:36:10.427944 | orchestrator | TASK [loadbalancer : Stop and remove containers for haproxy exporter containers] ***
2026-03-25 04:36:10.427964 | orchestrator | Wednesday 25 March 2026 04:35:47 +0000 (0:00:01.985) 0:00:07.543 *******
2026-03-25 04:36:10.427978 | orchestrator | ok: [testbed-node-1]
2026-03-25 04:36:10.427992 | orchestrator | ok: [testbed-node-0]
2026-03-25 04:36:10.428004 | orchestrator | ok: [testbed-node-2]
2026-03-25 04:36:10.428016 | orchestrator |
2026-03-25 04:36:10.428028 | orchestrator | TASK [loadbalancer : Removing config for haproxy exporter] *********************
2026-03-25 04:36:10.428041 | orchestrator | Wednesday 25 March 2026 04:35:49 +0000 (0:00:02.255) 0:00:09.799 *******
2026-03-25 04:36:10.428053 | orchestrator | ok: [testbed-node-1]
2026-03-25 04:36:10.428066 | orchestrator | ok: [testbed-node-0]
2026-03-25 04:36:10.428078 | orchestrator | ok: [testbed-node-2]
2026-03-25 04:36:10.428090 | orchestrator |
2026-03-25 04:36:10.428102 | orchestrator | TASK [loadbalancer : Check IPv6 support] ***************************************
2026-03-25 04:36:10.428115 | orchestrator | Wednesday 25 March 2026 04:35:52 +0000 (0:00:02.376) 0:00:12.176 *******
2026-03-25 04:36:10.428127 | orchestrator | ok: [testbed-node-1]
2026-03-25 04:36:10.428139 | orchestrator | ok: [testbed-node-0]
2026-03-25 04:36:10.428150 | orchestrator | ok: [testbed-node-2]
2026-03-25 04:36:10.428161 | orchestrator |
2026-03-25 04:36:10.428172 | orchestrator | TASK [Setting sysctl values] ***************************************************
2026-03-25 04:36:10.428204 | orchestrator | Wednesday 25 March 2026 04:35:54 +0000 (0:00:01.978) 0:00:14.155 *******
2026-03-25 04:36:10.428216 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-25 04:36:10.428227 | orchestrator |
2026-03-25 04:36:10.428238 | orchestrator | TASK [sysctl : Check IPv6 support] *********************************************
2026-03-25 04:36:10.428249 | orchestrator | Wednesday 25 March 2026 04:35:56 +0000 (0:00:02.082) 0:00:16.238 *******
2026-03-25 04:36:10.428309 | orchestrator | ok: [testbed-node-0]
2026-03-25 04:36:10.428321 | orchestrator | ok: [testbed-node-1]
2026-03-25 04:36:10.428332 | orchestrator | ok: [testbed-node-2]
2026-03-25 04:36:10.428342 | orchestrator |
2026-03-25 04:36:10.428353 | orchestrator | TASK [sysctl : Setting sysctl values] ******************************************
2026-03-25 04:36:10.428364 | orchestrator | Wednesday 25 March 2026 04:35:58 +0000 (0:00:01.861) 0:00:18.099 *******
2026-03-25 04:36:10.428376 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-03-25 04:36:10.428387 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-03-25 04:36:10.428397 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-03-25 04:36:10.428408 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-03-25 04:36:10.428418 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-03-25 04:36:10.428429 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-03-25 04:36:10.428440 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-03-25 04:36:10.428452 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-03-25 04:36:10.428462 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-03-25 04:36:10.428473 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-03-25 04:36:10.428484 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-03-25 04:36:10.428494 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
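The sysctl task above loops over a fixed list of kernel keys, with the sentinel `'KOLLA_UNSET'` apparently marking keys that should be left untouched (that reading of the sentinel is an assumption based on the log output, not confirmed here). A minimal sketch of rendering the same item list into a sysctl.conf-style fragment, skipping sentinel entries; `render_sysctl_conf` is a hypothetical helper:

```python
# Item list copied from the loop output above; the KOLLA_UNSET handling is an
# assumption about kolla-ansible's behaviour, not taken from its source.
SYSCTL_ITEMS = [
    {"name": "net.ipv6.ip_nonlocal_bind", "value": 1},
    {"name": "net.ipv4.ip_nonlocal_bind", "value": 1},
    {"name": "net.ipv4.tcp_retries2", "value": "KOLLA_UNSET"},
    {"name": "net.unix.max_dgram_qlen", "value": 128},
]


def render_sysctl_conf(items):
    """Render 'key = value' lines for a sysctl.conf fragment, skipping sentinel entries."""
    lines = [f"{i['name']} = {i['value']}" for i in items if i["value"] != "KOLLA_UNSET"]
    return "\n".join(lines) + "\n"
```

The `ip_nonlocal_bind` keys let keepalived/haproxy bind to the VIP before it is assigned to the local interface, which is why they appear in a loadbalancer upgrade run.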
2026-03-25 04:36:10.428505 | orchestrator | 2026-03-25 04:36:10.428516 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-03-25 04:36:10.428526 | orchestrator | Wednesday 25 March 2026 04:36:01 +0000 (0:00:03.375) 0:00:21.474 ******* 2026-03-25 04:36:10.428537 | orchestrator | ok: [testbed-node-1] => (item=ip_vs) 2026-03-25 04:36:10.428548 | orchestrator | ok: [testbed-node-0] => (item=ip_vs) 2026-03-25 04:36:10.428559 | orchestrator | ok: [testbed-node-2] => (item=ip_vs) 2026-03-25 04:36:10.428570 | orchestrator | 2026-03-25 04:36:10.428581 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-03-25 04:36:10.428614 | orchestrator | Wednesday 25 March 2026 04:36:03 +0000 (0:00:01.961) 0:00:23.436 ******* 2026-03-25 04:36:10.428626 | orchestrator | ok: [testbed-node-1] => (item=ip_vs) 2026-03-25 04:36:10.428637 | orchestrator | ok: [testbed-node-0] => (item=ip_vs) 2026-03-25 04:36:10.428647 | orchestrator | ok: [testbed-node-2] => (item=ip_vs) 2026-03-25 04:36:10.428658 | orchestrator | 2026-03-25 04:36:10.428669 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-03-25 04:36:10.428679 | orchestrator | Wednesday 25 March 2026 04:36:05 +0000 (0:00:02.234) 0:00:25.671 ******* 2026-03-25 04:36:10.428690 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)  2026-03-25 04:36:10.428701 | orchestrator | skipping: [testbed-node-0] 2026-03-25 04:36:10.428712 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)  2026-03-25 04:36:10.428722 | orchestrator | skipping: [testbed-node-1] 2026-03-25 04:36:10.428733 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)  2026-03-25 04:36:10.428744 | orchestrator | skipping: [testbed-node-2] 2026-03-25 04:36:10.428754 | orchestrator | 2026-03-25 04:36:10.428765 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************ 
2026-03-25 04:36:10.428776 | orchestrator | Wednesday 25 March 2026 04:36:07 +0000 (0:00:01.852) 0:00:27.523 ******* 2026-03-25 04:36:10.428791 | orchestrator | ok: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-25 04:36:10.428825 | orchestrator | ok: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-25 04:36:10.428837 | orchestrator | ok: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-25 04:36:10.428848 | orchestrator | ok: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-25 04:36:10.428860 | orchestrator | ok: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-25 04:36:10.428879 | orchestrator | ok: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-25 04:36:21.427944 | orchestrator | ok: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-25 04:36:21.428066 | orchestrator | ok: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-25 04:36:21.428089 | orchestrator | ok: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-25 04:36:21.428097 | orchestrator | 2026-03-25 04:36:21.428106 | 
orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2026-03-25 04:36:21.428114 | orchestrator | Wednesday 25 March 2026 04:36:10 +0000 (0:00:02.746) 0:00:30.270 ******* 2026-03-25 04:36:21.428121 | orchestrator | ok: [testbed-node-0] 2026-03-25 04:36:21.428129 | orchestrator | ok: [testbed-node-1] 2026-03-25 04:36:21.428135 | orchestrator | ok: [testbed-node-2] 2026-03-25 04:36:21.428142 | orchestrator | 2026-03-25 04:36:21.428149 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2026-03-25 04:36:21.428156 | orchestrator | Wednesday 25 March 2026 04:36:12 +0000 (0:00:01.973) 0:00:32.244 ******* 2026-03-25 04:36:21.428162 | orchestrator | ok: [testbed-node-0] => (item=users) 2026-03-25 04:36:21.428170 | orchestrator | ok: [testbed-node-1] => (item=users) 2026-03-25 04:36:21.428177 | orchestrator | ok: [testbed-node-2] => (item=users) 2026-03-25 04:36:21.428183 | orchestrator | ok: [testbed-node-0] => (item=rules) 2026-03-25 04:36:21.428190 | orchestrator | ok: [testbed-node-1] => (item=rules) 2026-03-25 04:36:21.428196 | orchestrator | ok: [testbed-node-2] => (item=rules) 2026-03-25 04:36:21.428203 | orchestrator | 2026-03-25 04:36:21.428210 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2026-03-25 04:36:21.428216 | orchestrator | Wednesday 25 March 2026 04:36:15 +0000 (0:00:02.833) 0:00:35.078 ******* 2026-03-25 04:36:21.428223 | orchestrator | ok: [testbed-node-0] 2026-03-25 04:36:21.428229 | orchestrator | ok: [testbed-node-1] 2026-03-25 04:36:21.428236 | orchestrator | ok: [testbed-node-2] 2026-03-25 04:36:21.428242 | orchestrator | 2026-03-25 04:36:21.428249 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2026-03-25 04:36:21.428256 | orchestrator | Wednesday 25 March 2026 04:36:17 +0000 (0:00:02.287) 0:00:37.365 ******* 2026-03-25 04:36:21.428262 | orchestrator | ok: 
[testbed-node-0] 2026-03-25 04:36:21.428269 | orchestrator | ok: [testbed-node-1] 2026-03-25 04:36:21.428275 | orchestrator | ok: [testbed-node-2] 2026-03-25 04:36:21.428282 | orchestrator | 2026-03-25 04:36:21.428350 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2026-03-25 04:36:21.428358 | orchestrator | Wednesday 25 March 2026 04:36:19 +0000 (0:00:02.225) 0:00:39.590 ******* 2026-03-25 04:36:21.428376 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-25 04:36:21.428405 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-25 04:36:21.428413 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-25 04:36:21.428426 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20251208', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__85014f3693ea4e3b2b50bfb10c9e5c2a581cb31b', '__omit_place_holder__85014f3693ea4e3b2b50bfb10c9e5c2a581cb31b'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-25 04:36:21.428434 | orchestrator | skipping: [testbed-node-0] 2026-03-25 04:36:21.428441 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-25 04:36:21.428449 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 
'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-25 04:36:21.428456 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-25 04:36:21.428468 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20251208', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__85014f3693ea4e3b2b50bfb10c9e5c2a581cb31b', '__omit_place_holder__85014f3693ea4e3b2b50bfb10c9e5c2a581cb31b'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-25 04:36:21.428475 | orchestrator | skipping: [testbed-node-1] 2026-03-25 
04:36:21.428489 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-25 04:36:25.515092 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-25 04:36:25.515230 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-25 04:36:25.515259 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20251208', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__85014f3693ea4e3b2b50bfb10c9e5c2a581cb31b', '__omit_place_holder__85014f3693ea4e3b2b50bfb10c9e5c2a581cb31b'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-25 04:36:25.515283 | orchestrator | skipping: [testbed-node-2] 2026-03-25 04:36:25.515371 | orchestrator | 2026-03-25 04:36:25.515393 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2026-03-25 04:36:25.515406 | orchestrator | Wednesday 25 March 2026 04:36:21 +0000 (0:00:01.683) 0:00:41.274 ******* 2026-03-25 04:36:25.515418 | orchestrator | ok: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-25 04:36:25.515472 | orchestrator | ok: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': 
True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-25 04:36:25.515485 | orchestrator | ok: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-25 04:36:25.515517 | orchestrator | ok: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-25 04:36:25.515534 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-25 04:36:25.515546 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20251208', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__85014f3693ea4e3b2b50bfb10c9e5c2a581cb31b', '__omit_place_holder__85014f3693ea4e3b2b50bfb10c9e5c2a581cb31b'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-25 04:36:25.515558 | orchestrator | ok: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-25 04:36:25.515577 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-25 04:36:25.515588 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20251208', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__85014f3693ea4e3b2b50bfb10c9e5c2a581cb31b', '__omit_place_holder__85014f3693ea4e3b2b50bfb10c9e5c2a581cb31b'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-25 04:36:25.515608 | orchestrator | ok: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-25 04:36:39.378898 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-25 04:36:39.379016 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20251208', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__85014f3693ea4e3b2b50bfb10c9e5c2a581cb31b', '__omit_place_holder__85014f3693ea4e3b2b50bfb10c9e5c2a581cb31b'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-25 04:36:39.379033 | orchestrator | 2026-03-25 04:36:39.379046 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2026-03-25 04:36:39.379059 | orchestrator | Wednesday 25 March 2026 04:36:25 +0000 (0:00:04.088) 0:00:45.363 ******* 2026-03-25 04:36:39.379071 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-25 04:36:39.379106 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-25 04:36:39.379119 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-25 04:36:39.379130 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-25 04:36:39.379166 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-25 04:36:39.379179 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-25 04:36:39.379191 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-25 04:36:39.379211 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-25 04:36:39.379223 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-25 04:36:39.379234 | orchestrator | 2026-03-25 04:36:39.379245 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2026-03-25 04:36:39.379256 | orchestrator | Wednesday 25 March 2026 04:36:30 +0000 (0:00:04.850) 0:00:50.214 ******* 2026-03-25 04:36:39.379267 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-03-25 04:36:39.379279 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-03-25 04:36:39.379290 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-03-25 
04:36:39.379301 | orchestrator | 2026-03-25 04:36:39.379312 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2026-03-25 04:36:39.379322 | orchestrator | Wednesday 25 March 2026 04:36:33 +0000 (0:00:02.715) 0:00:52.930 ******* 2026-03-25 04:36:39.379356 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-03-25 04:36:39.379368 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-03-25 04:36:39.379380 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-03-25 04:36:39.379399 | orchestrator | 2026-03-25 04:36:39.379417 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2026-03-25 04:36:39.379436 | orchestrator | Wednesday 25 March 2026 04:36:37 +0000 (0:00:04.349) 0:00:57.280 ******* 2026-03-25 04:36:39.379458 | orchestrator | skipping: [testbed-node-0] 2026-03-25 04:36:39.379478 | orchestrator | skipping: [testbed-node-1] 2026-03-25 04:36:39.379504 | orchestrator | skipping: [testbed-node-2] 2026-03-25 04:37:00.039892 | orchestrator | 2026-03-25 04:37:00.040020 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2026-03-25 04:37:00.040044 | orchestrator | Wednesday 25 March 2026 04:36:39 +0000 (0:00:01.942) 0:00:59.223 ******* 2026-03-25 04:37:00.040056 | orchestrator | ok: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-03-25 04:37:00.040068 | orchestrator | ok: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-03-25 04:37:00.040098 | orchestrator | ok: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-03-25 04:37:00.040134 | 
orchestrator | 2026-03-25 04:37:00.040146 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2026-03-25 04:37:00.040158 | orchestrator | Wednesday 25 March 2026 04:36:42 +0000 (0:00:03.071) 0:01:02.295 ******* 2026-03-25 04:37:00.040168 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-03-25 04:37:00.040181 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-03-25 04:37:00.040192 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-03-25 04:37:00.040203 | orchestrator | 2026-03-25 04:37:00.040214 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2026-03-25 04:37:00.040224 | orchestrator | Wednesday 25 March 2026 04:36:45 +0000 (0:00:02.749) 0:01:05.044 ******* 2026-03-25 04:37:00.040235 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-25 04:37:00.040246 | orchestrator | 2026-03-25 04:37:00.040256 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2026-03-25 04:37:00.040268 | orchestrator | Wednesday 25 March 2026 04:36:47 +0000 (0:00:02.014) 0:01:07.058 ******* 2026-03-25 04:37:00.040280 | orchestrator | ok: [testbed-node-0] => (item=haproxy.pem) 2026-03-25 04:37:00.040293 | orchestrator | ok: [testbed-node-1] => (item=haproxy.pem) 2026-03-25 04:37:00.040304 | orchestrator | ok: [testbed-node-2] => (item=haproxy.pem) 2026-03-25 04:37:00.040315 | orchestrator | 2026-03-25 04:37:00.040326 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2026-03-25 04:37:00.040338 | orchestrator | Wednesday 25 March 2026 04:36:49 +0000 (0:00:02.618) 0:01:09.677 ******* 2026-03-25 04:37:00.040349 | 
orchestrator | ok: [testbed-node-0] => (item=haproxy-internal.pem) 2026-03-25 04:37:00.040361 | orchestrator | ok: [testbed-node-1] => (item=haproxy-internal.pem) 2026-03-25 04:37:00.040373 | orchestrator | ok: [testbed-node-2] => (item=haproxy-internal.pem) 2026-03-25 04:37:00.040408 | orchestrator | 2026-03-25 04:37:00.040415 | orchestrator | TASK [loadbalancer : Copying over proxysql-cert.pem] *************************** 2026-03-25 04:37:00.040423 | orchestrator | Wednesday 25 March 2026 04:36:52 +0000 (0:00:02.636) 0:01:12.314 ******* 2026-03-25 04:37:00.040431 | orchestrator | skipping: [testbed-node-0] 2026-03-25 04:37:00.040441 | orchestrator | skipping: [testbed-node-1] 2026-03-25 04:37:00.040450 | orchestrator | skipping: [testbed-node-2] 2026-03-25 04:37:00.040457 | orchestrator | 2026-03-25 04:37:00.040466 | orchestrator | TASK [loadbalancer : Copying over proxysql-key.pem] **************************** 2026-03-25 04:37:00.040474 | orchestrator | Wednesday 25 March 2026 04:36:53 +0000 (0:00:01.349) 0:01:13.663 ******* 2026-03-25 04:37:00.040482 | orchestrator | skipping: [testbed-node-0] 2026-03-25 04:37:00.040489 | orchestrator | skipping: [testbed-node-1] 2026-03-25 04:37:00.040497 | orchestrator | skipping: [testbed-node-2] 2026-03-25 04:37:00.040505 | orchestrator | 2026-03-25 04:37:00.040513 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-03-25 04:37:00.040521 | orchestrator | Wednesday 25 March 2026 04:36:55 +0000 (0:00:02.045) 0:01:15.709 ******* 2026-03-25 04:37:00.040532 | orchestrator | ok: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-25 04:37:00.040543 | orchestrator | ok: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-25 04:37:00.040586 | orchestrator | ok: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-25 04:37:00.040596 | orchestrator | ok: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-25 04:37:00.040604 | orchestrator | ok: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-25 04:37:00.040613 | orchestrator | ok: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-25 04:37:00.040622 | orchestrator | ok: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-25 04:37:00.040631 | orchestrator | ok: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-25 04:37:00.040650 | orchestrator | ok: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-25 04:37:04.338525 | orchestrator | 2026-03-25 04:37:04.338639 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-03-25 04:37:04.338657 | orchestrator | Wednesday 25 March 2026 04:37:00 +0000 (0:00:04.171) 0:01:19.881 ******* 2026-03-25 04:37:04.338689 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': 
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-25 04:37:04.338706 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-25 04:37:04.338719 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-25 04:37:04.338731 | orchestrator | skipping: [testbed-node-0] 2026-03-25 04:37:04.338744 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 
'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-25 04:37:04.338755 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-25 04:37:04.338796 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-25 04:37:04.338824 | orchestrator | skipping: [testbed-node-1] 2026-03-25 04:37:04.338881 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-25 04:37:04.338901 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-25 04:37:04.338920 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-25 04:37:04.338938 | orchestrator | skipping: [testbed-node-2] 2026-03-25 04:37:04.338954 | orchestrator | 2026-03-25 04:37:04.338969 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 
2026-03-25 04:37:04.338985 | orchestrator | Wednesday 25 March 2026 04:37:02 +0000 (0:00:02.047) 0:01:21.929 ******* 2026-03-25 04:37:04.339003 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-25 04:37:04.339022 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-25 04:37:04.339056 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-25 04:37:04.339075 | orchestrator | skipping: [testbed-node-0] 2026-03-25 04:37:04.339109 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-25 04:37:15.640057 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-25 04:37:15.640176 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-25 04:37:15.640194 | orchestrator | skipping: [testbed-node-1] 2026-03-25 04:37:15.640209 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-25 04:37:15.640222 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-25 04:37:15.640256 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-25 04:37:15.640268 | orchestrator | skipping: [testbed-node-2] 2026-03-25 04:37:15.640279 | orchestrator | 2026-03-25 04:37:15.640291 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2026-03-25 04:37:15.640304 | orchestrator | Wednesday 25 March 2026 04:37:04 +0000 (0:00:02.257) 0:01:24.187 ******* 2026-03-25 04:37:15.640315 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-03-25 04:37:15.640327 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-03-25 04:37:15.640338 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-03-25 04:37:15.640349 | orchestrator | 2026-03-25 04:37:15.640360 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2026-03-25 04:37:15.640371 | orchestrator | Wednesday 25 March 2026 04:37:06 +0000 (0:00:02.430) 0:01:26.618 ******* 2026-03-25 04:37:15.640381 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-03-25 04:37:15.640392 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-03-25 04:37:15.640403 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-03-25 04:37:15.640482 | orchestrator | 2026-03-25 04:37:15.640519 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2026-03-25 04:37:15.640531 | orchestrator | Wednesday 25 March 2026 04:37:09 +0000 (0:00:02.437) 0:01:29.056 ******* 2026-03-25 04:37:15.640542 | orchestrator | skipping: [testbed-node-0] => (item={'src': 
'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-03-25 04:37:15.640553 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-03-25 04:37:15.640564 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-25 04:37:15.640574 | orchestrator | skipping: [testbed-node-0] 2026-03-25 04:37:15.640585 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-03-25 04:37:15.640595 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-25 04:37:15.640606 | orchestrator | skipping: [testbed-node-1] 2026-03-25 04:37:15.640616 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-25 04:37:15.640627 | orchestrator | skipping: [testbed-node-2] 2026-03-25 04:37:15.640638 | orchestrator | 2026-03-25 04:37:15.640648 | orchestrator | TASK [service-check-containers : loadbalancer | Check containers] ************** 2026-03-25 04:37:15.640659 | orchestrator | Wednesday 25 March 2026 04:37:11 +0000 (0:00:02.503) 0:01:31.559 ******* 2026-03-25 04:37:15.640670 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-25 04:37:15.640691 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-25 04:37:15.640702 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-25 04:37:15.640714 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 
'timeout': '30'}}}) 2026-03-25 04:37:15.640734 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-25 04:37:19.443818 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-25 04:37:19.443947 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-25 04:37:19.444008 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-25 04:37:19.444051 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-25 04:37:19.444070 | orchestrator | 2026-03-25 04:37:19.444091 | orchestrator | TASK [service-check-containers : loadbalancer | Notify handlers to restart containers] *** 2026-03-25 04:37:19.444110 | orchestrator | Wednesday 25 March 2026 04:37:15 +0000 (0:00:03.928) 0:01:35.487 ******* 2026-03-25 04:37:19.444130 | orchestrator | changed: [testbed-node-0] => { 2026-03-25 04:37:19.444149 | orchestrator |  "msg": "Notifying handlers" 2026-03-25 04:37:19.444168 | orchestrator | } 2026-03-25 04:37:19.444186 | orchestrator | changed: [testbed-node-1] => { 2026-03-25 04:37:19.444203 | orchestrator |  "msg": "Notifying handlers" 2026-03-25 04:37:19.444221 | orchestrator | } 2026-03-25 04:37:19.444239 | orchestrator | changed: [testbed-node-2] => { 2026-03-25 04:37:19.444257 | orchestrator |  "msg": "Notifying handlers" 2026-03-25 04:37:19.444275 | orchestrator | } 2026-03-25 
04:37:19.444292 | orchestrator | 2026-03-25 04:37:19.444310 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-03-25 04:37:19.444329 | orchestrator | Wednesday 25 March 2026 04:37:17 +0000 (0:00:01.496) 0:01:36.983 ******* 2026-03-25 04:37:19.444349 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-25 04:37:19.444402 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-25 04:37:19.444455 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-25 04:37:19.444493 | orchestrator | skipping: [testbed-node-0] 2026-03-25 04:37:19.444515 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-25 04:37:19.444535 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-25 04:37:19.444571 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 
'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-25 04:37:19.444591 | orchestrator | skipping: [testbed-node-1] 2026-03-25 04:37:19.444611 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-25 04:37:19.444633 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-25 04:37:19.444676 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-25 04:37:25.028199 | orchestrator | skipping: [testbed-node-2] 2026-03-25 04:37:25.028294 | orchestrator | 2026-03-25 04:37:25.028305 | orchestrator | TASK [include_role : aodh] ***************************************************** 2026-03-25 04:37:25.028315 | orchestrator | Wednesday 25 March 2026 04:37:19 +0000 (0:00:02.301) 0:01:39.285 ******* 2026-03-25 04:37:25.028322 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-25 04:37:25.028329 | orchestrator | 2026-03-25 04:37:25.028337 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2026-03-25 04:37:25.028344 | orchestrator | Wednesday 25 March 2026 04:37:21 +0000 (0:00:02.098) 0:01:41.383 ******* 2026-03-25 04:37:25.028356 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-25 04:37:25.028368 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-25 04:37:25.028377 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-25 04:37:25.028385 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-25 04:37:25.028421 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-25 04:37:25.028494 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-25 04:37:25.028504 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-25 04:37:25.028512 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-25 04:37:25.028520 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-25 04:37:25.028528 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-25 04:37:25.028551 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-25 04:37:26.743844 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-25 04:37:26.743953 | orchestrator | 2026-03-25 04:37:26.743970 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2026-03-25 04:37:26.743983 | orchestrator | Wednesday 25 March 2026 04:37:26 +0000 (0:00:04.580) 0:01:45.964 ******* 2026-03-25 04:37:26.743997 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-03-25 04:37:26.744013 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 
'timeout': '30'}}})  2026-03-25 04:37:26.744026 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-25 04:37:26.744179 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-25 04:37:26.744196 | orchestrator | skipping: [testbed-node-0] 2026-03-25 04:37:26.744229 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-03-25 04:37:26.744242 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-25 04:37:26.744254 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-25 04:37:26.744265 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20251208', 'volumes': 
['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-25 04:37:26.744276 | orchestrator | skipping: [testbed-node-1] 2026-03-25 04:37:26.744288 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-03-25 04:37:26.744314 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-25 04:37:26.744334 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-25 04:37:41.512372 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-25 04:37:41.512514 | orchestrator | skipping: [testbed-node-2] 2026-03-25 04:37:41.512535 | orchestrator | 2026-03-25 04:37:41.512548 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2026-03-25 04:37:41.512560 | orchestrator | Wednesday 25 March 2026 04:37:27 +0000 (0:00:01.734) 0:01:47.698 ******* 2026-03-25 04:37:41.512572 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option 
httpchk']}})  2026-03-25 04:37:41.512587 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-03-25 04:37:41.512600 | orchestrator | skipping: [testbed-node-0] 2026-03-25 04:37:41.512611 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-03-25 04:37:41.512622 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-03-25 04:37:41.512633 | orchestrator | skipping: [testbed-node-1] 2026-03-25 04:37:41.512669 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-03-25 04:37:41.512681 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-03-25 04:37:41.512692 | orchestrator | skipping: [testbed-node-2] 2026-03-25 04:37:41.512703 | orchestrator | 2026-03-25 04:37:41.512715 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2026-03-25 04:37:41.512726 | orchestrator | Wednesday 25 March 2026 04:37:30 +0000 (0:00:02.296) 0:01:49.995 ******* 2026-03-25 04:37:41.512736 | orchestrator | ok: [testbed-node-0] 2026-03-25 04:37:41.512748 | 
orchestrator | ok: [testbed-node-1] 2026-03-25 04:37:41.512759 | orchestrator | ok: [testbed-node-2] 2026-03-25 04:37:41.512770 | orchestrator | 2026-03-25 04:37:41.512781 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2026-03-25 04:37:41.512792 | orchestrator | Wednesday 25 March 2026 04:37:32 +0000 (0:00:02.231) 0:01:52.226 ******* 2026-03-25 04:37:41.512803 | orchestrator | ok: [testbed-node-0] 2026-03-25 04:37:41.512813 | orchestrator | ok: [testbed-node-1] 2026-03-25 04:37:41.512824 | orchestrator | ok: [testbed-node-2] 2026-03-25 04:37:41.512834 | orchestrator | 2026-03-25 04:37:41.512860 | orchestrator | TASK [include_role : barbican] ************************************************* 2026-03-25 04:37:41.512871 | orchestrator | Wednesday 25 March 2026 04:37:35 +0000 (0:00:02.792) 0:01:55.019 ******* 2026-03-25 04:37:41.512882 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-25 04:37:41.512893 | orchestrator | 2026-03-25 04:37:41.512905 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2026-03-25 04:37:41.512917 | orchestrator | Wednesday 25 March 2026 04:37:36 +0000 (0:00:01.686) 0:01:56.705 ******* 2026-03-25 04:37:41.512954 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-25 04:37:41.512971 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-25 04:37:41.512986 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-25 04:37:41.513007 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-25 04:37:41.513026 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-25 04:37:41.513039 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-25 04:37:41.513061 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-25 04:37:43.158369 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-25 04:37:43.158549 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-25 04:37:43.158571 | orchestrator | 2026-03-25 04:37:43.158584 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2026-03-25 04:37:43.158596 | orchestrator | Wednesday 25 March 2026 04:37:41 +0000 (0:00:04.651) 0:02:01.357 ******* 2026-03-25 04:37:43.158627 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-25 04:37:43.158642 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-25 04:37:43.158654 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-25 04:37:43.158666 | orchestrator | skipping: [testbed-node-0] 2026-03-25 04:37:43.158712 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-25 04:37:43.158734 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-25 04:37:43.158762 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-25 04:37:43.158781 | orchestrator | skipping: [testbed-node-1] 2026-03-25 04:37:43.158801 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-25 04:37:43.158824 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 
5672'], 'timeout': '30'}}})  2026-03-25 04:37:43.158864 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-25 04:37:59.647959 | orchestrator | skipping: [testbed-node-2] 2026-03-25 04:37:59.648097 | orchestrator | 2026-03-25 04:37:59.648116 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2026-03-25 04:37:59.648127 | orchestrator | Wednesday 25 March 2026 04:37:43 +0000 (0:00:01.648) 0:02:03.005 ******* 2026-03-25 04:37:59.648139 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-03-25 04:37:59.648153 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-03-25 04:37:59.648165 | orchestrator | skipping: [testbed-node-0] 2026-03-25 04:37:59.648175 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-03-25 
04:37:59.648185 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-03-25 04:37:59.648195 | orchestrator | skipping: [testbed-node-1] 2026-03-25 04:37:59.648205 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-03-25 04:37:59.648216 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-03-25 04:37:59.648226 | orchestrator | skipping: [testbed-node-2] 2026-03-25 04:37:59.648236 | orchestrator | 2026-03-25 04:37:59.648246 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2026-03-25 04:37:59.648256 | orchestrator | Wednesday 25 March 2026 04:37:45 +0000 (0:00:01.898) 0:02:04.904 ******* 2026-03-25 04:37:59.648266 | orchestrator | ok: [testbed-node-0] 2026-03-25 04:37:59.648277 | orchestrator | ok: [testbed-node-1] 2026-03-25 04:37:59.648287 | orchestrator | ok: [testbed-node-2] 2026-03-25 04:37:59.648296 | orchestrator | 2026-03-25 04:37:59.648306 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2026-03-25 04:37:59.648316 | orchestrator | Wednesday 25 March 2026 04:37:47 +0000 (0:00:02.264) 0:02:07.168 ******* 2026-03-25 04:37:59.648326 | orchestrator | ok: [testbed-node-0] 2026-03-25 04:37:59.648335 | orchestrator | ok: [testbed-node-1] 2026-03-25 04:37:59.648345 | orchestrator | ok: 
[testbed-node-2] 2026-03-25 04:37:59.648355 | orchestrator | 2026-03-25 04:37:59.648364 | orchestrator | TASK [include_role : blazar] *************************************************** 2026-03-25 04:37:59.648396 | orchestrator | Wednesday 25 March 2026 04:37:50 +0000 (0:00:02.919) 0:02:10.088 ******* 2026-03-25 04:37:59.648407 | orchestrator | skipping: [testbed-node-0] 2026-03-25 04:37:59.648416 | orchestrator | skipping: [testbed-node-1] 2026-03-25 04:37:59.648426 | orchestrator | skipping: [testbed-node-2] 2026-03-25 04:37:59.648436 | orchestrator | 2026-03-25 04:37:59.648445 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2026-03-25 04:37:59.648455 | orchestrator | Wednesday 25 March 2026 04:37:51 +0000 (0:00:01.369) 0:02:11.458 ******* 2026-03-25 04:37:59.648465 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-25 04:37:59.648476 | orchestrator | 2026-03-25 04:37:59.648487 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2026-03-25 04:37:59.648498 | orchestrator | Wednesday 25 March 2026 04:37:53 +0000 (0:00:01.716) 0:02:13.174 ******* 2026-03-25 04:37:59.648511 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server 
testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-03-25 04:37:59.648587 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-03-25 04:37:59.648602 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-03-25 04:37:59.648614 | orchestrator | 2026-03-25 04:37:59.648630 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2026-03-25 
04:37:59.648642 | orchestrator | Wednesday 25 March 2026 04:37:56 +0000 (0:00:03.676) 0:02:16.851 ******* 2026-03-25 04:37:59.648654 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-03-25 04:37:59.648674 | orchestrator | skipping: [testbed-node-0] 2026-03-25 04:37:59.648686 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-03-25 04:37:59.648698 | orchestrator | skipping: [testbed-node-1] 2026-03-25 
04:37:59.648717 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-03-25 04:38:12.112517 | orchestrator | skipping: [testbed-node-2] 2026-03-25 04:38:12.112729 | orchestrator | 2026-03-25 04:38:12.112760 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2026-03-25 04:38:12.112780 | orchestrator | Wednesday 25 March 2026 04:37:59 +0000 (0:00:02.648) 0:02:19.499 ******* 2026-03-25 04:38:12.112802 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-03-25 04:38:12.112823 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 
192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-03-25 04:38:12.112845 | orchestrator | skipping: [testbed-node-0] 2026-03-25 04:38:12.112993 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-03-25 04:38:12.113060 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-03-25 04:38:12.113076 | orchestrator | skipping: [testbed-node-1] 2026-03-25 04:38:12.113089 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-03-25 04:38:12.113104 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 
2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-03-25 04:38:12.113116 | orchestrator | skipping: [testbed-node-2] 2026-03-25 04:38:12.113129 | orchestrator | 2026-03-25 04:38:12.113142 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2026-03-25 04:38:12.113154 | orchestrator | Wednesday 25 March 2026 04:38:02 +0000 (0:00:02.990) 0:02:22.489 ******* 2026-03-25 04:38:12.113166 | orchestrator | skipping: [testbed-node-0] 2026-03-25 04:38:12.113179 | orchestrator | skipping: [testbed-node-1] 2026-03-25 04:38:12.113192 | orchestrator | skipping: [testbed-node-2] 2026-03-25 04:38:12.113204 | orchestrator | 2026-03-25 04:38:12.113217 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2026-03-25 04:38:12.113233 | orchestrator | Wednesday 25 March 2026 04:38:04 +0000 (0:00:01.511) 0:02:24.001 ******* 2026-03-25 04:38:12.113252 | orchestrator | skipping: [testbed-node-0] 2026-03-25 04:38:12.113270 | orchestrator | skipping: [testbed-node-1] 2026-03-25 04:38:12.113288 | orchestrator | skipping: [testbed-node-2] 2026-03-25 04:38:12.113307 | orchestrator | 2026-03-25 04:38:12.113324 | orchestrator | TASK [include_role : cinder] *************************************************** 2026-03-25 04:38:12.113343 | orchestrator | Wednesday 25 March 2026 04:38:06 +0000 (0:00:02.444) 0:02:26.446 ******* 2026-03-25 04:38:12.113362 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-25 04:38:12.113381 | orchestrator | 2026-03-25 04:38:12.113401 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2026-03-25 04:38:12.113420 | orchestrator | Wednesday 25 March 2026 04:38:08 +0000 (0:00:01.841) 0:02:28.287 ******* 2026-03-25 04:38:12.113470 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 
'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-25 04:38:12.113499 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-25 04:38:12.113521 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-25 04:38:12.113534 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20251208', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-25 04:38:12.113546 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-25 04:38:12.113565 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20251208', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-25 04:38:14.208460 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20251208', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-25 04:38:14.208610 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-25 04:38:14.208699 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20251208', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-25 04:38:14.208715 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-25 04:38:14.208727 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20251208', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-25 04:38:14.208759 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20251208', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-25 04:38:14.208782 | orchestrator | 2026-03-25 04:38:14.208795 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2026-03-25 04:38:14.208807 | orchestrator | Wednesday 25 March 2026 04:38:13 +0000 (0:00:04.806) 0:02:33.094 ******* 2026-03-25 
04:38:14.208826 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-25 04:38:14.208839 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-25 04:38:14.208850 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20251208', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-25 04:38:14.208862 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20251208', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-25 04:38:14.208881 | orchestrator | skipping: [testbed-node-0] 2026-03-25 04:38:14.208904 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-25 04:38:25.765588 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-25 04:38:25.765746 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20251208', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-25 04:38:25.765761 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20251208', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-25 04:38:25.765772 | orchestrator | skipping: [testbed-node-1] 2026-03-25 04:38:25.765785 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-25 04:38:25.765819 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 
'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-25 04:38:25.765851 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20251208', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-25 04:38:25.765862 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20251208', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
cinder-backup 5672'], 'timeout': '30'}}})  2026-03-25 04:38:25.765872 | orchestrator | skipping: [testbed-node-2] 2026-03-25 04:38:25.765884 | orchestrator | 2026-03-25 04:38:25.765901 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2026-03-25 04:38:25.765912 | orchestrator | Wednesday 25 March 2026 04:38:15 +0000 (0:00:02.086) 0:02:35.180 ******* 2026-03-25 04:38:25.765922 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-03-25 04:38:25.765932 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-03-25 04:38:25.765943 | orchestrator | skipping: [testbed-node-0] 2026-03-25 04:38:25.765952 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-03-25 04:38:25.765961 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-03-25 04:38:25.765978 | orchestrator | skipping: [testbed-node-1] 2026-03-25 04:38:25.765987 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  
2026-03-25 04:38:25.766000 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-03-25 04:38:25.766014 | orchestrator | skipping: [testbed-node-2] 2026-03-25 04:38:25.766091 | orchestrator | 2026-03-25 04:38:25.766108 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2026-03-25 04:38:25.766119 | orchestrator | Wednesday 25 March 2026 04:38:17 +0000 (0:00:02.078) 0:02:37.259 ******* 2026-03-25 04:38:25.766129 | orchestrator | ok: [testbed-node-0] 2026-03-25 04:38:25.766139 | orchestrator | ok: [testbed-node-1] 2026-03-25 04:38:25.766149 | orchestrator | ok: [testbed-node-2] 2026-03-25 04:38:25.766159 | orchestrator | 2026-03-25 04:38:25.766169 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2026-03-25 04:38:25.766178 | orchestrator | Wednesday 25 March 2026 04:38:19 +0000 (0:00:02.303) 0:02:39.562 ******* 2026-03-25 04:38:25.766189 | orchestrator | ok: [testbed-node-0] 2026-03-25 04:38:25.766199 | orchestrator | ok: [testbed-node-1] 2026-03-25 04:38:25.766209 | orchestrator | ok: [testbed-node-2] 2026-03-25 04:38:25.766219 | orchestrator | 2026-03-25 04:38:25.766228 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2026-03-25 04:38:25.766238 | orchestrator | Wednesday 25 March 2026 04:38:22 +0000 (0:00:03.008) 0:02:42.570 ******* 2026-03-25 04:38:25.766249 | orchestrator | skipping: [testbed-node-0] 2026-03-25 04:38:25.766259 | orchestrator | skipping: [testbed-node-1] 2026-03-25 04:38:25.766268 | orchestrator | skipping: [testbed-node-2] 2026-03-25 04:38:25.766278 | orchestrator | 2026-03-25 04:38:25.766287 | orchestrator | TASK [include_role : cyborg] 
*************************************************** 2026-03-25 04:38:25.766296 | orchestrator | Wednesday 25 March 2026 04:38:24 +0000 (0:00:01.660) 0:02:44.231 ******* 2026-03-25 04:38:25.766304 | orchestrator | skipping: [testbed-node-0] 2026-03-25 04:38:25.766313 | orchestrator | skipping: [testbed-node-1] 2026-03-25 04:38:25.766329 | orchestrator | skipping: [testbed-node-2] 2026-03-25 04:38:31.314582 | orchestrator | 2026-03-25 04:38:31.314740 | orchestrator | TASK [include_role : designate] ************************************************ 2026-03-25 04:38:31.314774 | orchestrator | Wednesday 25 March 2026 04:38:25 +0000 (0:00:01.386) 0:02:45.618 ******* 2026-03-25 04:38:31.314787 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-25 04:38:31.314798 | orchestrator | 2026-03-25 04:38:31.314809 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2026-03-25 04:38:31.314820 | orchestrator | Wednesday 25 March 2026 04:38:27 +0000 (0:00:01.795) 0:02:47.413 ******* 2026-03-25 04:38:31.314837 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-25 04:38:31.314878 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-25 04:38:31.314892 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-25 04:38:31.314904 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-25 04:38:31.314916 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-25 04:38:31.314952 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-25 04:38:31.314965 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-25 04:38:31.314977 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-25 04:38:31.314997 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-25 04:38:31.315009 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-25 04:38:31.315020 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-25 04:38:31.315044 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-25 04:38:33.331098 | orchestrator | skipping: [testbed-node-1] 
=> (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-25 04:38:33.331229 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-25 04:38:33.331247 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 
'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-25 04:38:33.331264 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-25 04:38:33.331276 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-25 04:38:33.331321 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20251208', 'volumes': 
['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-25 04:38:33.331335 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-25 04:38:33.331355 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-25 04:38:33.331366 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20251208', 'volumes': 
['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-25 04:38:33.331378 | orchestrator | 2026-03-25 04:38:33.331391 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2026-03-25 04:38:33.331403 | orchestrator | Wednesday 25 March 2026 04:38:32 +0000 (0:00:05.110) 0:02:52.524 ******* 2026-03-25 04:38:33.331415 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-03-25 04:38:33.331433 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-25 04:38:33.331454 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-25 04:38:34.692172 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-25 04:38:34.692276 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 
'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-25 04:38:34.692292 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-25 04:38:34.692323 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-25 04:38:34.692337 | orchestrator | skipping: [testbed-node-0] 2026-03-25 04:38:34.692364 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-03-25 04:38:34.693191 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-25 04:38:34.693226 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20251208', 'volumes': 
['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-25 04:38:34.693238 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-25 04:38:34.693249 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-25 04:38:34.693261 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20251208', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-25 04:38:34.693272 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-25 04:38:34.693283 | orchestrator | skipping: [testbed-node-1] 2026-03-25 04:38:34.693325 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-03-25 04:38:50.200564 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-25 04:38:50.200671 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-25 04:38:50.200685 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-25 04:38:50.200694 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-25 04:38:50.200704 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-25 04:38:50.200804 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-25 04:38:50.200818 | orchestrator | skipping: [testbed-node-2] 2026-03-25 04:38:50.200829 | orchestrator | 2026-03-25 04:38:50.200839 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2026-03-25 04:38:50.200848 | orchestrator | Wednesday 25 March 2026 04:38:34 +0000 (0:00:02.019) 0:02:54.543 ******* 2026-03-25 04:38:50.200871 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-03-25 04:38:50.200884 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-03-25 04:38:50.200895 | orchestrator | skipping: [testbed-node-0] 2026-03-25 04:38:50.200904 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-03-25 04:38:50.200913 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-03-25 04:38:50.200921 | orchestrator | skipping: [testbed-node-1] 2026-03-25 04:38:50.200930 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 
'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-03-25 04:38:50.200939 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-03-25 04:38:50.200947 | orchestrator | skipping: [testbed-node-2] 2026-03-25 04:38:50.200956 | orchestrator | 2026-03-25 04:38:50.200965 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2026-03-25 04:38:50.200973 | orchestrator | Wednesday 25 March 2026 04:38:36 +0000 (0:00:02.123) 0:02:56.667 ******* 2026-03-25 04:38:50.200982 | orchestrator | ok: [testbed-node-0] 2026-03-25 04:38:50.200992 | orchestrator | ok: [testbed-node-1] 2026-03-25 04:38:50.201001 | orchestrator | ok: [testbed-node-2] 2026-03-25 04:38:50.201009 | orchestrator | 2026-03-25 04:38:50.201018 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2026-03-25 04:38:50.201026 | orchestrator | Wednesday 25 March 2026 04:38:39 +0000 (0:00:02.366) 0:02:59.034 ******* 2026-03-25 04:38:50.201034 | orchestrator | ok: [testbed-node-0] 2026-03-25 04:38:50.201043 | orchestrator | ok: [testbed-node-1] 2026-03-25 04:38:50.201052 | orchestrator | ok: [testbed-node-2] 2026-03-25 04:38:50.201060 | orchestrator | 2026-03-25 04:38:50.201068 | orchestrator | TASK [include_role : etcd] ***************************************************** 2026-03-25 04:38:50.201084 | orchestrator | Wednesday 25 March 2026 04:38:42 +0000 (0:00:02.954) 0:03:01.988 ******* 2026-03-25 04:38:50.201093 | orchestrator | skipping: [testbed-node-0] 2026-03-25 04:38:50.201101 | orchestrator | skipping: [testbed-node-1] 2026-03-25 04:38:50.201112 | orchestrator | skipping: [testbed-node-2] 2026-03-25 04:38:50.201121 | orchestrator | 2026-03-25 04:38:50.201131 | orchestrator | TASK 
[include_role : glance] *************************************************** 2026-03-25 04:38:50.201141 | orchestrator | Wednesday 25 March 2026 04:38:43 +0000 (0:00:01.399) 0:03:03.388 ******* 2026-03-25 04:38:50.201150 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-25 04:38:50.201160 | orchestrator | 2026-03-25 04:38:50.201170 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2026-03-25 04:38:50.201178 | orchestrator | Wednesday 25 March 2026 04:38:45 +0000 (0:00:02.015) 0:03:05.404 ******* 2026-03-25 04:38:50.201201 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-25 04:38:51.357335 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.0.1.20251208', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server 
testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-25 04:38:51.357472 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check 
inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-25 04:38:51.357509 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.0.1.20251208', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required 
ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-25 04:38:51.357534 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-25 
04:38:51.357554 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.0.1.20251208', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-25 
04:38:54.815186 | orchestrator | 2026-03-25 04:38:54.815285 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2026-03-25 04:38:54.815299 | orchestrator | Wednesday 25 March 2026 04:38:51 +0000 (0:00:05.809) 0:03:11.213 ******* 2026-03-25 04:38:54.815331 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 
'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-25 04:38:54.815347 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.0.1.20251208', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 
5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-25 04:38:54.815359 | orchestrator | skipping: [testbed-node-0] 2026-03-25 04:38:54.815412 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-25 04:38:54.815430 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.0.1.20251208', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-25 04:38:54.815442 | orchestrator | 
skipping: [testbed-node-1] 2026-03-25 04:38:54.815461 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-25 04:39:14.281413 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 
'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.0.1.20251208', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-25 04:39:14.281497 | orchestrator | skipping: [testbed-node-2] 2026-03-25 04:39:14.281505 | orchestrator | 2026-03-25 04:39:14.281511 | orchestrator | TASK [haproxy-config 
: Configuring firewall for glance] ************************ 2026-03-25 04:39:14.281516 | orchestrator | Wednesday 25 March 2026 04:38:55 +0000 (0:00:04.603) 0:03:15.817 ******* 2026-03-25 04:39:14.281522 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-25 04:39:14.281542 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-25 04:39:14.281547 | orchestrator | skipping: [testbed-node-0] 2026-03-25 04:39:14.281552 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-25 04:39:14.281567 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-25 04:39:14.281572 | orchestrator | skipping: [testbed-node-1] 2026-03-25 04:39:14.281580 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-25 04:39:14.281584 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-25 04:39:14.281589 | orchestrator | skipping: [testbed-node-2] 2026-03-25 04:39:14.281593 | orchestrator | 2026-03-25 04:39:14.281597 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 
2026-03-25 04:39:14.281602 | orchestrator | Wednesday 25 March 2026 04:39:01 +0000 (0:00:05.104) 0:03:20.922 ******* 2026-03-25 04:39:14.281606 | orchestrator | ok: [testbed-node-0] 2026-03-25 04:39:14.281611 | orchestrator | ok: [testbed-node-1] 2026-03-25 04:39:14.281615 | orchestrator | ok: [testbed-node-2] 2026-03-25 04:39:14.281619 | orchestrator | 2026-03-25 04:39:14.281623 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2026-03-25 04:39:14.281627 | orchestrator | Wednesday 25 March 2026 04:39:03 +0000 (0:00:02.206) 0:03:23.129 ******* 2026-03-25 04:39:14.281635 | orchestrator | ok: [testbed-node-0] 2026-03-25 04:39:14.281639 | orchestrator | ok: [testbed-node-1] 2026-03-25 04:39:14.281643 | orchestrator | ok: [testbed-node-2] 2026-03-25 04:39:14.281647 | orchestrator | 2026-03-25 04:39:14.281652 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2026-03-25 04:39:14.281656 | orchestrator | Wednesday 25 March 2026 04:39:06 +0000 (0:00:02.965) 0:03:26.094 ******* 2026-03-25 04:39:14.281660 | orchestrator | skipping: [testbed-node-0] 2026-03-25 04:39:14.281664 | orchestrator | skipping: [testbed-node-1] 2026-03-25 04:39:14.281668 | orchestrator | skipping: [testbed-node-2] 2026-03-25 04:39:14.281672 | orchestrator | 2026-03-25 04:39:14.281676 | orchestrator | TASK [include_role : grafana] ************************************************** 2026-03-25 04:39:14.281680 | orchestrator | Wednesday 25 March 2026 04:39:07 +0000 (0:00:01.381) 0:03:27.476 ******* 2026-03-25 04:39:14.281684 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-25 04:39:14.281688 | orchestrator | 2026-03-25 04:39:14.281692 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2026-03-25 04:39:14.281697 | orchestrator | Wednesday 25 March 2026 04:39:09 +0000 (0:00:01.822) 0:03:29.298 ******* 2026-03-25 
04:39:14.281702 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.3.0.20251208', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-25 04:39:14.281710 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.3.0.20251208', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-25 04:39:31.317637 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.3.0.20251208', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-25 04:39:31.317726 | orchestrator | 2026-03-25 04:39:31.317737 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2026-03-25 04:39:31.317744 | orchestrator | Wednesday 25 March 2026 04:39:14 +0000 (0:00:04.831) 0:03:34.130 ******* 2026-03-25 04:39:31.317753 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.3.0.20251208', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-03-25 04:39:31.317777 | orchestrator | skipping: [testbed-node-0] 2026-03-25 04:39:31.317785 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.3.0.20251208', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-03-25 04:39:31.317792 | orchestrator | skipping: [testbed-node-1] 2026-03-25 04:39:31.317798 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.3.0.20251208', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-03-25 04:39:31.317805 | orchestrator | skipping: [testbed-node-2] 2026-03-25 04:39:31.317811 | orchestrator | 2026-03-25 04:39:31.317817 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2026-03-25 04:39:31.317823 | orchestrator | Wednesday 25 March 2026 04:39:16 +0000 (0:00:01.772) 0:03:35.903 ******* 2026-03-25 04:39:31.317831 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-03-25 04:39:31.317839 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-03-25 04:39:31.317847 | orchestrator | skipping: [testbed-node-0] 2026-03-25 04:39:31.317936 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-03-25 04:39:31.317950 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-03-25 04:39:31.317957 | orchestrator | skipping: [testbed-node-1] 2026-03-25 04:39:31.317963 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-03-25 04:39:31.317976 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-03-25 04:39:31.317983 | orchestrator | skipping: [testbed-node-2] 2026-03-25 04:39:31.317989 | orchestrator | 2026-03-25 04:39:31.317996 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2026-03-25 04:39:31.318002 | orchestrator | Wednesday 25 March 2026 04:39:17 +0000 (0:00:01.558) 0:03:37.462 ******* 2026-03-25 04:39:31.318008 | orchestrator | ok: [testbed-node-0] 2026-03-25 04:39:31.318051 | orchestrator | ok: [testbed-node-1] 2026-03-25 04:39:31.318070 | 
orchestrator | ok: [testbed-node-2] 2026-03-25 04:39:31.318077 | orchestrator | 2026-03-25 04:39:31.318083 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2026-03-25 04:39:31.318089 | orchestrator | Wednesday 25 March 2026 04:39:19 +0000 (0:00:02.291) 0:03:39.754 ******* 2026-03-25 04:39:31.318096 | orchestrator | ok: [testbed-node-0] 2026-03-25 04:39:31.318102 | orchestrator | ok: [testbed-node-1] 2026-03-25 04:39:31.318154 | orchestrator | ok: [testbed-node-2] 2026-03-25 04:39:31.318161 | orchestrator | 2026-03-25 04:39:31.318167 | orchestrator | TASK [include_role : heat] ***************************************************** 2026-03-25 04:39:31.318202 | orchestrator | Wednesday 25 March 2026 04:39:22 +0000 (0:00:02.935) 0:03:42.689 ******* 2026-03-25 04:39:31.318210 | orchestrator | skipping: [testbed-node-0] 2026-03-25 04:39:31.318218 | orchestrator | skipping: [testbed-node-1] 2026-03-25 04:39:31.318225 | orchestrator | skipping: [testbed-node-2] 2026-03-25 04:39:31.318232 | orchestrator | 2026-03-25 04:39:31.318239 | orchestrator | TASK [include_role : horizon] ************************************************** 2026-03-25 04:39:31.318247 | orchestrator | Wednesday 25 March 2026 04:39:24 +0000 (0:00:01.442) 0:03:44.132 ******* 2026-03-25 04:39:31.318254 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-25 04:39:31.318261 | orchestrator | 2026-03-25 04:39:31.318268 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2026-03-25 04:39:31.318275 | orchestrator | Wednesday 25 March 2026 04:39:26 +0000 (0:00:01.856) 0:03:45.988 ******* 2026-03-25 04:39:31.318300 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.2.20251208', 'environment': {'ENABLE_BLAZAR': 'no', 
'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-25 
04:39:33.100745 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.2.20251208', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-25 04:39:33.100937 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.2.20251208', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 
'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-25 04:39:33.100984 | orchestrator | 2026-03-25 04:39:33.100998 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2026-03-25 04:39:33.101010 | orchestrator | Wednesday 25 March 2026 04:39:31 +0000 (0:00:05.180) 0:03:51.169 ******* 2026-03-25 04:39:33.101023 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.2.20251208', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 
'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-25 04:39:33.101036 | orchestrator | skipping: [testbed-node-0] 2026-03-25 04:39:33.101159 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.2.20251208', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-25 04:39:42.007508 | orchestrator | skipping: [testbed-node-1] 2026-03-25 04:39:42.007626 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.2.20251208', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 
'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-25 04:39:42.007649 | orchestrator | skipping: [testbed-node-2] 2026-03-25 04:39:42.007685 | orchestrator | 2026-03-25 04:39:42.007698 | 
orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2026-03-25 04:39:42.007710 | orchestrator | Wednesday 25 March 2026 04:39:33 +0000 (0:00:01.785) 0:03:52.955 ******* 2026-03-25 04:39:42.007723 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-03-25 04:39:42.007752 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-25 04:39:42.007767 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-03-25 04:39:42.007780 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-25 04:39:42.007791 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-03-25 04:39:42.007805 | orchestrator | skipping: [testbed-node-0] 2026-03-25 
04:39:42.007850 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-03-25 04:39:42.007877 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-25 04:39:42.007975 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-03-25 04:39:42.007994 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-25 04:39:42.008012 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-03-25 04:39:42.008030 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-03-25 04:39:42.008062 | orchestrator | skipping: [testbed-node-1] 2026-03-25 04:39:42.008083 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-25 04:39:42.008104 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-03-25 04:39:42.008124 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-25 04:39:42.008144 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-03-25 04:39:42.008170 | orchestrator | skipping: [testbed-node-2] 2026-03-25 04:39:42.008189 | orchestrator | 2026-03-25 04:39:42.008207 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2026-03-25 04:39:42.008227 | orchestrator | Wednesday 25 March 2026 04:39:35 +0000 (0:00:02.019) 0:03:54.974 ******* 2026-03-25 04:39:42.008245 | orchestrator | ok: [testbed-node-0] 2026-03-25 04:39:42.008264 | orchestrator | ok: [testbed-node-1] 2026-03-25 04:39:42.008283 | 
orchestrator | ok: [testbed-node-2] 2026-03-25 04:39:42.008302 | orchestrator | 2026-03-25 04:39:42.008320 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2026-03-25 04:39:42.008336 | orchestrator | Wednesday 25 March 2026 04:39:37 +0000 (0:00:02.227) 0:03:57.202 ******* 2026-03-25 04:39:42.008355 | orchestrator | ok: [testbed-node-0] 2026-03-25 04:39:42.008374 | orchestrator | ok: [testbed-node-1] 2026-03-25 04:39:42.008393 | orchestrator | ok: [testbed-node-2] 2026-03-25 04:39:42.008409 | orchestrator | 2026-03-25 04:39:42.008427 | orchestrator | TASK [include_role : influxdb] ************************************************* 2026-03-25 04:39:42.008445 | orchestrator | Wednesday 25 March 2026 04:39:40 +0000 (0:00:03.025) 0:04:00.227 ******* 2026-03-25 04:39:42.008462 | orchestrator | skipping: [testbed-node-0] 2026-03-25 04:39:42.008481 | orchestrator | skipping: [testbed-node-1] 2026-03-25 04:39:42.008499 | orchestrator | skipping: [testbed-node-2] 2026-03-25 04:39:42.008518 | orchestrator | 2026-03-25 04:39:42.008537 | orchestrator | TASK [include_role : ironic] *************************************************** 2026-03-25 04:39:42.008555 | orchestrator | Wednesday 25 March 2026 04:39:41 +0000 (0:00:01.397) 0:04:01.625 ******* 2026-03-25 04:39:42.008589 | orchestrator | skipping: [testbed-node-0] 2026-03-25 04:39:52.058710 | orchestrator | skipping: [testbed-node-1] 2026-03-25 04:39:52.058843 | orchestrator | skipping: [testbed-node-2] 2026-03-25 04:39:52.058862 | orchestrator | 2026-03-25 04:39:52.058876 | orchestrator | TASK [include_role : keystone] ************************************************* 2026-03-25 04:39:52.058889 | orchestrator | Wednesday 25 March 2026 04:39:43 +0000 (0:00:01.384) 0:04:03.009 ******* 2026-03-25 04:39:52.058900 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-25 04:39:52.059001 | orchestrator | 2026-03-25 04:39:52.059014 | 
orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2026-03-25 04:39:52.059025 | orchestrator | Wednesday 25 March 2026 04:39:45 +0000 (0:00:02.006) 0:04:05.015 ******* 2026-03-25 04:39:52.059042 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-03-25 04:39:52.059085 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': 
'30'}}})  2026-03-25 04:39:52.059100 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-25 04:39:52.059128 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-03-25 04:39:52.059163 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-25 04:39:52.059176 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-25 04:39:52.059198 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance 
roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-03-25 04:39:52.059210 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-25 04:39:52.059229 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-25 04:39:52.059242 | orchestrator | 2026-03-25 04:39:52.059255 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2026-03-25 04:39:52.059270 | orchestrator | Wednesday 25 March 2026 04:39:50 +0000 (0:00:04.885) 0:04:09.901 ******* 2026-03-25 04:39:52.059292 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-03-25 04:39:53.729963 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-25 04:39:53.730136 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20251208', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-25 04:39:53.730157 | orchestrator | skipping: [testbed-node-0] 2026-03-25 04:39:53.730174 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-03-25 04:39:53.730205 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-25 04:39:53.730217 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-25 04:39:53.730251 | orchestrator | skipping: [testbed-node-1] 2026-03-25 04:39:53.730283 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-03-25 04:39:53.730297 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-25 04:39:53.730308 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-25 04:39:53.730319 | orchestrator | skipping: [testbed-node-2] 2026-03-25 04:39:53.730331 | orchestrator | 2026-03-25 04:39:53.730343 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2026-03-25 04:39:53.730355 | orchestrator | Wednesday 25 March 2026 04:39:52 +0000 (0:00:02.001) 0:04:11.902 ******* 2026-03-25 04:39:53.730368 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 
'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-03-25 04:39:53.730387 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-03-25 04:39:53.730400 | orchestrator | skipping: [testbed-node-0] 2026-03-25 04:39:53.730412 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-03-25 04:39:53.730423 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-03-25 04:39:53.730443 | orchestrator | skipping: [testbed-node-1] 2026-03-25 04:39:53.730457 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-03-25 04:39:53.730469 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-03-25 04:39:53.730482 | orchestrator | skipping: [testbed-node-2] 2026-03-25 04:39:53.730494 | orchestrator | 
2026-03-25 04:39:53.730506 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2026-03-25 04:39:53.730526 | orchestrator | Wednesday 25 March 2026 04:39:53 +0000 (0:00:01.672) 0:04:13.574 ******* 2026-03-25 04:40:09.190278 | orchestrator | ok: [testbed-node-0] 2026-03-25 04:40:09.190393 | orchestrator | ok: [testbed-node-1] 2026-03-25 04:40:09.190420 | orchestrator | ok: [testbed-node-2] 2026-03-25 04:40:09.190440 | orchestrator | 2026-03-25 04:40:09.190458 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2026-03-25 04:40:09.190470 | orchestrator | Wednesday 25 March 2026 04:39:55 +0000 (0:00:02.173) 0:04:15.748 ******* 2026-03-25 04:40:09.190481 | orchestrator | ok: [testbed-node-0] 2026-03-25 04:40:09.190492 | orchestrator | ok: [testbed-node-1] 2026-03-25 04:40:09.190503 | orchestrator | ok: [testbed-node-2] 2026-03-25 04:40:09.190513 | orchestrator | 2026-03-25 04:40:09.190524 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2026-03-25 04:40:09.190535 | orchestrator | Wednesday 25 March 2026 04:39:59 +0000 (0:00:03.374) 0:04:19.122 ******* 2026-03-25 04:40:09.190546 | orchestrator | skipping: [testbed-node-0] 2026-03-25 04:40:09.190557 | orchestrator | skipping: [testbed-node-1] 2026-03-25 04:40:09.190568 | orchestrator | skipping: [testbed-node-2] 2026-03-25 04:40:09.190579 | orchestrator | 2026-03-25 04:40:09.190589 | orchestrator | TASK [include_role : magnum] *************************************************** 2026-03-25 04:40:09.190600 | orchestrator | Wednesday 25 March 2026 04:40:00 +0000 (0:00:01.451) 0:04:20.574 ******* 2026-03-25 04:40:09.190611 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-25 04:40:09.190621 | orchestrator | 2026-03-25 04:40:09.190632 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2026-03-25 
04:40:09.190643 | orchestrator | Wednesday 25 March 2026 04:40:02 +0000 (0:00:01.835) 0:04:22.409 ******* 2026-03-25 04:40:09.190660 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.1.20251208', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-25 04:40:09.190693 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-25 04:40:09.190728 
| orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.1.20251208', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-25 04:40:09.190760 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-25 04:40:09.190774 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 
'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.1.20251208', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-25 04:40:09.190786 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-25 04:40:09.190805 | orchestrator | 2026-03-25 04:40:09.190821 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2026-03-25 04:40:09.190835 | orchestrator | Wednesday 25 March 2026 04:40:07 +0000 (0:00:04.971) 
0:04:27.381 ******* 2026-03-25 04:40:09.190849 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.1.20251208', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-03-25 04:40:09.190870 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-25 04:40:22.405606 | orchestrator | skipping: [testbed-node-0] 2026-03-25 04:40:22.405740 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.1.20251208', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-03-25 04:40:22.405776 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-25 04:40:22.405830 | orchestrator | skipping: [testbed-node-1] 2026-03-25 04:40:22.405860 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.1.20251208', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-03-25 04:40:22.405873 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-25 04:40:22.405885 | orchestrator | skipping: [testbed-node-2] 2026-03-25 04:40:22.405896 | orchestrator | 2026-03-25 04:40:22.405908 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] 
************************ 2026-03-25 04:40:22.405920 | orchestrator | Wednesday 25 March 2026 04:40:09 +0000 (0:00:01.660) 0:04:29.041 ******* 2026-03-25 04:40:22.405949 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-03-25 04:40:22.405964 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-03-25 04:40:22.406106 | orchestrator | skipping: [testbed-node-0] 2026-03-25 04:40:22.406125 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-03-25 04:40:22.406139 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-03-25 04:40:22.406151 | orchestrator | skipping: [testbed-node-1] 2026-03-25 04:40:22.406165 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-03-25 04:40:22.406177 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-03-25 04:40:22.406201 | orchestrator | skipping: [testbed-node-2] 
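The `haproxy` dicts printed in the loop items above (e.g. `magnum_api` with `mode`, `port`, `listen_port`, and `backend_http_extra`) are what the `haproxy-config` role templates into HAProxy frontend/backend sections. As an illustrative sketch only (this is not kolla-ansible's actual template; `render_listen` and the backend list are hypothetical), one such entry could be rendered like this:

```python
# Illustrative sketch: render one 'haproxy' entry from the log's loop items
# into an HAProxy-style listen block. NOT the real kolla-ansible template.
def render_listen(name, cfg, backends):
    """cfg mirrors the dicts shown in the log output; backends is a
    hypothetical (hostname, address) list for the three testbed nodes."""
    lines = [f"listen {name}", f"    mode {cfg['mode']}"]
    # backend_http_extra entries (e.g. 'option httpchk') pass through verbatim
    lines += [f"    {extra}" for extra in cfg.get('backend_http_extra', [])]
    for host, addr in backends:
        lines.append(f"    server {host} {addr}:{cfg['port']} check")
    return "\n".join(lines)

# Values copied from the magnum_api item logged above
cfg = {'enabled': 'yes', 'mode': 'http', 'external': False,
       'port': '9511', 'listen_port': '9511',
       'backend_http_extra': ['option httpchk']}
backends = [('testbed-node-0', '192.168.16.10'),
            ('testbed-node-1', '192.168.16.11'),
            ('testbed-node-2', '192.168.16.12')]
print(render_listen('magnum_api', cfg, backends))
```

The internal (`magnum_api`) and external (`magnum_api_external`) entries differ only in the `external` flag and `external_fqdn`, which is why the "single external frontend" task skips here: that layout is not enabled in this testbed.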
2026-03-25 04:40:22.406214 | orchestrator | 2026-03-25 04:40:22.406226 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2026-03-25 04:40:22.406239 | orchestrator | Wednesday 25 March 2026 04:40:10 +0000 (0:00:01.807) 0:04:30.848 ******* 2026-03-25 04:40:22.406250 | orchestrator | ok: [testbed-node-0] 2026-03-25 04:40:22.406264 | orchestrator | ok: [testbed-node-1] 2026-03-25 04:40:22.406276 | orchestrator | ok: [testbed-node-2] 2026-03-25 04:40:22.406288 | orchestrator | 2026-03-25 04:40:22.406299 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2026-03-25 04:40:22.406312 | orchestrator | Wednesday 25 March 2026 04:40:13 +0000 (0:00:02.253) 0:04:33.102 ******* 2026-03-25 04:40:22.406323 | orchestrator | ok: [testbed-node-0] 2026-03-25 04:40:22.406335 | orchestrator | ok: [testbed-node-1] 2026-03-25 04:40:22.406347 | orchestrator | ok: [testbed-node-2] 2026-03-25 04:40:22.406359 | orchestrator | 2026-03-25 04:40:22.406372 | orchestrator | TASK [include_role : manila] *************************************************** 2026-03-25 04:40:22.406385 | orchestrator | Wednesday 25 March 2026 04:40:16 +0000 (0:00:03.128) 0:04:36.231 ******* 2026-03-25 04:40:22.406396 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-25 04:40:22.406408 | orchestrator | 2026-03-25 04:40:22.406427 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2026-03-25 04:40:22.406440 | orchestrator | Wednesday 25 March 2026 04:40:18 +0000 (0:00:02.290) 0:04:38.522 ******* 2026-03-25 04:40:22.406455 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-25 04:40:22.406470 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-25 04:40:22.406499 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-25 04:40:24.180643 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-25 04:40:24.180771 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-25 04:40:24.180804 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.1.20251208', 
'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-25 04:40:24.180818 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-25 04:40:24.180830 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-25 04:40:24.180860 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 
'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-25 04:40:24.180880 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-25 04:40:24.180897 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 
'/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-25 04:40:24.180909 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-25 04:40:24.180921 | orchestrator | 2026-03-25 04:40:24.180934 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2026-03-25 04:40:24.180946 | orchestrator | Wednesday 25 March 2026 04:40:23 +0000 (0:00:04.861) 0:04:43.383 ******* 2026-03-25 04:40:24.180959 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-03-25 04:40:24.180978 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-25 04:40:27.292211 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-25 04:40:27.292307 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-25 04:40:27.292316 | orchestrator | skipping: [testbed-node-0] 2026-03-25 04:40:27.292338 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-03-25 04:40:27.292344 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-25 
04:40:27.292349 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-25 04:40:27.292382 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-25 04:40:27.292388 | orchestrator | skipping: [testbed-node-1] 2026-03-25 04:40:27.292392 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-03-25 04:40:27.292400 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-25 04:40:27.292405 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-25 04:40:27.292410 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 
'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-25 04:40:27.292414 | orchestrator | skipping: [testbed-node-2] 2026-03-25 04:40:27.292419 | orchestrator | 2026-03-25 04:40:27.292424 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2026-03-25 04:40:27.292430 | orchestrator | Wednesday 25 March 2026 04:40:25 +0000 (0:00:01.751) 0:04:45.135 ******* 2026-03-25 04:40:27.292441 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-03-25 04:40:27.292449 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-03-25 04:40:27.292455 | orchestrator | skipping: [testbed-node-0] 2026-03-25 04:40:27.292460 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-03-25 04:40:27.292468 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': 
['option httpchk']}})  2026-03-25 04:40:42.893826 | orchestrator | skipping: [testbed-node-1] 2026-03-25 04:40:42.893941 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-03-25 04:40:42.893960 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-03-25 04:40:42.893974 | orchestrator | skipping: [testbed-node-2] 2026-03-25 04:40:42.893985 | orchestrator | 2026-03-25 04:40:42.893997 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2026-03-25 04:40:42.894009 | orchestrator | Wednesday 25 March 2026 04:40:27 +0000 (0:00:02.001) 0:04:47.137 ******* 2026-03-25 04:40:42.894156 | orchestrator | ok: [testbed-node-0] 2026-03-25 04:40:42.894172 | orchestrator | ok: [testbed-node-1] 2026-03-25 04:40:42.894183 | orchestrator | ok: [testbed-node-2] 2026-03-25 04:40:42.894193 | orchestrator | 2026-03-25 04:40:42.894205 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2026-03-25 04:40:42.894216 | orchestrator | Wednesday 25 March 2026 04:40:29 +0000 (0:00:02.363) 0:04:49.500 ******* 2026-03-25 04:40:42.894226 | orchestrator | ok: [testbed-node-0] 2026-03-25 04:40:42.894237 | orchestrator | ok: [testbed-node-1] 2026-03-25 04:40:42.894248 | orchestrator | ok: [testbed-node-2] 2026-03-25 04:40:42.894259 | orchestrator | 2026-03-25 04:40:42.894270 | orchestrator | TASK [include_role : mariadb] ************************************************** 2026-03-25 04:40:42.894280 | orchestrator | Wednesday 25 March 2026 04:40:32 +0000 (0:00:02.962) 0:04:52.463 ******* 2026-03-25 04:40:42.894291 | 
orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-25 04:40:42.894302 | orchestrator | 2026-03-25 04:40:42.894313 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2026-03-25 04:40:42.894340 | orchestrator | Wednesday 25 March 2026 04:40:35 +0000 (0:00:02.556) 0:04:55.019 ******* 2026-03-25 04:40:42.894351 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0) 2026-03-25 04:40:42.894362 | orchestrator | 2026-03-25 04:40:42.894376 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2026-03-25 04:40:42.894388 | orchestrator | Wednesday 25 March 2026 04:40:39 +0000 (0:00:04.048) 0:04:59.068 ******* 2026-03-25 04:40:42.894407 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server 
testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-25 04:40:42.894465 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-25 04:40:42.894480 | orchestrator | skipping: [testbed-node-0] 2026-03-25 04:40:42.894525 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 
'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-25 04:40:42.894538 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-25 04:40:42.894558 | orchestrator | skipping: [testbed-node-1] 2026-03-25 
04:40:42.894580 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-25 04:40:46.716533 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': 
{'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-25 04:40:46.716641 | orchestrator | skipping: [testbed-node-2] 2026-03-25 04:40:46.716658 | orchestrator | 2026-03-25 04:40:46.716670 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2026-03-25 04:40:46.716681 | orchestrator | Wednesday 25 March 2026 04:40:42 +0000 (0:00:03.672) 0:05:02.740 ******* 2026-03-25 04:40:46.716713 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 
192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-25 04:40:46.716749 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-25 04:40:46.716761 | orchestrator | skipping: [testbed-node-0] 2026-03-25 04:40:46.716799 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-25 04:40:46.716814 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 
'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-25 04:40:46.716833 | orchestrator | skipping: [testbed-node-1] 2026-03-25 04:40:46.716846 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 
3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-25 04:40:46.716865 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-25 04:41:03.582879 | orchestrator | skipping: [testbed-node-2] 2026-03-25 04:41:03.582996 | orchestrator | 2026-03-25 04:41:03.583013 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2026-03-25 04:41:03.583027 | orchestrator | Wednesday 25 March 2026 04:40:46 +0000 (0:00:03.823) 0:05:06.564 ******* 2026-03-25 04:41:03.583041 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-25 04:41:03.583076 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout 
server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-25 04:41:03.583161 | orchestrator | skipping: [testbed-node-0] 2026-03-25 04:41:03.583175 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-25 04:41:03.583186 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-25 04:41:03.583197 | orchestrator | skipping: [testbed-node-1] 2026-03-25 04:41:03.583209 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option 
srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-25 04:41:03.583220 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-25 04:41:03.583231 | orchestrator | skipping: [testbed-node-2] 2026-03-25 04:41:03.583242 | orchestrator | 2026-03-25 04:41:03.583253 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2026-03-25 04:41:03.583265 | orchestrator | Wednesday 25 March 2026 04:40:50 +0000 (0:00:04.290) 0:05:10.855 ******* 2026-03-25 04:41:03.583276 | orchestrator | ok: [testbed-node-0] 2026-03-25 04:41:03.583305 | orchestrator | ok: [testbed-node-1] 2026-03-25 04:41:03.583316 | orchestrator | ok: [testbed-node-2] 2026-03-25 04:41:03.583327 | orchestrator | 2026-03-25 04:41:03.583338 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2026-03-25 04:41:03.583348 | orchestrator | Wednesday 25 March 2026 04:40:54 +0000 (0:00:03.085) 0:05:13.940 ******* 2026-03-25 04:41:03.583359 | orchestrator | skipping: [testbed-node-0] 2026-03-25 04:41:03.583370 | orchestrator | skipping: [testbed-node-1] 2026-03-25 04:41:03.583380 | orchestrator | 
skipping: [testbed-node-2] 2026-03-25 04:41:03.583391 | orchestrator | 2026-03-25 04:41:03.583410 | orchestrator | TASK [include_role : masakari] ************************************************* 2026-03-25 04:41:03.583423 | orchestrator | Wednesday 25 March 2026 04:40:56 +0000 (0:00:02.815) 0:05:16.756 ******* 2026-03-25 04:41:03.583435 | orchestrator | skipping: [testbed-node-0] 2026-03-25 04:41:03.583448 | orchestrator | skipping: [testbed-node-1] 2026-03-25 04:41:03.583460 | orchestrator | skipping: [testbed-node-2] 2026-03-25 04:41:03.583473 | orchestrator | 2026-03-25 04:41:03.583486 | orchestrator | TASK [include_role : memcached] ************************************************ 2026-03-25 04:41:03.583498 | orchestrator | Wednesday 25 March 2026 04:40:58 +0000 (0:00:01.492) 0:05:18.249 ******* 2026-03-25 04:41:03.583510 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-25 04:41:03.583523 | orchestrator | 2026-03-25 04:41:03.583541 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2026-03-25 04:41:03.583554 | orchestrator | Wednesday 25 March 2026 04:41:00 +0000 (0:00:02.489) 0:05:20.738 ******* 2026-03-25 04:41:03.583568 | orchestrator | ok: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 
'active_passive': True}}}}) 2026-03-25 04:41:03.583583 | orchestrator | ok: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-03-25 04:41:03.583596 | orchestrator | ok: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-03-25 04:41:03.583609 | orchestrator | 2026-03-25 04:41:03.583622 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2026-03-25 04:41:03.583636 | orchestrator | Wednesday 25 March 2026 04:41:03 +0000 (0:00:02.555) 0:05:23.293 ******* 2026-03-25 04:41:03.583657 | orchestrator | skipping: [testbed-node-0] 
=> (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-03-25 04:41:19.221474 | orchestrator | skipping: [testbed-node-0] 2026-03-25 04:41:19.221607 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-03-25 04:41:19.221628 | orchestrator | skipping: [testbed-node-1] 2026-03-25 04:41:19.221641 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': 
['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-03-25 04:41:19.221653 | orchestrator | skipping: [testbed-node-2] 2026-03-25 04:41:19.221664 | orchestrator | 2026-03-25 04:41:19.221676 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2026-03-25 04:41:19.221688 | orchestrator | Wednesday 25 March 2026 04:41:05 +0000 (0:00:01.789) 0:05:25.083 ******* 2026-03-25 04:41:19.221701 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-03-25 04:41:19.221714 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-03-25 04:41:19.221725 | orchestrator | skipping: [testbed-node-0] 2026-03-25 04:41:19.221736 | orchestrator | skipping: [testbed-node-1] 2026-03-25 04:41:19.221747 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  
2026-03-25 04:41:19.221758 | orchestrator | skipping: [testbed-node-2] 2026-03-25 04:41:19.221768 | orchestrator | 2026-03-25 04:41:19.221779 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2026-03-25 04:41:19.221790 | orchestrator | Wednesday 25 March 2026 04:41:06 +0000 (0:00:01.647) 0:05:26.731 ******* 2026-03-25 04:41:19.221801 | orchestrator | skipping: [testbed-node-0] 2026-03-25 04:41:19.221812 | orchestrator | skipping: [testbed-node-1] 2026-03-25 04:41:19.221822 | orchestrator | skipping: [testbed-node-2] 2026-03-25 04:41:19.221833 | orchestrator | 2026-03-25 04:41:19.221844 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2026-03-25 04:41:19.221878 | orchestrator | Wednesday 25 March 2026 04:41:08 +0000 (0:00:01.462) 0:05:28.194 ******* 2026-03-25 04:41:19.221890 | orchestrator | skipping: [testbed-node-0] 2026-03-25 04:41:19.221901 | orchestrator | skipping: [testbed-node-1] 2026-03-25 04:41:19.221911 | orchestrator | skipping: [testbed-node-2] 2026-03-25 04:41:19.221922 | orchestrator | 2026-03-25 04:41:19.221933 | orchestrator | TASK [include_role : mistral] ************************************************** 2026-03-25 04:41:19.221943 | orchestrator | Wednesday 25 March 2026 04:41:10 +0000 (0:00:02.613) 0:05:30.807 ******* 2026-03-25 04:41:19.221954 | orchestrator | skipping: [testbed-node-0] 2026-03-25 04:41:19.221965 | orchestrator | skipping: [testbed-node-1] 2026-03-25 04:41:19.221975 | orchestrator | skipping: [testbed-node-2] 2026-03-25 04:41:19.221986 | orchestrator | 2026-03-25 04:41:19.221998 | orchestrator | TASK [include_role : neutron] ************************************************** 2026-03-25 04:41:19.222011 | orchestrator | Wednesday 25 March 2026 04:41:12 +0000 (0:00:01.418) 0:05:32.226 ******* 2026-03-25 04:41:19.222088 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-25 04:41:19.222101 | 
orchestrator | 2026-03-25 04:41:19.222113 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2026-03-25 04:41:19.222125 | orchestrator | Wednesday 25 March 2026 04:41:14 +0000 (0:00:02.072) 0:05:34.299 ******* 2026-03-25 04:41:19.222245 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20251208', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-25 04:41:19.222266 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20251208', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-25 04:41:19.222283 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-03-25 04:41:19.222309 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-03-25 04:41:19.222332 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-25 04:41:19.340260 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20251208', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-25 04:41:19.340358 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-25 04:41:19.340375 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-25 04:41:19.340388 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-25 04:41:19.340423 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-25 04:41:19.340437 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-03-25 04:41:19.340469 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-25 04:41:19.340491 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20251208', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-25 04:41:19.340505 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20251208', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-25 04:41:19.340556 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20251208', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-25 04:41:19.340569 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-03-25 04:41:19.340595 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': 
{'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-25 04:41:19.518286 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-03-25 04:41:19.617542 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-25 04:41:19.617646 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-25 04:41:19.617674 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20251208', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-25 04:41:19.617698 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-25 04:41:19.617729 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': 
{'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-25 04:41:19.617776 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-25 04:41:19.617790 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-25 04:41:19.617814 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-03-25 04:41:19.617826 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-25 04:41:19.617837 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20251208', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-25 04:41:19.617865 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20251208', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-25 04:41:19.780424 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': 
{'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-25 04:41:19.780545 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-25 04:41:19.780563 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20251208', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-25 04:41:19.780576 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 
'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-03-25 04:41:19.780652 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-03-25 04:41:19.780675 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 
'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-25 04:41:19.780689 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20251208', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-25 04:41:19.780703 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-25 04:41:19.780715 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-25 04:41:19.780727 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-25 04:41:19.780752 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-25 04:41:22.050811 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-03-25 04:41:22.050917 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-25 04:41:22.050936 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20251208', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-25 04:41:22.050962 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 
'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2026-03-25 04:41:22.051006 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2026-03-25 04:41:22.051027 | orchestrator |
2026-03-25 04:41:22.051049 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] ***
2026-03-25 04:41:22.051070 | orchestrator | Wednesday 25 March 2026 04:41:20 +0000 (0:00:06.487) 0:05:40.786 *******
2026-03-25 04:41:22.051118 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20251208', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-03-25 04:41:22.051271 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20251208', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-25 04:41:22.051289 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 
'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-03-25 04:41:22.051309 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-03-25 04:41:22.051333 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-25 04:41:22.144540 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20251208', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-25 04:41:22.144631 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-25 04:41:22.144643 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': 
True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-25 04:41:22.144654 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-25 04:41:22.144682 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20251208', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-03-25 04:41:22.144708 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-25 04:41:22.144737 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20251208', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-25 04:41:22.144747 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-03-25 04:41:22.144757 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-03-25 04:41:22.144767 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-25 04:41:22.144776 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20251208', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-25 04:41:22.144799 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 
'pid_mode': ''}})  2026-03-25 04:41:22.227357 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-25 04:41:22.227457 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-25 04:41:22.227517 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 
'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-25 04:41:22.227537 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20251208', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-25 04:41:22.227570 | orchestrator | skipping: [testbed-node-0] 2026-03-25 04:41:22.227585 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-25 04:41:22.227614 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20251208', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-03-25 04:41:22.227628 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-25 04:41:22.227640 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20251208', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-25 04:41:22.227652 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-25 04:41:22.227676 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 
'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-03-25 04:41:22.227695 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-25 04:41:23.459640 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-03-25 04:41:23.459765 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-03-25 04:41:23.459790 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-25 04:41:23.459859 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-25 04:41:23.459875 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 
'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20251208', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-25 04:41:23.459895 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20251208', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-25 04:41:23.459935 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-25 04:41:23.459955 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20251208', 'volumes': 
['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-25 04:41:23.459975 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-25 04:41:23.460021 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-25 04:41:23.460040 | orchestrator | skipping: [testbed-node-1] 2026-03-25 04:41:23.460056 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-25 04:41:23.460080 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-25 04:41:38.652848 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 
'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-03-25 04:41:38.652976 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-25 04:41:38.652995 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20251208', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-25 04:41:38.653052 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-25 04:41:38.653069 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-25 04:41:38.653082 | orchestrator | skipping: [testbed-node-2] 2026-03-25 04:41:38.653096 | orchestrator | 2026-03-25 04:41:38.653108 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2026-03-25 04:41:38.653120 | orchestrator | Wednesday 25 March 2026 04:41:23 +0000 (0:00:02.523) 0:05:43.310 ******* 2026-03-25 04:41:38.653133 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-03-25 04:41:38.653165 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-03-25 04:41:38.653179 | orchestrator | skipping: [testbed-node-0] 2026-03-25 04:41:38.653190 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-03-25 04:41:38.653285 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-03-25 04:41:38.653307 | orchestrator | skipping: [testbed-node-1] 2026-03-25 04:41:38.653326 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-03-25 04:41:38.653344 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-03-25 04:41:38.653380 | orchestrator | skipping: [testbed-node-2] 2026-03-25 04:41:38.653398 | orchestrator | 2026-03-25 04:41:38.653417 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2026-03-25 04:41:38.653436 | orchestrator | Wednesday 25 March 2026 04:41:26 +0000 
(0:00:02.910) 0:05:46.221 ******* 2026-03-25 04:41:38.653455 | orchestrator | ok: [testbed-node-0] 2026-03-25 04:41:38.653475 | orchestrator | ok: [testbed-node-1] 2026-03-25 04:41:38.653493 | orchestrator | ok: [testbed-node-2] 2026-03-25 04:41:38.653512 | orchestrator | 2026-03-25 04:41:38.653530 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2026-03-25 04:41:38.653548 | orchestrator | Wednesday 25 March 2026 04:41:28 +0000 (0:00:02.256) 0:05:48.477 ******* 2026-03-25 04:41:38.653565 | orchestrator | ok: [testbed-node-0] 2026-03-25 04:41:38.653583 | orchestrator | ok: [testbed-node-1] 2026-03-25 04:41:38.653602 | orchestrator | ok: [testbed-node-2] 2026-03-25 04:41:38.653621 | orchestrator | 2026-03-25 04:41:38.653639 | orchestrator | TASK [include_role : placement] ************************************************ 2026-03-25 04:41:38.653659 | orchestrator | Wednesday 25 March 2026 04:41:31 +0000 (0:00:02.993) 0:05:51.471 ******* 2026-03-25 04:41:38.653673 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-25 04:41:38.653687 | orchestrator | 2026-03-25 04:41:38.653706 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2026-03-25 04:41:38.653725 | orchestrator | Wednesday 25 March 2026 04:41:33 +0000 (0:00:02.303) 0:05:53.774 ******* 2026-03-25 04:41:38.653754 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20251208', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-03-25 04:41:38.653795 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20251208', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-03-25 04:41:55.678474 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20251208', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-03-25 04:41:55.678622 | orchestrator | 2026-03-25 04:41:55.678642 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2026-03-25 04:41:55.678655 | orchestrator | Wednesday 25 March 2026 04:41:38 +0000 (0:00:04.723) 0:05:58.498 ******* 2026-03-25 04:41:55.678683 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20251208', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-03-25 04:41:55.678696 | orchestrator | skipping: [testbed-node-0] 2026-03-25 04:41:55.678709 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20251208', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-03-25 04:41:55.678721 | orchestrator | skipping: [testbed-node-1] 2026-03-25 04:41:55.678753 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20251208', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 
'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-03-25 04:41:55.678775 | orchestrator | skipping: [testbed-node-2] 2026-03-25 04:41:55.678786 | orchestrator | 2026-03-25 04:41:55.678797 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2026-03-25 04:41:55.678808 | orchestrator | Wednesday 25 March 2026 04:41:40 +0000 (0:00:01.611) 0:06:00.109 ******* 2026-03-25 04:41:55.678821 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-03-25 04:41:55.678836 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-03-25 04:41:55.678848 | orchestrator | skipping: [testbed-node-0] 2026-03-25 04:41:55.678860 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-03-25 04:41:55.678871 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-03-25 04:41:55.678882 | orchestrator | skipping: [testbed-node-1] 2026-03-25 04:41:55.678893 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-03-25 04:41:55.678910 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-03-25 04:41:55.678921 | orchestrator | skipping: [testbed-node-2] 2026-03-25 04:41:55.678932 | orchestrator | 2026-03-25 04:41:55.678944 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2026-03-25 04:41:55.678955 | orchestrator | Wednesday 25 March 2026 04:41:42 +0000 (0:00:01.911) 0:06:02.021 ******* 2026-03-25 04:41:55.678966 | orchestrator | ok: [testbed-node-0] 2026-03-25 04:41:55.678977 | orchestrator | ok: [testbed-node-1] 2026-03-25 04:41:55.678988 | orchestrator | ok: [testbed-node-2] 2026-03-25 04:41:55.679000 | orchestrator | 2026-03-25 04:41:55.679012 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2026-03-25 04:41:55.679025 | orchestrator | Wednesday 25 March 2026 04:41:44 +0000 (0:00:02.345) 0:06:04.366 ******* 2026-03-25 04:41:55.679037 | orchestrator | ok: [testbed-node-0] 2026-03-25 04:41:55.679049 | orchestrator | ok: [testbed-node-1] 2026-03-25 04:41:55.679061 | orchestrator | ok: [testbed-node-2] 2026-03-25 04:41:55.679073 | orchestrator | 2026-03-25 04:41:55.679085 | orchestrator | TASK [include_role : nova] ***************************************************** 2026-03-25 
04:41:55.679098 | orchestrator | Wednesday 25 March 2026 04:41:47 +0000 (0:00:03.043) 0:06:07.409 ******* 2026-03-25 04:41:55.679110 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-25 04:41:55.679123 | orchestrator | 2026-03-25 04:41:55.679135 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2026-03-25 04:41:55.679158 | orchestrator | Wednesday 25 March 2026 04:41:49 +0000 (0:00:02.386) 0:06:09.796 ******* 2026-03-25 04:41:55.679180 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-25 04:41:56.820974 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-25 04:41:56.821131 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-25 04:41:56.821162 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-25 04:41:56.821213 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-25 04:41:56.821304 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-25 04:41:56.821320 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20251208', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-25 04:41:56.821339 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-25 04:41:56.821353 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20251208', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-25 04:41:56.821386 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-25 04:41:56.821418 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 
'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-25 04:41:57.575398 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20251208', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-25 04:41:57.575505 | orchestrator | 2026-03-25 04:41:57.575522 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2026-03-25 04:41:57.575535 | orchestrator | Wednesday 25 March 2026 04:41:56 +0000 (0:00:06.880) 0:06:16.677 ******* 2026-03-25 04:41:57.575569 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-03-25 04:41:57.575586 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-03-25 04:41:57.575621 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-25 04:41:57.575652 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20251208', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-25 04:41:57.575665 | orchestrator | skipping: [testbed-node-0]
2026-03-25 04:41:57.575678 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-03-25 04:41:57.575696 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-03-25 04:41:57.575717 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-25 04:41:57.575729 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20251208', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-25 04:41:57.575740 | orchestrator | skipping: [testbed-node-1]
2026-03-25 04:41:57.575761 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-03-25 04:42:17.174089 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-03-25 04:42:17.174225 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-25 04:42:17.174272 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20251208', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-25 04:42:17.174354 | orchestrator | skipping: [testbed-node-2]
2026-03-25 04:42:17.174373 | orchestrator |
2026-03-25 04:42:17.174388 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] **************************
2026-03-25 04:42:17.174405 | orchestrator | Wednesday 25 March 2026 04:41:58 +0000 (0:00:01.904)       0:06:18.581 *******
2026-03-25 04:42:17.174421 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-03-25 04:42:17.174440 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-03-25 04:42:17.174457 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-03-25 04:42:17.174473 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-03-25 04:42:17.174489 | orchestrator | skipping: [testbed-node-0]
2026-03-25 04:42:17.174504 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-03-25 04:42:17.174541 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-03-25 04:42:17.174553 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-03-25 04:42:17.174564 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-03-25 04:42:17.174574 | orchestrator | skipping: [testbed-node-1]
2026-03-25 04:42:17.174583 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-03-25 04:42:17.174612 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-03-25 04:42:17.174623 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-03-25 04:42:17.174633 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-03-25 04:42:17.174643 | orchestrator | skipping: [testbed-node-2]
2026-03-25 04:42:17.174653 | orchestrator |
2026-03-25 04:42:17.174663 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] ***************
2026-03-25 04:42:17.174673 | orchestrator | Wednesday 25 March 2026 04:42:01 +0000 (0:00:02.753)       0:06:21.334 *******
2026-03-25 04:42:17.174683 | orchestrator | ok: [testbed-node-0]
2026-03-25 04:42:17.174693 | orchestrator | ok: [testbed-node-1]
2026-03-25 04:42:17.174703 | orchestrator | ok: [testbed-node-2]
2026-03-25 04:42:17.174712 | orchestrator |
2026-03-25 04:42:17.174722 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] ***************
2026-03-25 04:42:17.174731 | orchestrator | Wednesday 25 March 2026 04:42:03 +0000 (0:00:02.322)       0:06:23.657 *******
2026-03-25 04:42:17.174741 | orchestrator | ok: [testbed-node-0]
2026-03-25 04:42:17.174751 | orchestrator | ok: [testbed-node-1]
2026-03-25 04:42:17.174760 | orchestrator | ok: [testbed-node-2]
2026-03-25 04:42:17.174770 | orchestrator |
2026-03-25 04:42:17.174779 | orchestrator | TASK [include_role : nova-cell] ************************************************
2026-03-25 04:42:17.174789 | orchestrator | Wednesday 25 March 2026 04:42:06 +0000 (0:00:03.071)       0:06:26.728 *******
2026-03-25 04:42:17.174798 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-25 04:42:17.174808 | orchestrator |
2026-03-25 04:42:17.174818 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ******************
2026-03-25 04:42:17.174828 | orchestrator | Wednesday 25 March 2026 04:42:09 +0000 (0:00:02.806)       0:06:29.535 *******
2026-03-25 04:42:17.174838 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy)
2026-03-25 04:42:17.174849 | orchestrator |
2026-03-25 04:42:17.174859 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] ***
2026-03-25 04:42:17.174868 | orchestrator | Wednesday 25 March 2026 04:42:11 +0000 (0:00:01.757)       0:06:31.292 *******
2026-03-25 04:42:17.174880 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-03-25 04:42:17.174892 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-03-25 04:42:17.174911 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-03-25 04:42:36.986615 | orchestrator |
2026-03-25 04:42:36.986755 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] ***
2026-03-25 04:42:36.986784 | orchestrator | Wednesday 25 March 2026 04:42:17 +0000 (0:00:05.721)       0:06:37.014 *******
2026-03-25 04:42:36.986831 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-03-25 04:42:36.986856 | orchestrator | skipping: [testbed-node-0]
2026-03-25 04:42:36.986879 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-03-25 04:42:36.986899 | orchestrator | skipping: [testbed-node-1]
2026-03-25 04:42:36.986920 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-03-25 04:42:36.986941 | orchestrator | skipping: [testbed-node-2]
2026-03-25 04:42:36.986961 | orchestrator |
2026-03-25 04:42:36.986980 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] *****
2026-03-25 04:42:36.987001 | orchestrator | Wednesday 25 March 2026 04:42:19 +0000 (0:00:02.440)       0:06:39.455 *******
2026-03-25 04:42:36.987023 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-03-25 04:42:36.987047 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-03-25 04:42:36.987070 | orchestrator | skipping: [testbed-node-0]
2026-03-25 04:42:36.987090 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-03-25 04:42:36.987114 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-03-25 04:42:36.987136 | orchestrator | skipping: [testbed-node-1]
2026-03-25 04:42:36.987195 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-03-25 04:42:36.987217 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-03-25 04:42:36.987239 | orchestrator | skipping: [testbed-node-2]
2026-03-25 04:42:36.987262 | orchestrator |
2026-03-25 04:42:36.987283 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] **********
2026-03-25 04:42:36.987303 | orchestrator | Wednesday 25 March 2026 04:42:22 +0000 (0:00:02.717)       0:06:42.172 *******
2026-03-25 04:42:36.987323 | orchestrator | ok: [testbed-node-0]
2026-03-25 04:42:36.987413 | orchestrator | ok: [testbed-node-1]
2026-03-25 04:42:36.987433 | orchestrator | ok: [testbed-node-2]
2026-03-25 04:42:36.987451 | orchestrator |
2026-03-25 04:42:36.987470 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] **********
2026-03-25 04:42:36.987488 | orchestrator | Wednesday 25 March 2026 04:42:26 +0000 (0:00:03.849)       0:06:46.021 *******
2026-03-25 04:42:36.987506 | orchestrator | ok: [testbed-node-0]
2026-03-25 04:42:36.987524 | orchestrator | ok: [testbed-node-1]
2026-03-25 04:42:36.987571 | orchestrator | ok: [testbed-node-2]
2026-03-25 04:42:36.987590 | orchestrator |
2026-03-25 04:42:36.987608 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] *************
2026-03-25 04:42:36.987626 | orchestrator | Wednesday 25 March 2026 04:42:30 +0000 (0:00:04.021)       0:06:50.043 *******
2026-03-25 04:42:36.987647 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy)
2026-03-25 04:42:36.987666 | orchestrator |
2026-03-25 04:42:36.987684 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] ***
2026-03-25 04:42:36.987701 | orchestrator | Wednesday 25 March 2026 04:42:31 +0000 (0:00:01.737)       0:06:51.780 *******
2026-03-25 04:42:36.987734 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-03-25 04:42:36.987757 | orchestrator | skipping: [testbed-node-0]
2026-03-25 04:42:36.987778 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-03-25 04:42:36.987799 | orchestrator | skipping: [testbed-node-1]
2026-03-25 04:42:36.987819 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-03-25 04:42:36.987838 | orchestrator | skipping: [testbed-node-2]
2026-03-25 04:42:36.987856 | orchestrator |
2026-03-25 04:42:36.987893 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] ***
2026-03-25 04:42:36.987914 | orchestrator | Wednesday 25 March 2026 04:42:34 +0000 (0:00:02.482)       0:06:54.263 *******
2026-03-25 04:42:36.987932 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-03-25 04:42:36.987949 | orchestrator | skipping: [testbed-node-0]
2026-03-25 04:42:36.987966 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-03-25 04:42:36.987984 | orchestrator | skipping: [testbed-node-1]
2026-03-25 04:42:36.988019 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-03-25 04:43:12.277136 | orchestrator | skipping: [testbed-node-2]
2026-03-25 04:43:12.277283 | orchestrator |
2026-03-25 04:43:12.277312 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] ***
2026-03-25 04:43:12.277335 | orchestrator | Wednesday 25 March 2026 04:42:36 +0000 (0:00:02.558)       0:06:56.821 *******
2026-03-25 04:43:12.277356 | orchestrator | skipping: [testbed-node-0]
2026-03-25 04:43:12.277374 | orchestrator | skipping: [testbed-node-1]
2026-03-25 04:43:12.277393 | orchestrator | skipping: [testbed-node-2]
2026-03-25 04:43:12.277411 | orchestrator |
2026-03-25 04:43:12.277493 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] **********
2026-03-25 04:43:12.277514 | orchestrator | Wednesday 25 March 2026 04:42:39 +0000 (0:00:02.423)       0:06:59.245 *******
2026-03-25 04:43:12.277534 | orchestrator | ok: [testbed-node-0]
2026-03-25 04:43:12.277555 | orchestrator | ok: [testbed-node-1]
2026-03-25 04:43:12.277575 | orchestrator | ok: [testbed-node-2]
2026-03-25 04:43:12.277595 | orchestrator |
2026-03-25 04:43:12.277637 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] **********
2026-03-25 04:43:12.277659 | orchestrator | Wednesday 25 March 2026 04:42:42 +0000 (0:00:03.503)       0:07:02.748 *******
2026-03-25 04:43:12.277682 | orchestrator | ok: [testbed-node-0]
2026-03-25 04:43:12.277703 | orchestrator | ok: [testbed-node-1]
2026-03-25 04:43:12.277725 | orchestrator | ok: [testbed-node-2]
2026-03-25 04:43:12.277746 | orchestrator |
2026-03-25 04:43:12.277766 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] *****************
2026-03-25 04:43:12.277787 | orchestrator | Wednesday 25 March 2026 04:42:47 +0000 (0:00:04.129)       0:07:06.878 *******
2026-03-25 04:43:12.277808 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy)
2026-03-25 04:43:12.277829 | orchestrator |
2026-03-25 04:43:12.277849 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] ***
2026-03-25 04:43:12.277869 | orchestrator | Wednesday 25 March 2026 04:42:49 +0000 (0:00:02.422)       0:07:09.301 *******
2026-03-25 04:43:12.277927 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2026-03-25 04:43:12.277952 | orchestrator | skipping: [testbed-node-0]
2026-03-25 04:43:12.277973 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2026-03-25 04:43:12.277994 | orchestrator | skipping: [testbed-node-1]
2026-03-25 04:43:12.278015 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2026-03-25 04:43:12.278141 | orchestrator | skipping: [testbed-node-2]
2026-03-25 04:43:12.278162 | orchestrator |
2026-03-25 04:43:12.278180 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] ***
2026-03-25 04:43:12.278200 | orchestrator | Wednesday 25 March 2026 04:42:51 +0000 (0:00:02.440)       0:07:11.741 *******
2026-03-25 04:43:12.278219 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2026-03-25 04:43:12.278237 | orchestrator | skipping: [testbed-node-0]
2026-03-25 04:43:12.278288 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2026-03-25 04:43:12.278308 | orchestrator | skipping: [testbed-node-1]
2026-03-25 04:43:12.278337 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2026-03-25 04:43:12.278362 | orchestrator | skipping: [testbed-node-2]
2026-03-25 04:43:12.278397 | orchestrator |
2026-03-25 04:43:12.278443 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] ****
2026-03-25 04:43:12.278464 | orchestrator | Wednesday 25 March 2026 04:42:54 +0000 (0:00:02.500)       0:07:14.242 *******
2026-03-25 04:43:12.278484 | orchestrator | skipping: [testbed-node-0]
2026-03-25 04:43:12.278503 | orchestrator | skipping: [testbed-node-1]
2026-03-25 04:43:12.278523 | orchestrator | skipping: [testbed-node-2]
2026-03-25 04:43:12.278542 | orchestrator |
2026-03-25 04:43:12.278562 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] **********
2026-03-25 04:43:12.278580 | orchestrator | Wednesday 25 March 2026 04:42:56 +0000 (0:00:02.613)       0:07:16.856 *******
2026-03-25 04:43:12.278596 | orchestrator | ok: [testbed-node-0]
2026-03-25 04:43:12.278614 | orchestrator | ok: [testbed-node-1]
2026-03-25 04:43:12.278632 | orchestrator | ok: [testbed-node-2]
2026-03-25 04:43:12.278651 | orchestrator |
2026-03-25 04:43:12.278669 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] **********
2026-03-25 04:43:12.278687 | orchestrator | Wednesday 25 March 2026 04:43:00 +0000 (0:00:03.593)       0:07:20.450 *******
2026-03-25 04:43:12.278705 | orchestrator | ok: [testbed-node-0]
2026-03-25 04:43:12.278724 | orchestrator | ok: [testbed-node-1]
2026-03-25 04:43:12.278737 | orchestrator | ok: [testbed-node-2]
2026-03-25 04:43:12.278747 | orchestrator |
2026-03-25 04:43:12.278758 | orchestrator | TASK [include_role : octavia] **************************************************
2026-03-25 04:43:12.278769 | orchestrator | Wednesday 25 March 2026 04:43:04 +0000 (0:00:04.326)       0:07:24.776 *******
2026-03-25 04:43:12.278780 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-25 04:43:12.278790 | orchestrator |
2026-03-25 04:43:12.278801 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ********************
2026-03-25 04:43:12.278811 | orchestrator | Wednesday 25 March 2026 04:43:07 +0000 (0:00:02.526)       0:07:27.302 *******
2026-03-25 04:43:12.278825 | orchestrator | ok: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-03-25 04:43:12.278839 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-03-25 04:43:12.278864 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-03-25 04:43:13.428085 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-03-25 04:43:13.428188 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-25 04:43:13.428205 | orchestrator | ok: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-03-25 04:43:13.428220 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image':
'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-25 04:43:13.428233 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-25 04:43:13.428262 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-25 04:43:13.428296 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-25 04:43:13.428310 | orchestrator | ok: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-25 04:43:13.428322 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-25 04:43:13.428334 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-25 04:43:13.428345 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-25 04:43:13.428357 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-25 04:43:13.428376 | orchestrator | 2026-03-25 04:43:13.428397 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2026-03-25 04:43:14.421760 | orchestrator | Wednesday 25 March 2026 04:43:13 +0000 (0:00:05.979) 0:07:33.282 ******* 2026-03-25 04:43:14.421862 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-25 04:43:14.421885 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  
2026-03-25 04:43:14.421897 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-25 04:43:14.421909 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-25 04:43:14.421920 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
octavia-worker 5672'], 'timeout': '30'}}})  2026-03-25 04:43:14.421950 | orchestrator | skipping: [testbed-node-0] 2026-03-25 04:43:14.421987 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-25 04:43:14.421999 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-25 04:43:14.422010 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-25 04:43:14.422091 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-25 04:43:14.422102 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-25 04:43:14.422112 | orchestrator | skipping: [testbed-node-1] 2026-03-25 04:43:14.422122 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': 
{'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-25 04:43:14.422149 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-25 04:43:33.121325 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-25 04:43:33.121441 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-25 04:43:33.121458 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-25 04:43:33.121529 | orchestrator | skipping: [testbed-node-2] 2026-03-25 04:43:33.121545 | orchestrator | 2026-03-25 04:43:33.121557 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2026-03-25 04:43:33.121569 | orchestrator | Wednesday 25 March 2026 04:43:15 +0000 (0:00:02.173) 0:07:35.455 ******* 2026-03-25 04:43:33.121581 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-25 04:43:33.121594 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-25 04:43:33.121632 | orchestrator | skipping: [testbed-node-0] 2026-03-25 04:43:33.121644 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-25 04:43:33.121656 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-25 04:43:33.121667 | orchestrator | skipping: [testbed-node-1] 2026-03-25 04:43:33.121678 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-25 04:43:33.121688 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-25 04:43:33.121700 | orchestrator | skipping: [testbed-node-2] 2026-03-25 04:43:33.121710 | orchestrator | 2026-03-25 04:43:33.121721 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2026-03-25 04:43:33.121732 | orchestrator | Wednesday 25 March 2026 04:43:17 +0000 (0:00:02.120) 0:07:37.576 ******* 2026-03-25 04:43:33.121743 | orchestrator | ok: [testbed-node-0] 2026-03-25 
04:43:33.121754 | orchestrator | ok: [testbed-node-1] 2026-03-25 04:43:33.121765 | orchestrator | ok: [testbed-node-2] 2026-03-25 04:43:33.121775 | orchestrator | 2026-03-25 04:43:33.121786 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2026-03-25 04:43:33.121797 | orchestrator | Wednesday 25 March 2026 04:43:19 +0000 (0:00:02.261) 0:07:39.837 ******* 2026-03-25 04:43:33.121808 | orchestrator | ok: [testbed-node-0] 2026-03-25 04:43:33.121818 | orchestrator | ok: [testbed-node-1] 2026-03-25 04:43:33.121853 | orchestrator | ok: [testbed-node-2] 2026-03-25 04:43:33.121867 | orchestrator | 2026-03-25 04:43:33.121880 | orchestrator | TASK [include_role : opensearch] *********************************************** 2026-03-25 04:43:33.121892 | orchestrator | Wednesday 25 March 2026 04:43:22 +0000 (0:00:02.960) 0:07:42.798 ******* 2026-03-25 04:43:33.121904 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-25 04:43:33.121917 | orchestrator | 2026-03-25 04:43:33.121929 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2026-03-25 04:43:33.121942 | orchestrator | Wednesday 25 March 2026 04:43:25 +0000 (0:00:02.686) 0:07:45.484 ******* 2026-03-25 04:43:33.121956 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-03-25 04:43:33.121974 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-03-25 04:43:33.121996 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 
'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-03-25 04:43:33.122088 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-03-25 04:43:35.390275 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-03-25 04:43:35.390383 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-03-25 04:43:35.390426 | orchestrator | 2026-03-25 04:43:35.390442 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2026-03-25 
04:43:35.390454 | orchestrator | Wednesday 25 March 2026 04:43:33 +0000 (0:00:07.484) 0:07:52.969 ******* 2026-03-25 04:43:35.390466 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-03-25 04:43:35.390566 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 
'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-03-25 04:43:35.390582 | orchestrator | skipping: [testbed-node-0] 2026-03-25 04:43:35.390594 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-03-25 04:43:35.390615 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 
'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-03-25 04:43:35.390627 | orchestrator | skipping: [testbed-node-1] 2026-03-25 04:43:35.390638 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-03-25 04:43:35.390665 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-03-25 04:43:46.384015 | orchestrator | skipping: [testbed-node-2] 2026-03-25 04:43:46.384124 | orchestrator | 2026-03-25 04:43:46.384140 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2026-03-25 04:43:46.384175 | orchestrator | Wednesday 25 March 2026 04:43:35 +0000 (0:00:02.273) 0:07:55.242 ******* 2026-03-25 04:43:46.384186 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}})  2026-03-25 04:43:46.384200 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-03-25 04:43:46.384213 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 
'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-03-25 04:43:46.384225 | orchestrator | skipping: [testbed-node-0] 2026-03-25 04:43:46.384235 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}})  2026-03-25 04:43:46.384245 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-03-25 04:43:46.384255 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-03-25 04:43:46.384265 | orchestrator | skipping: [testbed-node-1] 2026-03-25 04:43:46.384275 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}})  2026-03-25 04:43:46.384285 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-03-25 04:43:46.384295 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-03-25 04:43:46.384304 | orchestrator | skipping: [testbed-node-2] 2026-03-25 04:43:46.384314 | orchestrator | 2026-03-25 04:43:46.384324 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2026-03-25 04:43:46.384333 | orchestrator | Wednesday 25 March 2026 04:43:37 +0000 (0:00:01.756) 0:07:56.999 ******* 2026-03-25 04:43:46.384343 | orchestrator | skipping: [testbed-node-0] 2026-03-25 04:43:46.384352 | orchestrator | skipping: [testbed-node-1] 2026-03-25 04:43:46.384377 | orchestrator | skipping: [testbed-node-2] 2026-03-25 04:43:46.384387 | orchestrator | 2026-03-25 04:43:46.384397 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2026-03-25 04:43:46.384407 | orchestrator | Wednesday 25 March 2026 04:43:38 +0000 (0:00:01.510) 0:07:58.509 ******* 2026-03-25 04:43:46.384416 | orchestrator | skipping: [testbed-node-0] 2026-03-25 04:43:46.384426 | orchestrator | skipping: [testbed-node-1] 2026-03-25 04:43:46.384435 | orchestrator | skipping: [testbed-node-2] 2026-03-25 04:43:46.384444 | orchestrator | 2026-03-25 04:43:46.384461 | orchestrator | TASK [include_role : prometheus] *********************************************** 2026-03-25 04:43:46.384470 | orchestrator | Wednesday 25 March 2026 04:43:40 +0000 (0:00:02.332) 0:08:00.841 ******* 2026-03-25 04:43:46.384480 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-25 04:43:46.384490 | orchestrator | 2026-03-25 04:43:46.384529 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2026-03-25 04:43:46.384540 | orchestrator | Wednesday 25 March 2026 04:43:43 +0000 (0:00:02.642) 0:08:03.484 ******* 2026-03-25 04:43:46.384571 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20251208', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-03-25 04:43:46.384587 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20251208', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-25 04:43:46.384599 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20251208', 'volumes': 
['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-03-25 04:43:46.384611 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20251208', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 04:43:46.384628 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20251208', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-25 
04:43:46.384653 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20251208', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 04:43:48.211332 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20251208', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 04:43:48.211438 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20251208', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-25 04:43:48.211454 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20251208', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 04:43:48.211467 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20251208', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-25 04:43:48.211560 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20251208', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check 
send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-03-25 04:43:48.211605 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20251208', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-25 04:43:48.211636 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20251208', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 04:43:48.211649 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20251208', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 04:43:48.211661 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20251208', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-25 04:43:48.211673 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20251208', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-03-25 04:43:48.211692 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20251208', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-03-25 04:43:48.211721 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20251208', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-03-25 04:43:50.724765 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 
'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20251208', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-03-25 04:43:50.724894 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20251208', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 04:43:50.724913 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20251208', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 04:43:50.724925 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 
'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20251208', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 04:43:50.724978 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20251208', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 04:43:50.724991 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20251208', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-25 04:43:50.725002 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20251208', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-25 
04:43:50.725038 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20251208', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-03-25 04:43:50.725052 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20251208', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': 
'9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-03-25 04:43:50.725063 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20251208', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 04:43:50.725092 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20251208', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 04:43:50.725103 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20251208', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-25 04:43:50.725116 | orchestrator | 2026-03-25 04:43:50.725130 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2026-03-25 04:43:50.725142 | orchestrator | Wednesday 25 March 2026 04:43:49 
+0000 (0:00:06.081) 0:08:09.565 ******* 2026-03-25 04:43:50.725165 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20251208', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-03-25 04:43:50.909750 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20251208', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-25 04:43:50.909848 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20251208', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 04:43:50.909863 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20251208', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 04:43:50.909913 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20251208', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-25 04:43:50.909929 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20251208', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-03-25 04:43:50.909958 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20251208', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-03-25 04:43:50.909972 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20251208', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 04:43:50.909984 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20251208', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 04:43:50.910003 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20251208', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-25 04:43:50.910074 | orchestrator | skipping: [testbed-node-0] 2026-03-25 04:43:50.910100 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20251208', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 
'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-03-25 04:43:50.910122 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20251208', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-25 04:43:50.910141 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20251208', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 04:43:50.910174 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20251208', 
'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 04:43:52.105814 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20251208', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-25 04:43:52.105952 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20251208', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-03-25 04:43:52.105987 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20251208', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-03-25 04:43:52.106001 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20251208', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 04:43:52.106074 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20251208', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 04:43:52.106109 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20251208', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-25 04:43:52.106123 | orchestrator | skipping: [testbed-node-1] 2026-03-25 04:43:52.106136 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20251208', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-03-25 04:43:52.106174 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 
'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20251208', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-25 04:43:52.106187 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20251208', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 04:43:52.106199 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20251208', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 04:43:52.106210 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20251208', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-25 04:43:52.106230 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20251208', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-03-25 04:44:04.470933 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20251208', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 
'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-03-25 04:44:04.471066 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20251208', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 04:44:04.471084 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20251208', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 04:44:04.471096 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20251208', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-25 04:44:04.471109 | orchestrator | skipping: [testbed-node-2] 2026-03-25 
04:44:04.471123 | orchestrator | 2026-03-25 04:44:04.471135 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2026-03-25 04:44:04.471147 | orchestrator | Wednesday 25 March 2026 04:43:52 +0000 (0:00:02.396) 0:08:11.962 ******* 2026-03-25 04:44:04.471160 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-03-25 04:44:04.471174 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-03-25 04:44:04.471206 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-03-25 04:44:04.471271 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-03-25 04:44:04.471287 | orchestrator | skipping: [testbed-node-0] 2026-03-25 04:44:04.471298 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-03-25 04:44:04.471310 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-03-25 04:44:04.471321 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-03-25 04:44:04.471338 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-03-25 04:44:04.471350 | orchestrator | skipping: [testbed-node-1] 2026-03-25 04:44:04.471361 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-03-25 
04:44:04.471372 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-03-25 04:44:04.471384 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-03-25 04:44:04.471395 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-03-25 04:44:04.471406 | orchestrator | skipping: [testbed-node-2] 2026-03-25 04:44:04.471417 | orchestrator | 2026-03-25 04:44:04.471437 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2026-03-25 04:44:04.471448 | orchestrator | Wednesday 25 March 2026 04:43:54 +0000 (0:00:01.975) 0:08:13.937 ******* 2026-03-25 04:44:04.471459 | orchestrator | skipping: [testbed-node-0] 2026-03-25 04:44:04.471472 | orchestrator | skipping: [testbed-node-1] 2026-03-25 04:44:04.471486 | orchestrator | skipping: [testbed-node-2] 2026-03-25 04:44:04.471498 | orchestrator | 2026-03-25 04:44:04.471511 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2026-03-25 04:44:04.471523 | orchestrator | Wednesday 25 March 
2026 04:43:56 +0000 (0:00:02.039) 0:08:15.977 ******* 2026-03-25 04:44:04.471535 | orchestrator | skipping: [testbed-node-0] 2026-03-25 04:44:04.471572 | orchestrator | skipping: [testbed-node-1] 2026-03-25 04:44:04.471584 | orchestrator | skipping: [testbed-node-2] 2026-03-25 04:44:04.471597 | orchestrator | 2026-03-25 04:44:04.471609 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2026-03-25 04:44:04.471622 | orchestrator | Wednesday 25 March 2026 04:43:58 +0000 (0:00:02.292) 0:08:18.270 ******* 2026-03-25 04:44:04.471634 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-25 04:44:04.471647 | orchestrator | 2026-03-25 04:44:04.471660 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2026-03-25 04:44:04.471672 | orchestrator | Wednesday 25 March 2026 04:44:00 +0000 (0:00:02.351) 0:08:20.621 ******* 2026-03-25 04:44:04.471693 | orchestrator | ok: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-25 
04:44:22.128439 | orchestrator | ok: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-25 04:44:22.128548 | orchestrator | ok: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': 
'15672', 'host_group': 'rabbitmq'}}}}) 2026-03-25 04:44:22.128652 | orchestrator | 2026-03-25 04:44:22.128668 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2026-03-25 04:44:22.128681 | orchestrator | Wednesday 25 March 2026 04:44:04 +0000 (0:00:03.696) 0:08:24.317 ******* 2026-03-25 04:44:22.128694 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-25 04:44:22.128794 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-25 04:44:22.128811 | orchestrator | skipping: [testbed-node-0] 2026-03-25 04:44:22.128824 | orchestrator | skipping: [testbed-node-1] 2026-03-25 04:44:22.128844 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-25 04:44:22.128856 | orchestrator | skipping: [testbed-node-2] 2026-03-25 04:44:22.128867 | orchestrator | 2026-03-25 04:44:22.128878 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2026-03-25 04:44:22.128898 | orchestrator | Wednesday 25 March 2026 04:44:05 +0000 (0:00:01.528) 0:08:25.846 ******* 2026-03-25 04:44:22.128910 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})
2026-03-25 04:44:22.128922 | orchestrator | skipping: [testbed-node-0]
2026-03-25 04:44:22.128933 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})
2026-03-25 04:44:22.128944 | orchestrator | skipping: [testbed-node-1]
2026-03-25 04:44:22.128955 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})
2026-03-25 04:44:22.128966 | orchestrator | skipping: [testbed-node-2]
2026-03-25 04:44:22.128977 | orchestrator |
2026-03-25 04:44:22.128987 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] ***********
2026-03-25 04:44:22.128998 | orchestrator | Wednesday 25 March 2026 04:44:07 +0000 (0:00:01.486) 0:08:27.332 *******
2026-03-25 04:44:22.129009 | orchestrator | skipping: [testbed-node-0]
2026-03-25 04:44:22.129020 | orchestrator | skipping: [testbed-node-1]
2026-03-25 04:44:22.129031 | orchestrator | skipping: [testbed-node-2]
2026-03-25 04:44:22.129041 | orchestrator |
2026-03-25 04:44:22.129052 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] ***********
2026-03-25 04:44:22.129063 | orchestrator | Wednesday 25 March 2026 04:44:09 +0000 (0:00:01.890) 0:08:29.222 *******
2026-03-25 04:44:22.129074 | orchestrator | skipping: [testbed-node-0]
2026-03-25 04:44:22.129084 | orchestrator | skipping: [testbed-node-1]
2026-03-25 04:44:22.129095 | orchestrator | skipping: [testbed-node-2]
2026-03-25 04:44:22.129106 | orchestrator |
2026-03-25 04:44:22.129117 | orchestrator | TASK [include_role : skyline] **************************************************
2026-03-25 04:44:22.129127 | orchestrator | Wednesday 25 March 2026
04:44:11 +0000 (0:00:02.231) 0:08:31.454 ******* 2026-03-25 04:44:22.129139 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-25 04:44:22.129156 | orchestrator | 2026-03-25 04:44:22.129175 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2026-03-25 04:44:22.129194 | orchestrator | Wednesday 25 March 2026 04:44:13 +0000 (0:00:02.316) 0:08:33.771 ******* 2026-03-25 04:44:22.129214 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-03-25 04:44:22.129241 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-03-25 04:44:23.904566 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-03-25 04:44:23.904711 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-03-25 04:44:23.904730 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-03-25 04:44:23.904763 | orchestrator | changed: [testbed-node-1] 
=> (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-03-25 04:44:23.904798 | orchestrator | 2026-03-25 04:44:23.904812 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2026-03-25 04:44:23.904824 | orchestrator | Wednesday 25 March 2026 04:44:22 +0000 (0:00:08.209) 0:08:41.980 ******* 2026-03-25 04:44:23.904878 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-03-25 04:44:23.904893 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-03-25 04:44:23.904905 | orchestrator | skipping: [testbed-node-0] 2026-03-25 04:44:23.904918 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-03-25 04:44:23.904953 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-03-25 04:44:45.755746 | orchestrator | skipping: [testbed-node-1] 2026-03-25 04:44:45.755863 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-03-25 04:44:45.755883 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-03-25 04:44:45.755896 | orchestrator | 
skipping: [testbed-node-2] 2026-03-25 04:44:45.755907 | orchestrator | 2026-03-25 04:44:45.755919 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2026-03-25 04:44:45.755931 | orchestrator | Wednesday 25 March 2026 04:44:23 +0000 (0:00:01.772) 0:08:43.752 ******* 2026-03-25 04:44:45.755944 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-03-25 04:44:45.755957 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-03-25 04:44:45.755995 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-03-25 04:44:45.756023 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-03-25 04:44:45.756036 | orchestrator | skipping: [testbed-node-0] 2026-03-25 04:44:45.756055 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-03-25 04:44:45.756073 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-03-25 04:44:45.756114 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-03-25 04:44:45.756136 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-03-25 04:44:45.756154 | orchestrator | skipping: [testbed-node-1] 2026-03-25 04:44:45.756172 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-03-25 04:44:45.756184 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-03-25 04:44:45.756196 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-03-25 04:44:45.756207 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})
2026-03-25 04:44:45.756218 | orchestrator | skipping: [testbed-node-2]
2026-03-25 04:44:45.756231 | orchestrator |
2026-03-25 04:44:45.756244 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************
2026-03-25 04:44:45.756256 | orchestrator | Wednesday 25 March 2026 04:44:26 +0000 (0:00:02.123) 0:08:45.875 *******
2026-03-25 04:44:45.756269 | orchestrator | ok: [testbed-node-0]
2026-03-25 04:44:45.756286 | orchestrator | ok: [testbed-node-1]
2026-03-25 04:44:45.756305 | orchestrator | ok: [testbed-node-2]
2026-03-25 04:44:45.756325 | orchestrator |
2026-03-25 04:44:45.756345 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************
2026-03-25 04:44:45.756376 | orchestrator | Wednesday 25 March 2026 04:44:28 +0000 (0:00:02.319) 0:08:48.195 *******
2026-03-25 04:44:45.756389 | orchestrator | ok: [testbed-node-0]
2026-03-25 04:44:45.756401 | orchestrator | ok: [testbed-node-1]
2026-03-25 04:44:45.756413 | orchestrator | ok: [testbed-node-2]
2026-03-25 04:44:45.756425 | orchestrator |
2026-03-25 04:44:45.756438 | orchestrator | TASK [include_role : tacker] ***************************************************
2026-03-25 04:44:45.756450 | orchestrator | Wednesday 25 March 2026 04:44:31 +0000 (0:00:03.270) 0:08:51.466 *******
2026-03-25 04:44:45.756462 | orchestrator | skipping: [testbed-node-0]
2026-03-25 04:44:45.756474 | orchestrator | skipping: [testbed-node-1]
2026-03-25 04:44:45.756486 | orchestrator | skipping: [testbed-node-2]
2026-03-25 04:44:45.756498 | orchestrator |
2026-03-25 04:44:45.756511 | orchestrator | TASK [include_role : trove] ****************************************************
2026-03-25 04:44:45.756522 | orchestrator | Wednesday 25 March 2026 04:44:33 +0000 (0:00:01.494) 0:08:52.961 *******
2026-03-25 04:44:45.756535 | orchestrator | skipping: [testbed-node-0]
2026-03-25 04:44:45.756546 | orchestrator | skipping: [testbed-node-1]
2026-03-25 04:44:45.756559 | orchestrator | skipping: [testbed-node-2]
2026-03-25 04:44:45.756578 | orchestrator |
2026-03-25 04:44:45.756595 | orchestrator | TASK [include_role : venus] ****************************************************
2026-03-25 04:44:45.756614 | orchestrator | Wednesday 25 March 2026 04:44:34 +0000 (0:00:01.427) 0:08:54.388 *******
2026-03-25 04:44:45.756632 | orchestrator | skipping: [testbed-node-0]
2026-03-25 04:44:45.756681 | orchestrator | skipping: [testbed-node-1]
2026-03-25 04:44:45.756700 | orchestrator | skipping: [testbed-node-2]
2026-03-25 04:44:45.756719 | orchestrator |
2026-03-25 04:44:45.756736 | orchestrator | TASK [include_role : watcher] **************************************************
2026-03-25 04:44:45.756755 | orchestrator | Wednesday 25 March 2026 04:44:36 +0000 (0:00:01.778) 0:08:56.167 *******
2026-03-25 04:44:45.756774 | orchestrator | skipping: [testbed-node-0]
2026-03-25 04:44:45.756785 | orchestrator | skipping: [testbed-node-1]
2026-03-25 04:44:45.756796 | orchestrator | skipping: [testbed-node-2]
2026-03-25 04:44:45.756807 | orchestrator |
2026-03-25 04:44:45.756817 | orchestrator | TASK [include_role : zun] ******************************************************
2026-03-25 04:44:45.756827 | orchestrator | Wednesday 25 March 2026 04:44:37 +0000 (0:00:01.373) 0:08:57.541 *******
2026-03-25 04:44:45.756838 | orchestrator | skipping: [testbed-node-0]
2026-03-25 04:44:45.756848 | orchestrator | skipping: [testbed-node-1]
2026-03-25 04:44:45.756859 | orchestrator | skipping: [testbed-node-2]
2026-03-25 04:44:45.756869 | orchestrator |
2026-03-25 04:44:45.756880 | orchestrator | TASK [include_role : loadbalancer] *********************************************
2026-03-25 04:44:45.756890 | orchestrator | Wednesday 25 March 2026 04:44:39 +0000 (0:00:01.350) 0:08:58.892 *******
2026-03-25
04:44:45.756902 | orchestrator | included: loadbalancer for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-25 04:44:45.756914 | orchestrator | 2026-03-25 04:44:45.756924 | orchestrator | TASK [service-check-containers : loadbalancer | Check containers] ************** 2026-03-25 04:44:45.756935 | orchestrator | Wednesday 25 March 2026 04:44:41 +0000 (0:00:02.711) 0:09:01.603 ******* 2026-03-25 04:44:45.756959 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-25 04:44:49.832983 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-25 04:44:49.833113 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-25 04:44:49.833130 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-25 04:44:49.833142 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-25 04:44:49.833168 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': 
{'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-25 04:44:49.833181 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-25 04:44:49.833212 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-25 04:44:49.833234 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 
'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-25 04:44:49.833246 | orchestrator | 2026-03-25 04:44:49.833259 | orchestrator | TASK [service-check-containers : loadbalancer | Notify handlers to restart containers] *** 2026-03-25 04:44:49.833271 | orchestrator | Wednesday 25 March 2026 04:44:45 +0000 (0:00:04.001) 0:09:05.604 ******* 2026-03-25 04:44:49.833283 | orchestrator | changed: [testbed-node-0] => { 2026-03-25 04:44:49.833296 | orchestrator |  "msg": "Notifying handlers" 2026-03-25 04:44:49.833307 | orchestrator | } 2026-03-25 04:44:49.833318 | orchestrator | changed: [testbed-node-1] => { 2026-03-25 04:44:49.833329 | orchestrator |  "msg": "Notifying handlers" 2026-03-25 04:44:49.833339 | orchestrator | } 2026-03-25 04:44:49.833350 | orchestrator | changed: [testbed-node-2] => { 2026-03-25 04:44:49.833361 | orchestrator |  "msg": "Notifying handlers" 2026-03-25 04:44:49.833371 | orchestrator | } 2026-03-25 04:44:49.833382 | orchestrator | 2026-03-25 04:44:49.833393 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-03-25 04:44:49.833404 | orchestrator | Wednesday 25 March 2026 04:44:47 +0000 (0:00:01.564) 0:09:07.169 ******* 2026-03-25 04:44:49.833415 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-25 04:44:49.833433 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-25 04:44:49.833445 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-25 04:44:49.833456 | orchestrator | skipping: [testbed-node-0] 2026-03-25 04:44:49.833468 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-25 04:44:49.833496 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-25 04:46:51.001559 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-25 04:46:51.001688 | orchestrator | skipping: [testbed-node-1] 2026-03-25 04:46:51.001711 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-25 04:46:51.001726 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-25 04:46:51.001757 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-25 04:46:51.001771 | orchestrator | skipping: [testbed-node-2] 2026-03-25 04:46:51.001783 | orchestrator | 2026-03-25 04:46:51.001795 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2026-03-25 04:46:51.001807 | orchestrator | Wednesday 25 March 2026 04:44:49 +0000 (0:00:02.509) 0:09:09.679 ******* 2026-03-25 04:46:51.001818 | orchestrator | ok: [testbed-node-0] 2026-03-25 04:46:51.001830 | orchestrator | ok: [testbed-node-1] 2026-03-25 
04:46:51.001865 | orchestrator | ok: [testbed-node-2]
2026-03-25 04:46:51.001876 | orchestrator |
2026-03-25 04:46:51.001888 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] **********************
2026-03-25 04:46:51.001898 | orchestrator | Wednesday 25 March 2026 04:44:51 +0000 (0:00:01.794) 0:09:11.473 *******
2026-03-25 04:46:51.001956 | orchestrator | ok: [testbed-node-0]
2026-03-25 04:46:51.001967 | orchestrator | ok: [testbed-node-1]
2026-03-25 04:46:51.001978 | orchestrator | ok: [testbed-node-2]
2026-03-25 04:46:51.001990 | orchestrator |
2026-03-25 04:46:51.002002 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] **************
2026-03-25 04:46:51.002013 | orchestrator | Wednesday 25 March 2026 04:44:53 +0000 (0:00:01.396) 0:09:12.870 *******
2026-03-25 04:46:51.002065 | orchestrator | skipping: [testbed-node-0]
2026-03-25 04:46:51.002073 | orchestrator | changed: [testbed-node-1]
2026-03-25 04:46:51.002080 | orchestrator | changed: [testbed-node-2]
2026-03-25 04:46:51.002087 | orchestrator |
2026-03-25 04:46:51.002096 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] *****************
2026-03-25 04:46:51.002104 | orchestrator | Wednesday 25 March 2026 04:45:00 +0000 (0:00:07.083) 0:09:19.953 *******
2026-03-25 04:46:51.002112 | orchestrator | skipping: [testbed-node-0]
2026-03-25 04:46:51.002120 | orchestrator | changed: [testbed-node-1]
2026-03-25 04:46:51.002128 | orchestrator | changed: [testbed-node-2]
2026-03-25 04:46:51.002135 | orchestrator |
2026-03-25 04:46:51.002144 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] ****************
2026-03-25 04:46:51.002151 | orchestrator | Wednesday 25 March 2026 04:45:07 +0000 (0:00:07.541) 0:09:27.495 *******
2026-03-25 04:46:51.002159 | orchestrator | skipping: [testbed-node-0]
2026-03-25 04:46:51.002166 | orchestrator | changed: [testbed-node-1]
2026-03-25 04:46:51.002174 | orchestrator | changed: [testbed-node-2]
2026-03-25 04:46:51.002182 | orchestrator |
2026-03-25 04:46:51.002189 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] ****************
2026-03-25 04:46:51.002197 | orchestrator | Wednesday 25 March 2026 04:45:14 +0000 (0:00:07.056) 0:09:34.552 *******
2026-03-25 04:46:51.002205 | orchestrator | skipping: [testbed-node-0]
2026-03-25 04:46:51.002212 | orchestrator | changed: [testbed-node-2]
2026-03-25 04:46:51.002220 | orchestrator | changed: [testbed-node-1]
2026-03-25 04:46:51.002228 | orchestrator |
2026-03-25 04:46:51.002253 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] **************
2026-03-25 04:46:51.002262 | orchestrator | Wednesday 25 March 2026 04:45:22 +0000 (0:00:07.778) 0:09:42.330 *******
2026-03-25 04:46:51.002270 | orchestrator | ok: [testbed-node-2]
2026-03-25 04:46:51.002277 | orchestrator | ok: [testbed-node-1]
2026-03-25 04:46:51.002285 | orchestrator |
2026-03-25 04:46:51.002293 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] ***************
2026-03-25 04:46:51.002301 | orchestrator | Wednesday 25 March 2026 04:45:26 +0000 (0:00:03.777) 0:09:46.108 *******
2026-03-25 04:46:51.002309 | orchestrator | skipping: [testbed-node-0]
2026-03-25 04:46:51.002319 | orchestrator | changed: [testbed-node-2]
2026-03-25 04:46:51.002331 | orchestrator | changed: [testbed-node-1]
2026-03-25 04:46:51.002341 | orchestrator |
2026-03-25 04:46:51.002351 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] *************
2026-03-25 04:46:51.002363 | orchestrator | Wednesday 25 March 2026 04:45:39 +0000 (0:00:13.525) 0:09:59.633 *******
2026-03-25 04:46:51.002372 | orchestrator | ok: [testbed-node-1]
2026-03-25 04:46:51.002383 | orchestrator | ok: [testbed-node-2]
2026-03-25 04:46:51.002393 | orchestrator |
2026-03-25 04:46:51.002404 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] *************
2026-03-25 04:46:51.002416 | orchestrator | Wednesday 25 March 2026 04:45:43 +0000 (0:00:03.753) 0:10:03.386 *******
2026-03-25 04:46:51.002427 | orchestrator | skipping: [testbed-node-0]
2026-03-25 04:46:51.002439 | orchestrator | changed: [testbed-node-2]
2026-03-25 04:46:51.002447 | orchestrator | changed: [testbed-node-1]
2026-03-25 04:46:51.002453 | orchestrator |
2026-03-25 04:46:51.002460 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] *****************
2026-03-25 04:46:51.002479 | orchestrator | Wednesday 25 March 2026 04:45:51 +0000 (0:00:07.560) 0:10:10.947 *******
2026-03-25 04:46:51.002486 | orchestrator | skipping: [testbed-node-1]
2026-03-25 04:46:51.002492 | orchestrator | skipping: [testbed-node-2]
2026-03-25 04:46:51.002499 | orchestrator | changed: [testbed-node-0]
2026-03-25 04:46:51.002505 | orchestrator |
2026-03-25 04:46:51.002512 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] ****************
2026-03-25 04:46:51.002518 | orchestrator | Wednesday 25 March 2026 04:45:57 +0000 (0:00:06.801) 0:10:17.749 *******
2026-03-25 04:46:51.002524 | orchestrator | skipping: [testbed-node-1]
2026-03-25 04:46:51.002530 | orchestrator | skipping: [testbed-node-2]
2026-03-25 04:46:51.002536 | orchestrator | changed: [testbed-node-0]
2026-03-25 04:46:51.002542 | orchestrator |
2026-03-25 04:46:51.002548 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] **************
2026-03-25 04:46:51.002554 | orchestrator | Wednesday 25 March 2026 04:46:04 +0000 (0:00:06.869) 0:10:24.618 *******
2026-03-25 04:46:51.002560 | orchestrator | skipping: [testbed-node-1]
2026-03-25 04:46:51.002566 | orchestrator | skipping: [testbed-node-2]
2026-03-25 04:46:51.002572 | orchestrator | changed: [testbed-node-0]
2026-03-25 04:46:51.002578 | orchestrator |
2026-03-25 04:46:51.002584 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] ****************
2026-03-25 04:46:51.002590 | orchestrator | Wednesday 25 March 2026 04:46:11 +0000 (0:00:06.971) 0:10:31.590 *******
2026-03-25 04:46:51.002597 | orchestrator | skipping: [testbed-node-1]
2026-03-25 04:46:51.002603 | orchestrator | skipping: [testbed-node-2]
2026-03-25 04:46:51.002609 | orchestrator | changed: [testbed-node-0]
2026-03-25 04:46:51.002615 | orchestrator |
2026-03-25 04:46:51.002627 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for master haproxy to start] **************
2026-03-25 04:46:51.002633 | orchestrator | Wednesday 25 March 2026 04:46:18 +0000 (0:00:07.125) 0:10:38.715 *******
2026-03-25 04:46:51.002641 | orchestrator | ok: [testbed-node-0]
2026-03-25 04:46:51.002651 | orchestrator |
2026-03-25 04:46:51.002658 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] ***************
2026-03-25 04:46:51.002664 | orchestrator | Wednesday 25 March 2026 04:46:22 +0000 (0:00:03.620) 0:10:42.335 *******
2026-03-25 04:46:51.002670 | orchestrator | skipping: [testbed-node-1]
2026-03-25 04:46:51.002676 | orchestrator | skipping: [testbed-node-2]
2026-03-25 04:46:51.002682 | orchestrator | changed: [testbed-node-0]
2026-03-25 04:46:51.002688 | orchestrator |
2026-03-25 04:46:51.002694 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for master proxysql to start] *************
2026-03-25 04:46:51.002700 | orchestrator | Wednesday 25 March 2026 04:46:35 +0000 (0:00:13.075) 0:10:55.411 *******
2026-03-25 04:46:51.002706 | orchestrator | ok: [testbed-node-0]
2026-03-25 04:46:51.002712 | orchestrator |
2026-03-25 04:46:51.002719 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] *************
2026-03-25 04:46:51.002725 | orchestrator | Wednesday 25 March 2026 04:46:39 +0000 (0:00:03.677) 0:10:59.089 *******
2026-03-25 04:46:51.002731 | orchestrator | skipping: [testbed-node-1]
2026-03-25 04:46:51.002737 | orchestrator | skipping: [testbed-node-2]
2026-03-25 04:46:51.002743 | orchestrator | changed: [testbed-node-0]
2026-03-25 04:46:51.002749 | orchestrator |
2026-03-25 04:46:51.002755 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] *************
2026-03-25 04:46:51.002762 | orchestrator | Wednesday 25 March 2026 04:46:46 +0000 (0:00:06.977) 0:11:06.066 *******
2026-03-25 04:46:51.002768 | orchestrator | ok: [testbed-node-0]
2026-03-25 04:46:51.002774 | orchestrator | ok: [testbed-node-1]
2026-03-25 04:46:51.002780 | orchestrator | ok: [testbed-node-2]
2026-03-25 04:46:51.002786 | orchestrator |
2026-03-25 04:46:51.002792 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************
2026-03-25 04:46:51.002798 | orchestrator | Wednesday 25 March 2026 04:46:48 +0000 (0:00:01.986) 0:11:08.053 *******
2026-03-25 04:46:51.002804 | orchestrator | ok: [testbed-node-0]
2026-03-25 04:46:51.002810 | orchestrator | ok: [testbed-node-1]
2026-03-25 04:46:51.002816 | orchestrator | ok: [testbed-node-2]
2026-03-25 04:46:51.002822 | orchestrator |
2026-03-25 04:46:51.002833 | orchestrator | PLAY RECAP *********************************************************************
2026-03-25 04:46:51.002840 | orchestrator | testbed-node-0 : ok=129  changed=29  unreachable=0 failed=0 skipped=94  rescued=0 ignored=0
2026-03-25 04:46:51.002848 | orchestrator | testbed-node-1 : ok=128  changed=28  unreachable=0 failed=0 skipped=94  rescued=0 ignored=0
2026-03-25 04:46:51.002861 | orchestrator | testbed-node-2 : ok=128  changed=28  unreachable=0 failed=0 skipped=94  rescued=0 ignored=0
2026-03-25 04:46:51.922678 | orchestrator |
2026-03-25 04:46:51.922804 | orchestrator |
2026-03-25 04:46:51.922821 | orchestrator | TASKS RECAP ********************************************************************
2026-03-25 04:46:51.922834 | orchestrator | Wednesday 25 March 2026 04:46:50 +0000 (0:00:02.791) 0:11:10.844 *******
2026-03-25 04:46:51.922845 | orchestrator | ===============================================================================
2026-03-25 04:46:51.922855 | orchestrator | loadbalancer : Start backup proxysql container ------------------------- 13.53s
2026-03-25 04:46:51.922866 | orchestrator | loadbalancer : Start master proxysql container ------------------------- 13.08s
2026-03-25 04:46:51.922879 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 8.21s
2026-03-25 04:46:51.922929 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 7.78s
2026-03-25 04:46:51.922944 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 7.56s
2026-03-25 04:46:51.922956 | orchestrator | loadbalancer : Stop backup haproxy container ---------------------------- 7.54s
2026-03-25 04:46:51.922967 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 7.48s
2026-03-25 04:46:51.922978 | orchestrator | loadbalancer : Start master haproxy container --------------------------- 7.13s
2026-03-25 04:46:51.923072 | orchestrator | loadbalancer : Stop backup keepalived container ------------------------- 7.08s
2026-03-25 04:46:51.923084 | orchestrator | loadbalancer : Stop backup proxysql container --------------------------- 7.06s
2026-03-25 04:46:51.923095 | orchestrator | loadbalancer : Start master keepalived container ------------------------ 6.98s
2026-03-25 04:46:51.923105 | orchestrator | loadbalancer : Stop master keepalived container ------------------------- 6.97s
2026-03-25 04:46:51.923116 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 6.88s
2026-03-25 04:46:51.923126 | orchestrator | loadbalancer : Stop master proxysql container --------------------------- 6.87s
2026-03-25 04:46:51.923137 | orchestrator | loadbalancer : Stop master haproxy container ---------------------------- 6.80s
2026-03-25 04:46:51.923148 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 6.49s
2026-03-25 04:46:51.923158 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 6.08s
2026-03-25 04:46:51.923169 | orchestrator | haproxy-config : Copying over octavia haproxy config -------------------- 5.98s
2026-03-25 04:46:51.923179 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 5.81s
2026-03-25 04:46:51.923190 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 5.72s
2026-03-25 04:46:52.226340 | orchestrator | + osism apply -a upgrade opensearch
2026-03-25 04:46:54.377193 | orchestrator | 2026-03-25 04:46:54 | INFO  | Task aa282296-4422-43d8-bc28-f0cb3222c87a (opensearch) was prepared for execution.
2026-03-25 04:46:54.377335 | orchestrator | 2026-03-25 04:46:54 | INFO  | It takes a moment until task aa282296-4422-43d8-bc28-f0cb3222c87a (opensearch) has been started and output is visible here.
2026-03-25 04:47:14.390280 | orchestrator |
2026-03-25 04:47:14.390401 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-25 04:47:14.390419 | orchestrator |
2026-03-25 04:47:14.390431 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-25 04:47:14.390443 | orchestrator | Wednesday 25 March 2026 04:47:00 +0000 (0:00:01.782) 0:00:01.782 *******
2026-03-25 04:47:14.390482 | orchestrator | ok: [testbed-node-0]
2026-03-25 04:47:14.390495 | orchestrator | ok: [testbed-node-1]
2026-03-25 04:47:14.390507 | orchestrator | ok: [testbed-node-2]
2026-03-25 04:47:14.390517 | orchestrator |
2026-03-25 04:47:14.390529 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-25 04:47:14.390541 | orchestrator | Wednesday 25 March 2026 04:47:02 +0000 (0:00:01.716) 0:00:03.499 *******
2026-03-25 04:47:14.390553 | orchestrator | ok: [testbed-node-0]
=> (item=enable_opensearch_True)
2026-03-25 04:47:14.390564 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True)
2026-03-25 04:47:14.390575 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True)
2026-03-25 04:47:14.390586 | orchestrator |
2026-03-25 04:47:14.390597 | orchestrator | PLAY [Apply role opensearch] ***************************************************
2026-03-25 04:47:14.390608 | orchestrator |
2026-03-25 04:47:14.390619 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2026-03-25 04:47:14.390629 | orchestrator | Wednesday 25 March 2026 04:47:05 +0000 (0:00:03.445) 0:00:06.944 *******
2026-03-25 04:47:14.390641 | orchestrator | included: /ansible/roles/opensearch/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-25 04:47:14.390653 | orchestrator |
2026-03-25 04:47:14.390663 | orchestrator | TASK [opensearch : Setting sysctl values] **************************************
2026-03-25 04:47:14.390673 | orchestrator | Wednesday 25 March 2026 04:47:07 +0000 (0:00:02.166) 0:00:09.111 *******
2026-03-25 04:47:14.390684 | orchestrator | ok: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-03-25 04:47:14.390694 | orchestrator | ok: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-03-25 04:47:14.390705 | orchestrator | ok: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-03-25 04:47:14.390716 | orchestrator |
2026-03-25 04:47:14.390727 | orchestrator | TASK [opensearch : Ensuring config directories exist] **************************
2026-03-25 04:47:14.390811 | orchestrator | Wednesday 25 March 2026 04:47:10 +0000 (0:00:02.365) 0:00:11.476 *******
2026-03-25 04:47:14.390828 | orchestrator | ok: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image':
'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-03-25 04:47:14.390850 | orchestrator | ok: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-03-25 04:47:14.390902 | orchestrator | ok: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g 
-Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-03-25 04:47:14.390932 | orchestrator | ok: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-03-25 04:47:14.390970 | orchestrator | ok: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 
'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-03-25 04:47:14.390986 | orchestrator | ok: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET 
/api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-03-25 04:47:14.391005 | orchestrator |
2026-03-25 04:47:14.391018 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2026-03-25 04:47:14.391031 | orchestrator | Wednesday 25 March 2026 04:47:12 +0000 (0:00:02.396) 0:00:13.873 *******
2026-03-25 04:47:14.391050 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-25 04:47:14.391062 | orchestrator |
2026-03-25 04:47:14.391140 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] *****
2026-03-25 04:47:20.248702 | orchestrator | Wednesday 25 March 2026 04:47:14 +0000 (0:00:01.707) 0:00:15.581 *******
2026-03-25 04:47:20.248918 | orchestrator | ok: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-03-25 04:47:20.248952 | orchestrator | ok:
[testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-03-25 04:47:20.249018 | orchestrator | ok: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-03-25 04:47:20.249048 | orchestrator | ok: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 
'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-03-25 04:47:20.249105 | orchestrator | ok: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 
'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-03-25 04:47:20.249117 | orchestrator | ok: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-03-25 04:47:20.249126 | orchestrator | 2026-03-25 04:47:20.249135 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2026-03-25 04:47:20.249145 | orchestrator | Wednesday 25 March 2026 04:47:18 +0000 (0:00:03.851) 0:00:19.432 ******* 2026-03-25 04:47:20.249153 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-03-25 04:47:20.249178 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-03-25 04:47:22.223156 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 
'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-03-25 04:47:22.223262 | orchestrator | skipping: [testbed-node-0] 2026-03-25 04:47:22.223280 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-03-25 04:47:22.223291 | orchestrator | skipping: [testbed-node-1] 2026-03-25 04:47:22.223302 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-03-25 04:47:22.223370 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': 
{'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-03-25 04:47:22.223384 | orchestrator | skipping: [testbed-node-2] 2026-03-25 04:47:22.223394 | orchestrator | 2026-03-25 04:47:22.223405 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2026-03-25 04:47:22.223416 | orchestrator | Wednesday 25 March 2026 04:47:20 +0000 (0:00:02.013) 0:00:21.446 ******* 2026-03-25 04:47:22.223426 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-03-25 04:47:22.223437 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-03-25 04:47:22.223457 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-03-25 04:47:22.223468 | 
orchestrator | skipping: [testbed-node-0] 2026-03-25 04:47:22.223491 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-03-25 04:47:26.111657 | orchestrator | skipping: [testbed-node-1] 2026-03-25 04:47:26.111764 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-03-25 04:47:26.111785 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-03-25 04:47:26.111822 | orchestrator | skipping: [testbed-node-2] 2026-03-25 04:47:26.111834 | orchestrator | 2026-03-25 04:47:26.111845 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2026-03-25 04:47:26.111856 | orchestrator | Wednesday 25 March 2026 04:47:22 +0000 (0:00:01.970) 0:00:23.416 ******* 2026-03-25 04:47:26.111882 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 
'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-03-25 04:47:26.111911 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-03-25 04:47:26.111923 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 
'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-03-25 04:47:26.111934 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-03-25 04:47:26.111957 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-03-25 04:47:26.112059 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 
'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-03-25 04:47:40.332168 | orchestrator | 2026-03-25 04:47:40.332279 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2026-03-25 04:47:40.332296 | orchestrator | Wednesday 25 March 2026 04:47:26 +0000 (0:00:03.886) 0:00:27.303 ******* 2026-03-25 04:47:40.332309 | orchestrator | ok: [testbed-node-0] 2026-03-25 04:47:40.332321 | orchestrator | ok: [testbed-node-1] 2026-03-25 04:47:40.332332 | orchestrator | ok: [testbed-node-2] 2026-03-25 04:47:40.332342 | orchestrator | 2026-03-25 04:47:40.332354 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2026-03-25 04:47:40.332365 | orchestrator | Wednesday 25 March 2026 04:47:29 +0000 (0:00:03.535) 0:00:30.839 ******* 2026-03-25 04:47:40.332375 | orchestrator | ok: [testbed-node-0] 2026-03-25 04:47:40.332412 | orchestrator | ok: [testbed-node-1] 2026-03-25 04:47:40.332423 | orchestrator | ok: [testbed-node-2] 2026-03-25 04:47:40.332434 | orchestrator | 2026-03-25 04:47:40.332445 | orchestrator | TASK [service-check-containers : opensearch | Check containers] **************** 2026-03-25 04:47:40.332456 | orchestrator | Wednesday 25 March 2026 04:47:32 +0000 (0:00:03.295) 0:00:34.135 ******* 2026-03-25 04:47:40.332469 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': 
['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-03-25 04:47:40.332484 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-03-25 04:47:40.332511 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-03-25 04:47:40.332543 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-03-25 04:47:40.332567 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 
'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-03-25 04:47:40.332586 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-03-25 04:47:40.332598 | orchestrator | 2026-03-25 04:47:40.332610 | orchestrator | TASK [service-check-containers : opensearch | Notify handlers to restart containers] *** 2026-03-25 04:47:40.332621 | orchestrator | Wednesday 25 March 2026 04:47:36 +0000 (0:00:03.705) 0:00:37.840 ******* 2026-03-25 04:47:40.332632 | orchestrator | changed: [testbed-node-0] => { 2026-03-25 04:47:40.332644 | orchestrator |  "msg": "Notifying handlers" 2026-03-25 04:47:40.332655 | orchestrator | } 2026-03-25 04:47:40.332666 | orchestrator | changed: [testbed-node-1] => { 2026-03-25 04:47:40.332677 | orchestrator |  "msg": "Notifying handlers" 2026-03-25 04:47:40.332688 | orchestrator | } 2026-03-25 04:47:40.332700 | orchestrator | changed: [testbed-node-2] => { 2026-03-25 04:47:40.332712 | orchestrator |  "msg": "Notifying handlers" 2026-03-25 04:47:40.332725 | orchestrator | } 2026-03-25 04:47:40.332737 | orchestrator | 2026-03-25 04:47:40.332750 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-03-25 04:47:40.332762 | orchestrator | Wednesday 25 March 2026 04:47:38 +0000 (0:00:01.393) 0:00:39.234 ******* 2026-03-25 04:47:40.332783 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-03-25 04:50:50.005081 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-03-25 04:50:50.005197 | orchestrator | skipping: [testbed-node-0] 2026-03-25 04:50:50.005217 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-03-25 04:50:50.005249 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-03-25 04:50:50.005285 | orchestrator | skipping: [testbed-node-1] 2026-03-25 04:50:50.005315 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-03-25 04:50:50.005328 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-03-25 
04:50:50.005426 | orchestrator | skipping: [testbed-node-2]
2026-03-25 04:50:50.005439 | orchestrator |
2026-03-25 04:50:50.005451 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2026-03-25 04:50:50.005463 | orchestrator | Wednesday 25 March 2026 04:47:40 +0000 (0:00:02.290) 0:00:41.525 *******
2026-03-25 04:50:50.005474 | orchestrator | skipping: [testbed-node-0]
2026-03-25 04:50:50.005485 | orchestrator | skipping: [testbed-node-1]
2026-03-25 04:50:50.005495 | orchestrator | skipping: [testbed-node-2]
2026-03-25 04:50:50.005506 | orchestrator |
2026-03-25 04:50:50.005517 | orchestrator | TASK [opensearch : Flush handlers] *********************************************
2026-03-25 04:50:50.005527 | orchestrator | Wednesday 25 March 2026 04:47:41 +0000 (0:00:01.554) 0:00:43.079 *******
2026-03-25 04:50:50.005538 | orchestrator |
2026-03-25 04:50:50.005549 | orchestrator | TASK [opensearch : Flush handlers] *********************************************
2026-03-25 04:50:50.005560 | orchestrator | Wednesday 25 March 2026 04:47:42 +0000 (0:00:00.433) 0:00:43.514 *******
2026-03-25 04:50:50.005570 | orchestrator |
2026-03-25 04:50:50.005581 | orchestrator | TASK [opensearch : Flush handlers] *********************************************
2026-03-25 04:50:50.005592 | orchestrator | Wednesday 25 March 2026 04:47:42 +0000 (0:00:00.526) 0:00:44.040 *******
2026-03-25 04:50:50.005602 | orchestrator |
2026-03-25 04:50:50.005614 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************
2026-03-25 04:50:50.005633 | orchestrator | Wednesday 25 March 2026 04:47:43 +0000 (0:00:00.841) 0:00:44.881 *******
2026-03-25 04:50:50.005645 | orchestrator | ok: [testbed-node-0]
2026-03-25 04:50:50.005658 | orchestrator |
2026-03-25 04:50:50.005671 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] *********************************
2026-03-25 04:50:50.005683 | orchestrator | Wednesday 25 March 2026 04:47:47 +0000 (0:00:03.649) 0:00:48.530 *******
2026-03-25 04:50:50.005695 | orchestrator | ok: [testbed-node-0]
2026-03-25 04:50:50.005721 | orchestrator |
2026-03-25 04:50:50.005734 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ********************
2026-03-25 04:50:50.005746 | orchestrator | Wednesday 25 March 2026 04:47:51 +0000 (0:00:04.555) 0:00:53.086 *******
2026-03-25 04:50:50.005759 | orchestrator | changed: [testbed-node-2]
2026-03-25 04:50:50.005771 | orchestrator | changed: [testbed-node-0]
2026-03-25 04:50:50.005782 | orchestrator | changed: [testbed-node-1]
2026-03-25 04:50:50.005793 | orchestrator |
2026-03-25 04:50:50.005804 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] *********
2026-03-25 04:50:50.005814 | orchestrator | Wednesday 25 March 2026 04:49:10 +0000 (0:01:19.052) 0:02:12.138 *******
2026-03-25 04:50:50.005825 | orchestrator | changed: [testbed-node-2]
2026-03-25 04:50:50.005836 | orchestrator | changed: [testbed-node-0]
2026-03-25 04:50:50.005846 | orchestrator | changed: [testbed-node-1]
2026-03-25 04:50:50.005857 | orchestrator |
2026-03-25 04:50:50.005868 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2026-03-25 04:50:50.005878 | orchestrator | Wednesday 25 March 2026 04:50:40 +0000 (0:01:29.323) 0:03:41.461 *******
2026-03-25 04:50:50.005890 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-25 04:50:50.005902 | orchestrator |
2026-03-25 04:50:50.005923 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************
2026-03-25 04:50:50.005944 | orchestrator | Wednesday 25 March 2026 04:50:42 +0000 (0:00:01.785) 0:03:43.246 *******
2026-03-25 04:50:50.005963 | orchestrator | ok: [testbed-node-0]
2026-03-25 04:50:50.005984 | orchestrator |
2026-03-25 04:50:50.006005 | orchestrator | TASK
[opensearch : Check if a log retention policy exists] *********************
2026-03-25 04:50:50.006086 | orchestrator | Wednesday 25 March 2026 04:50:45 +0000 (0:00:03.342) 0:03:46.589 *******
2026-03-25 04:50:50.006099 | orchestrator | ok: [testbed-node-0]
2026-03-25 04:50:50.006109 | orchestrator |
2026-03-25 04:50:50.006120 | orchestrator | TASK [opensearch : Create new log retention policy] ****************************
2026-03-25 04:50:50.006130 | orchestrator | Wednesday 25 March 2026 04:50:48 +0000 (0:00:03.345) 0:03:49.935 *******
2026-03-25 04:50:50.006151 | orchestrator | skipping: [testbed-node-0]
2026-03-25 04:50:50.006162 | orchestrator |
2026-03-25 04:50:50.006172 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] *****************
2026-03-25 04:50:50.006194 | orchestrator | Wednesday 25 March 2026 04:50:49 +0000 (0:00:01.259) 0:03:51.194 *******
2026-03-25 04:50:52.765441 | orchestrator | skipping: [testbed-node-0]
2026-03-25 04:50:52.765545 | orchestrator |
2026-03-25 04:50:52.765562 | orchestrator | PLAY RECAP *********************************************************************
2026-03-25 04:50:52.765576 | orchestrator | testbed-node-0 : ok=19  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-25 04:50:52.765589 | orchestrator | testbed-node-1 : ok=15  changed=5  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-03-25 04:50:52.765600 | orchestrator | testbed-node-2 : ok=15  changed=5  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-03-25 04:50:52.765611 | orchestrator |
2026-03-25 04:50:52.765622 | orchestrator |
2026-03-25 04:50:52.765633 | orchestrator | TASKS RECAP ********************************************************************
2026-03-25 04:50:52.765644 | orchestrator | Wednesday 25 March 2026 04:50:52 +0000 (0:00:02.358) 0:03:53.553 *******
2026-03-25 04:50:52.765655 | orchestrator | ===============================================================================
2026-03-25 04:50:52.765665 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 89.32s
2026-03-25 04:50:52.765748 | orchestrator | opensearch : Restart opensearch container ------------------------------ 79.05s
2026-03-25 04:50:52.765764 | orchestrator | opensearch : Perform a flush -------------------------------------------- 4.56s
2026-03-25 04:50:52.765775 | orchestrator | opensearch : Copying over config.json files for services ---------------- 3.89s
2026-03-25 04:50:52.765818 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 3.85s
2026-03-25 04:50:52.765830 | orchestrator | service-check-containers : opensearch | Check containers ---------------- 3.71s
2026-03-25 04:50:52.765840 | orchestrator | opensearch : Disable shard allocation ----------------------------------- 3.65s
2026-03-25 04:50:52.765851 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 3.54s
2026-03-25 04:50:52.765862 | orchestrator | Group hosts based on enabled services ----------------------------------- 3.45s
2026-03-25 04:50:52.765873 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 3.35s
2026-03-25 04:50:52.765884 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 3.34s
2026-03-25 04:50:52.765895 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 3.30s
2026-03-25 04:50:52.765906 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 2.40s
2026-03-25 04:50:52.765917 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 2.37s
2026-03-25 04:50:52.765930 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.36s
2026-03-25 04:50:52.765942 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.29s
2026-03-25 04:50:52.765954 | orchestrator | opensearch : include_tasks ---------------------------------------------- 2.17s
2026-03-25 04:50:52.765981 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 2.01s
2026-03-25 04:50:52.765995 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 1.97s
2026-03-25 04:50:52.766007 | orchestrator | opensearch : Flush handlers --------------------------------------------- 1.80s
2026-03-25 04:50:53.121172 | orchestrator | + osism apply -a upgrade memcached
2026-03-25 04:50:55.290919 | orchestrator | 2026-03-25 04:50:55 | INFO  | Task 51c23669-4ece-4360-94ee-876c7a73c861 (memcached) was prepared for execution.
2026-03-25 04:50:55.291173 | orchestrator | 2026-03-25 04:50:55 | INFO  | It takes a moment until task 51c23669-4ece-4360-94ee-876c7a73c861 (memcached) has been started and output is visible here.
2026-03-25 04:51:30.238774 | orchestrator |
2026-03-25 04:51:30.238876 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-25 04:51:30.238892 | orchestrator |
2026-03-25 04:51:30.238900 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-25 04:51:30.238908 | orchestrator | Wednesday 25 March 2026 04:51:01 +0000 (0:00:01.909) 0:00:01.909 *******
2026-03-25 04:51:30.238916 | orchestrator | ok: [testbed-node-0]
2026-03-25 04:51:30.238926 | orchestrator | ok: [testbed-node-1]
2026-03-25 04:51:30.238934 | orchestrator | ok: [testbed-node-2]
2026-03-25 04:51:30.238943 | orchestrator |
2026-03-25 04:51:30.238948 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-25 04:51:30.238953 | orchestrator | Wednesday 25 March 2026 04:51:03 +0000 (0:00:02.264) 0:00:04.173 *******
2026-03-25 04:51:30.238959 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True)
2026-03-25 04:51:30.238964 |
orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True)
2026-03-25 04:51:30.238969 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True)
2026-03-25 04:51:30.238974 | orchestrator |
2026-03-25 04:51:30.238979 | orchestrator | PLAY [Apply role memcached] ****************************************************
2026-03-25 04:51:30.238983 | orchestrator |
2026-03-25 04:51:30.238988 | orchestrator | TASK [memcached : include_tasks] ***********************************************
2026-03-25 04:51:30.238992 | orchestrator | Wednesday 25 March 2026 04:51:06 +0000 (0:00:02.540) 0:00:06.714 *******
2026-03-25 04:51:30.238998 | orchestrator | included: /ansible/roles/memcached/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-25 04:51:30.239003 | orchestrator |
2026-03-25 04:51:30.239008 | orchestrator | TASK [memcached : Ensuring config directories exist] ***************************
2026-03-25 04:51:30.239013 | orchestrator | Wednesday 25 March 2026 04:51:08 +0000 (0:00:02.401) 0:00:09.115 *******
2026-03-25 04:51:30.239035 | orchestrator | ok: [testbed-node-1] => (item=memcached)
2026-03-25 04:51:30.239040 | orchestrator | ok: [testbed-node-0] => (item=memcached)
2026-03-25 04:51:30.239045 | orchestrator | ok: [testbed-node-2] => (item=memcached)
2026-03-25 04:51:30.239053 | orchestrator |
2026-03-25 04:51:30.239060 | orchestrator | TASK [memcached : Copying over config.json files for services] *****************
2026-03-25 04:51:30.239066 | orchestrator | Wednesday 25 March 2026 04:51:10 +0000 (0:00:01.833) 0:00:10.948 *******
2026-03-25 04:51:30.239073 | orchestrator | ok: [testbed-node-0] => (item=memcached)
2026-03-25 04:51:30.239080 | orchestrator | ok: [testbed-node-1] => (item=memcached)
2026-03-25 04:51:30.239088 | orchestrator | ok: [testbed-node-2] => (item=memcached)
2026-03-25 04:51:30.239096 | orchestrator |
2026-03-25 04:51:30.239102 | orchestrator | TASK [service-check-containers : memcached | Check
containers] ***************** 2026-03-25 04:51:30.239107 | orchestrator | Wednesday 25 March 2026 04:51:13 +0000 (0:00:02.777) 0:00:13.725 ******* 2026-03-25 04:51:30.239114 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-03-25 04:51:30.239122 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-03-25 04:51:30.239152 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 
'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-03-25 04:51:30.239158 | orchestrator |
2026-03-25 04:51:30.239163 | orchestrator | TASK [service-check-containers : memcached | Notify handlers to restart containers] ***
2026-03-25 04:51:30.239167 | orchestrator | Wednesday 25 March 2026 04:51:15 +0000 (0:00:02.332) 0:00:16.058 *******
2026-03-25 04:51:30.239172 | orchestrator | changed: [testbed-node-0] => {
2026-03-25 04:51:30.239177 | orchestrator |  "msg": "Notifying handlers"
2026-03-25 04:51:30.239182 | orchestrator | }
2026-03-25 04:51:30.239186 | orchestrator | changed: [testbed-node-1] => {
2026-03-25 04:51:30.239191 | orchestrator |  "msg": "Notifying handlers"
2026-03-25 04:51:30.239195 | orchestrator | }
2026-03-25 04:51:30.239200 | orchestrator | changed: [testbed-node-2] => {
2026-03-25 04:51:30.239204 | orchestrator |  "msg": "Notifying handlers"
2026-03-25 04:51:30.239213 | orchestrator | }
2026-03-25 04:51:30.239218 | orchestrator |
2026-03-25 04:51:30.239222 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-03-25 04:51:30.239227 | orchestrator | Wednesday 25 March 2026 04:51:17 +0000 (0:00:01.391) 0:00:17.450 *******
2026-03-25 04:51:30.239232 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes':
['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-03-25 04:51:30.239237 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-03-25 04:51:30.239242 | orchestrator | skipping: [testbed-node-0] 2026-03-25 04:51:30.239246 | orchestrator | skipping: [testbed-node-1] 2026-03-25 04:51:30.239251 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-03-25 04:51:30.239255 | orchestrator | skipping: [testbed-node-2]
2026-03-25 04:51:30.239260 | orchestrator |
2026-03-25 04:51:30.239265 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] **********************
2026-03-25 04:51:30.239269 | orchestrator | Wednesday 25 March 2026 04:51:19 +0000 (0:00:02.062) 0:00:19.513 *******
2026-03-25 04:51:30.239274 | orchestrator | changed: [testbed-node-2]
2026-03-25 04:51:30.239278 | orchestrator | changed: [testbed-node-0]
2026-03-25 04:51:30.239282 | orchestrator | changed: [testbed-node-1]
2026-03-25 04:51:30.239287 | orchestrator |
2026-03-25 04:51:30.239291 | orchestrator | PLAY RECAP *********************************************************************
2026-03-25 04:51:30.239300 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-25 04:51:30.239306 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-25 04:51:30.239311 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-25 04:51:30.239320 | orchestrator |
2026-03-25 04:51:30.239326 | orchestrator |
2026-03-25 04:51:30.239331 | orchestrator | TASKS RECAP ********************************************************************
2026-03-25 04:51:30.239340 | orchestrator | Wednesday 25 March 2026 04:51:30 +0000 (0:00:11.069) 0:00:30.582 *******
2026-03-25 04:51:30.580020 | orchestrator | ===============================================================================
2026-03-25 04:51:30.580119 | orchestrator | memcached : Restart memcached container -------------------------------- 11.07s
2026-03-25 04:51:30.580135 | orchestrator | memcached : Copying over config.json files for services ----------------- 2.78s
2026-03-25 04:51:30.580147 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.54s
2026-03-25 04:51:30.580159 | orchestrator | memcached : include_tasks ----------------------------------------------- 2.40s
2026-03-25 04:51:30.580170 | orchestrator | service-check-containers : memcached | Check containers ----------------- 2.33s
2026-03-25 04:51:30.580182 | orchestrator | Group hosts based on Kolla action --------------------------------------- 2.26s
2026-03-25 04:51:30.580193 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.06s
2026-03-25 04:51:30.580205 | orchestrator | memcached : Ensuring config directories exist --------------------------- 1.83s
2026-03-25 04:51:30.580216 | orchestrator | service-check-containers : memcached | Notify handlers to restart containers --- 1.39s
2026-03-25 04:51:30.923772 | orchestrator | + osism apply -a upgrade redis
2026-03-25 04:51:33.145228 | orchestrator | 2026-03-25 04:51:33 | INFO  | Task f24c69a8-3f03-478c-8423-e11fa7c3082c (redis) was prepared for execution.
2026-03-25 04:51:33.145332 | orchestrator | 2026-03-25 04:51:33 | INFO  | It takes a moment until task f24c69a8-3f03-478c-8423-e11fa7c3082c (redis) has been started and output is visible here.
2026-03-25 04:51:51.105928 | orchestrator |
2026-03-25 04:51:51.106083 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-25 04:51:51.106100 | orchestrator |
2026-03-25 04:51:51.106110 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-25 04:51:51.106121 | orchestrator | Wednesday 25 March 2026 04:51:38 +0000 (0:00:01.413) 0:00:01.414 *******
2026-03-25 04:51:51.106131 | orchestrator | ok: [testbed-node-0]
2026-03-25 04:51:51.106141 | orchestrator | ok: [testbed-node-1]
2026-03-25 04:51:51.106151 | orchestrator | ok: [testbed-node-2]
2026-03-25 04:51:51.106161 | orchestrator |
2026-03-25 04:51:51.106170 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-25 04:51:51.106180 | orchestrator | Wednesday 25 March 2026 04:51:40 +0000 (0:00:02.113) 0:00:03.527 *******
2026-03-25 04:51:51.106189 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True)
2026-03-25 04:51:51.106199 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True)
2026-03-25 04:51:51.106208 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True)
2026-03-25 04:51:51.106218 | orchestrator |
2026-03-25 04:51:51.106227 | orchestrator | PLAY [Apply role redis] ********************************************************
2026-03-25 04:51:51.106238 | orchestrator |
2026-03-25 04:51:51.106248 | orchestrator | TASK [redis : include_tasks] ***************************************************
2026-03-25 04:51:51.106258 | orchestrator | Wednesday 25 March 2026 04:51:42 +0000 (0:00:01.717) 0:00:05.245 *******
2026-03-25 04:51:51.106267 | orchestrator | included: /ansible/roles/redis/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-25 04:51:51.106277 | orchestrator |
2026-03-25 04:51:51.106287 | orchestrator | TASK [redis : Ensuring config directories exist] ******************************* 2026-03-25
04:51:51.106297 | orchestrator | Wednesday 25 March 2026 04:51:45 +0000 (0:00:02.760) 0:00:08.005 ******* 2026-03-25 04:51:51.106309 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-25 04:51:51.106361 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-25 04:51:51.106373 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-25 04:51:51.106384 | orchestrator | ok: [testbed-node-0] => (item={'key': 
'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-25 04:51:51.106414 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-25 04:51:51.106425 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-25 04:51:51.106435 | orchestrator | 2026-03-25 04:51:51.106482 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2026-03-25 04:51:51.106499 | orchestrator | Wednesday 25 March 2026 04:51:47 +0000 (0:00:02.442) 0:00:10.448 ******* 2026-03-25 04:51:51.106519 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-25 04:51:51.106547 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-25 04:51:51.106559 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-25 04:51:51.106571 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-25 04:51:51.106591 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-25 04:51:58.352693 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': 
'/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-25 04:51:58.352823 | orchestrator | 2026-03-25 04:51:58.352848 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2026-03-25 04:51:58.352867 | orchestrator | Wednesday 25 March 2026 04:51:51 +0000 (0:00:03.173) 0:00:13.621 ******* 2026-03-25 04:51:58.352938 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-25 04:51:58.352991 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 
2026-03-25 04:51:58.353016 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-25 04:51:58.353034 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-25 04:51:58.353051 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-25 04:51:58.353092 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-25 04:51:58.353111 | orchestrator | 2026-03-25 04:51:58.353128 | orchestrator | TASK [service-check-containers : redis | Check containers] ********************* 2026-03-25 04:51:58.353144 | orchestrator | Wednesday 25 March 2026 04:51:55 +0000 (0:00:03.936) 0:00:17.558 ******* 2026-03-25 04:51:58.353172 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-25 04:51:58.353190 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': 
['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-25 04:51:58.353215 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-25 04:51:58.353234 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-25 04:51:58.353259 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': 
'/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-25 04:51:58.353291 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-25 04:52:26.557676 | orchestrator | 2026-03-25 04:52:26.557791 | orchestrator | TASK [service-check-containers : redis | Notify handlers to restart containers] *** 2026-03-25 04:52:26.557808 | orchestrator | Wednesday 25 March 2026 04:51:58 +0000 (0:00:03.309) 0:00:20.867 ******* 2026-03-25 04:52:26.557822 | orchestrator | changed: [testbed-node-0] => { 2026-03-25 04:52:26.557834 | orchestrator |  "msg": "Notifying handlers" 2026-03-25 04:52:26.557845 | orchestrator | } 2026-03-25 04:52:26.557857 | orchestrator | changed: [testbed-node-1] => { 2026-03-25 04:52:26.557867 | orchestrator |  "msg": "Notifying handlers" 2026-03-25 04:52:26.557878 | orchestrator | } 2026-03-25 04:52:26.557889 | orchestrator | changed: 
[testbed-node-2] => { 2026-03-25 04:52:26.557899 | orchestrator |  "msg": "Notifying handlers" 2026-03-25 04:52:26.557910 | orchestrator | } 2026-03-25 04:52:26.557921 | orchestrator | 2026-03-25 04:52:26.557933 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-03-25 04:52:26.557944 | orchestrator | Wednesday 25 March 2026 04:51:59 +0000 (0:00:01.623) 0:00:22.491 ******* 2026-03-25 04:52:26.557957 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})  2026-03-25 04:52:26.557987 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})  2026-03-25 04:52:26.558000 | orchestrator | skipping: [testbed-node-0] 2026-03-25 04:52:26.558012 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 
'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})  2026-03-25 04:52:26.558088 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})  2026-03-25 04:52:26.558101 | orchestrator | skipping: [testbed-node-1] 2026-03-25 04:52:26.558113 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})  2026-03-25 04:52:26.558179 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})  2026-03-25 04:52:26.558194 | orchestrator | skipping: [testbed-node-2] 2026-03-25 04:52:26.558207 | orchestrator | 2026-03-25 04:52:26.558220 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-03-25 04:52:26.558233 | orchestrator | Wednesday 25 March 2026 04:52:01 +0000 (0:00:01.966) 0:00:24.457 ******* 2026-03-25 04:52:26.558245 | orchestrator | 2026-03-25 04:52:26.558257 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-03-25 04:52:26.558269 | orchestrator | Wednesday 25 March 2026 04:52:02 +0000 (0:00:00.463) 0:00:24.921 ******* 2026-03-25 04:52:26.558281 | orchestrator | 2026-03-25 04:52:26.558293 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-03-25 04:52:26.558305 | orchestrator | Wednesday 25 March 2026 04:52:02 +0000 (0:00:00.454) 0:00:25.375 ******* 2026-03-25 04:52:26.558318 | orchestrator | 2026-03-25 04:52:26.558330 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ****************************** 2026-03-25 04:52:26.558342 | orchestrator | Wednesday 25 March 2026 04:52:03 +0000 (0:00:00.815) 0:00:26.191 ******* 2026-03-25 04:52:26.558354 | orchestrator | changed: [testbed-node-0] 2026-03-25 04:52:26.558367 | orchestrator | 
changed: [testbed-node-1] 2026-03-25 04:52:26.558379 | orchestrator | changed: [testbed-node-2] 2026-03-25 04:52:26.558391 | orchestrator | 2026-03-25 04:52:26.558404 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] ********************* 2026-03-25 04:52:26.558416 | orchestrator | Wednesday 25 March 2026 04:52:14 +0000 (0:00:11.298) 0:00:37.490 ******* 2026-03-25 04:52:26.558428 | orchestrator | changed: [testbed-node-0] 2026-03-25 04:52:26.558441 | orchestrator | changed: [testbed-node-1] 2026-03-25 04:52:26.558453 | orchestrator | changed: [testbed-node-2] 2026-03-25 04:52:26.558466 | orchestrator | 2026-03-25 04:52:26.558485 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-25 04:52:26.558526 | orchestrator | testbed-node-0 : ok=10  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-25 04:52:26.558540 | orchestrator | testbed-node-1 : ok=10  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-25 04:52:26.558551 | orchestrator | testbed-node-2 : ok=10  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-25 04:52:26.558562 | orchestrator | 2026-03-25 04:52:26.558572 | orchestrator | 2026-03-25 04:52:26.558583 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-25 04:52:26.558594 | orchestrator | Wednesday 25 March 2026 04:52:26 +0000 (0:00:11.155) 0:00:48.645 ******* 2026-03-25 04:52:26.558604 | orchestrator | =============================================================================== 2026-03-25 04:52:26.558615 | orchestrator | redis : Restart redis container ---------------------------------------- 11.30s 2026-03-25 04:52:26.558634 | orchestrator | redis : Restart redis-sentinel container ------------------------------- 11.16s 2026-03-25 04:52:26.558644 | orchestrator | redis : Copying over redis config files --------------------------------- 3.94s 2026-03-25 
04:52:26.558655 | orchestrator | service-check-containers : redis | Check containers --------------------- 3.31s 2026-03-25 04:52:26.558666 | orchestrator | redis : Copying over default config.json files -------------------------- 3.17s 2026-03-25 04:52:26.558676 | orchestrator | redis : include_tasks --------------------------------------------------- 2.76s 2026-03-25 04:52:26.558687 | orchestrator | redis : Ensuring config directories exist ------------------------------- 2.44s 2026-03-25 04:52:26.558698 | orchestrator | Group hosts based on Kolla action --------------------------------------- 2.11s 2026-03-25 04:52:26.558708 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.97s 2026-03-25 04:52:26.558719 | orchestrator | redis : Flush handlers -------------------------------------------------- 1.73s 2026-03-25 04:52:26.558729 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.72s 2026-03-25 04:52:26.558740 | orchestrator | service-check-containers : redis | Notify handlers to restart containers --- 1.62s 2026-03-25 04:52:26.886009 | orchestrator | + osism apply -a upgrade mariadb 2026-03-25 04:52:29.035855 | orchestrator | 2026-03-25 04:52:29 | INFO  | Task d10e07a3-fd63-4eb9-a9b9-c1ed04b43c34 (mariadb) was prepared for execution. 2026-03-25 04:52:29.035941 | orchestrator | 2026-03-25 04:52:29 | INFO  | It takes a moment until task d10e07a3-fd63-4eb9-a9b9-c1ed04b43c34 (mariadb) has been started and output is visible here. 
2026-03-25 04:52:55.563961 | orchestrator | 2026-03-25 04:52:55.564106 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-25 04:52:55.564123 | orchestrator | 2026-03-25 04:52:55.564134 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-25 04:52:55.564144 | orchestrator | Wednesday 25 March 2026 04:52:34 +0000 (0:00:01.300) 0:00:01.300 ******* 2026-03-25 04:52:55.564154 | orchestrator | ok: [testbed-node-0] 2026-03-25 04:52:55.564165 | orchestrator | ok: [testbed-node-1] 2026-03-25 04:52:55.564175 | orchestrator | ok: [testbed-node-2] 2026-03-25 04:52:55.564185 | orchestrator | 2026-03-25 04:52:55.564195 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-25 04:52:55.564205 | orchestrator | Wednesday 25 March 2026 04:52:36 +0000 (0:00:02.101) 0:00:03.402 ******* 2026-03-25 04:52:55.564215 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2026-03-25 04:52:55.564225 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2026-03-25 04:52:55.564235 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2026-03-25 04:52:55.564244 | orchestrator | 2026-03-25 04:52:55.564254 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2026-03-25 04:52:55.564263 | orchestrator | 2026-03-25 04:52:55.564273 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2026-03-25 04:52:55.564283 | orchestrator | Wednesday 25 March 2026 04:52:38 +0000 (0:00:01.910) 0:00:05.313 ******* 2026-03-25 04:52:55.564293 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-25 04:52:55.564302 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-03-25 04:52:55.564312 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-03-25 04:52:55.564321 | orchestrator | 
2026-03-25 04:52:55.564331 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-03-25 04:52:55.564340 | orchestrator | Wednesday 25 March 2026 04:52:40 +0000 (0:00:01.603) 0:00:06.917 ******* 2026-03-25 04:52:55.564358 | orchestrator | included: /ansible/roles/mariadb/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-25 04:52:55.564376 | orchestrator | 2026-03-25 04:52:55.564393 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2026-03-25 04:52:55.564409 | orchestrator | Wednesday 25 March 2026 04:52:43 +0000 (0:00:02.908) 0:00:09.826 ******* 2026-03-25 04:52:55.564459 | orchestrator | ok: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 
backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-25 04:52:55.564569 | orchestrator | ok: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 
'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-25 04:52:55.564595 | orchestrator | ok: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 
3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-25 04:52:55.564616 | orchestrator | 2026-03-25 04:52:55.564628 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2026-03-25 04:52:55.564639 | orchestrator | Wednesday 25 March 2026 04:52:47 +0000 (0:00:04.023) 0:00:13.850 ******* 2026-03-25 04:52:55.564650 | orchestrator | skipping: [testbed-node-1] 2026-03-25 04:52:55.564662 | orchestrator | skipping: [testbed-node-2] 2026-03-25 04:52:55.564673 | orchestrator | ok: [testbed-node-0] 2026-03-25 04:52:55.564684 | orchestrator | 2026-03-25 04:52:55.564695 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2026-03-25 04:52:55.564706 | orchestrator | Wednesday 25 March 2026 04:52:48 +0000 (0:00:01.597) 0:00:15.447 ******* 2026-03-25 04:52:55.564717 | orchestrator | skipping: [testbed-node-1] 2026-03-25 04:52:55.564743 | orchestrator | skipping: [testbed-node-2] 2026-03-25 04:52:55.564754 | orchestrator | ok: [testbed-node-0] 2026-03-25 04:52:55.564775 | orchestrator | 2026-03-25 04:52:55.564786 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2026-03-25 04:52:55.564796 | orchestrator | Wednesday 25 March 2026 04:52:51 +0000 (0:00:02.262) 0:00:17.709 ******* 2026-03-25 04:52:55.564817 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', 
'/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-25 04:53:08.402614 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-25 04:53:08.402743 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 
'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-25 04:53:08.402786 | orchestrator | 2026-03-25 04:53:08.402801 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2026-03-25 04:53:08.402814 | orchestrator | Wednesday 25 March 2026 04:52:55 +0000 (0:00:04.454) 0:00:22.164 ******* 2026-03-25 04:53:08.402825 | orchestrator | skipping: [testbed-node-1] 2026-03-25 04:53:08.402837 | orchestrator | skipping: [testbed-node-2] 2026-03-25 04:53:08.402847 | orchestrator | ok: [testbed-node-0] 2026-03-25 04:53:08.402859 | orchestrator | 2026-03-25 04:53:08.402871 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2026-03-25 04:53:08.402900 | orchestrator | Wednesday 25 March 2026 04:52:57 +0000 (0:00:02.213) 0:00:24.378 
******* 2026-03-25 04:53:08.402912 | orchestrator | ok: [testbed-node-0] 2026-03-25 04:53:08.402923 | orchestrator | ok: [testbed-node-1] 2026-03-25 04:53:08.402934 | orchestrator | ok: [testbed-node-2] 2026-03-25 04:53:08.402944 | orchestrator | 2026-03-25 04:53:08.402955 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-03-25 04:53:08.402966 | orchestrator | Wednesday 25 March 2026 04:53:03 +0000 (0:00:05.248) 0:00:29.627 ******* 2026-03-25 04:53:08.402978 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-25 04:53:08.402989 | orchestrator | 2026-03-25 04:53:08.403000 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-03-25 04:53:08.403011 | orchestrator | Wednesday 25 March 2026 04:53:04 +0000 (0:00:01.909) 0:00:31.536 ******* 2026-03-25 04:53:08.403031 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server 
testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-25 04:53:08.403044 | orchestrator | skipping: [testbed-node-0] 2026-03-25 04:53:08.403064 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 
inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-25 04:53:16.180045 | orchestrator | skipping: [testbed-node-2] 2026-03-25 04:53:16.180176 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 
192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-25 04:53:16.180197 | orchestrator | skipping: [testbed-node-1] 2026-03-25 04:53:16.180210 | orchestrator | 2026-03-25 04:53:16.180222 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-03-25 04:53:16.180233 | orchestrator | Wednesday 25 March 2026 04:53:08 +0000 (0:00:03.464) 0:00:35.001 ******* 2026-03-25 04:53:16.180246 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option 
clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-25 04:53:16.180282 | orchestrator | skipping: [testbed-node-0] 2026-03-25 04:53:16.180318 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 
'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-25 04:53:16.180333 | orchestrator | skipping: [testbed-node-1] 2026-03-25 04:53:16.180344 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout 
server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-25 04:53:16.180365 | orchestrator | skipping: [testbed-node-2] 2026-03-25 04:53:16.180375 | orchestrator | 2026-03-25 04:53:16.180386 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-03-25 04:53:16.180397 | orchestrator | Wednesday 25 March 2026 04:53:11 +0000 (0:00:03.556) 0:00:38.558 ******* 2026-03-25 04:53:16.180423 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 
'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-25 04:53:20.583875 | orchestrator | skipping: [testbed-node-0] 2026-03-25 04:53:20.583972 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': 
{'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-25 04:53:20.584011 | orchestrator | skipping: [testbed-node-1] 2026-03-25 04:53:20.584033 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 
'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-25 04:53:20.584042 | orchestrator | skipping: [testbed-node-2] 2026-03-25 04:53:20.584051 | orchestrator | 2026-03-25 04:53:20.584059 | orchestrator | TASK [service-check-containers : mariadb | Check containers] ******************* 2026-03-25 04:53:20.584069 | orchestrator | Wednesday 25 March 2026 04:53:16 +0000 (0:00:04.220) 0:00:42.778 ******* 2026-03-25 04:53:20.584093 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 
'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-25 04:53:20.584114 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': 
'192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-25 04:53:20.584132 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 
'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-25 04:53:36.205968 | orchestrator | 2026-03-25 04:53:36.206122 | orchestrator | TASK [service-check-containers : mariadb | Notify handlers to restart containers] *** 2026-03-25 04:53:36.206139 | orchestrator | Wednesday 25 March 2026 04:53:20 +0000 (0:00:04.409) 0:00:47.188 ******* 2026-03-25 04:53:36.206149 | orchestrator | changed: [testbed-node-0] => { 2026-03-25 04:53:36.206160 | orchestrator |  "msg": "Notifying handlers" 2026-03-25 04:53:36.206168 | orchestrator | } 2026-03-25 04:53:36.206177 | orchestrator | changed: [testbed-node-1] => { 2026-03-25 04:53:36.206185 | orchestrator |  "msg": "Notifying handlers" 2026-03-25 04:53:36.206194 | orchestrator | } 2026-03-25 04:53:36.206202 | orchestrator | changed: [testbed-node-2] => { 2026-03-25 04:53:36.206210 | orchestrator |  "msg": "Notifying handlers" 2026-03-25 04:53:36.206218 | orchestrator | } 2026-03-25 04:53:36.206227 | orchestrator | 2026-03-25 04:53:36.206235 | orchestrator | TASK [service-check-containers : Include 
tasks] ******************************** 2026-03-25 04:53:36.206243 | orchestrator | Wednesday 25 March 2026 04:53:22 +0000 (0:00:01.432) 0:00:48.621 ******* 2026-03-25 04:53:36.206271 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 
rise 2 fall 5 backup', '']}}}})  2026-03-25 04:53:36.206284 | orchestrator | skipping: [testbed-node-0] 2026-03-25 04:53:36.206312 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-25 
04:53:36.206346 | orchestrator | skipping: [testbed-node-1] 2026-03-25 04:53:36.206359 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-25 04:53:36.206369 | orchestrator | skipping: 
[testbed-node-2]
2026-03-25 04:53:36.206377 | orchestrator |
2026-03-25 04:53:36.206385 | orchestrator | TASK [mariadb : Checking for mariadb cluster] **********************************
2026-03-25 04:53:36.206392 | orchestrator | Wednesday 25 March 2026 04:53:26 +0000 (0:00:04.088) 0:00:52.709 *******
2026-03-25 04:53:36.206401 | orchestrator | skipping: [testbed-node-0]
2026-03-25 04:53:36.206409 | orchestrator | skipping: [testbed-node-1]
2026-03-25 04:53:36.206417 | orchestrator | skipping: [testbed-node-2]
2026-03-25 04:53:36.206424 | orchestrator |
2026-03-25 04:53:36.206432 | orchestrator | TASK [mariadb : Cleaning up temp file on localhost] ****************************
2026-03-25 04:53:36.206446 | orchestrator | Wednesday 25 March 2026 04:53:27 +0000 (0:00:01.433) 0:00:54.142 *******
2026-03-25 04:53:36.206454 | orchestrator | skipping: [testbed-node-0]
2026-03-25 04:53:36.206462 | orchestrator |
2026-03-25 04:53:36.206470 | orchestrator | TASK [mariadb : Stop MariaDB containers] ***************************************
2026-03-25 04:53:36.206478 | orchestrator | Wednesday 25 March 2026 04:53:28 +0000 (0:00:01.140) 0:00:55.283 *******
2026-03-25 04:53:36.206485 | orchestrator | skipping: [testbed-node-0]
2026-03-25 04:53:36.206494 | orchestrator | skipping: [testbed-node-1]
2026-03-25 04:53:36.206503 | orchestrator | skipping: [testbed-node-2]
2026-03-25 04:53:36.206512 | orchestrator |
2026-03-25 04:53:36.206521 | orchestrator | TASK [mariadb : Run MariaDB wsrep recovery] ************************************
2026-03-25 04:53:36.206531 | orchestrator | Wednesday 25 March 2026 04:53:30 +0000 (0:00:01.563) 0:00:56.846 *******
2026-03-25 04:53:36.206540 | orchestrator | skipping: [testbed-node-0]
2026-03-25 04:53:36.206549 | orchestrator | skipping: [testbed-node-1]
2026-03-25 04:53:36.206558 | orchestrator | skipping: [testbed-node-2]
2026-03-25 04:53:36.206567 | orchestrator |
2026-03-25 04:53:36.206576 | orchestrator | TASK [mariadb : Copying MariaDB log file to /tmp] ******************************
2026-03-25 04:53:36.206586 | orchestrator | Wednesday 25 March 2026 04:53:31 +0000 (0:00:01.601) 0:00:58.448 *******
2026-03-25 04:53:36.206595 | orchestrator | skipping: [testbed-node-0]
2026-03-25 04:53:36.206604 | orchestrator | skipping: [testbed-node-1]
2026-03-25 04:53:36.206638 | orchestrator | skipping: [testbed-node-2]
2026-03-25 04:53:36.206647 | orchestrator |
2026-03-25 04:53:36.206656 | orchestrator | TASK [mariadb : Get MariaDB wsrep recovery seqno] ******************************
2026-03-25 04:53:36.206665 | orchestrator | Wednesday 25 March 2026 04:53:33 +0000 (0:00:01.491) 0:00:59.939 *******
2026-03-25 04:53:36.206674 | orchestrator | skipping: [testbed-node-0]
2026-03-25 04:53:36.206683 | orchestrator | skipping: [testbed-node-1]
2026-03-25 04:53:36.206691 | orchestrator | skipping: [testbed-node-2]
2026-03-25 04:53:36.206700 | orchestrator |
2026-03-25 04:53:36.206709 | orchestrator | TASK [mariadb : Removing MariaDB log file from /tmp] ***************************
2026-03-25 04:53:36.206718 | orchestrator | Wednesday 25 March 2026 04:53:34 +0000 (0:00:01.441) 0:01:01.381 *******
2026-03-25 04:53:36.206726 | orchestrator | skipping: [testbed-node-0]
2026-03-25 04:53:36.206736 | orchestrator | skipping: [testbed-node-1]
2026-03-25 04:53:36.206745 | orchestrator | skipping: [testbed-node-2]
2026-03-25 04:53:36.206754 | orchestrator |
2026-03-25 04:53:36.206771 | orchestrator | TASK [mariadb : Registering MariaDB seqno variable] ****************************
2026-03-25 04:53:54.058205 | orchestrator | Wednesday 25 March 2026 04:53:36 +0000 (0:00:01.423) 0:01:02.805 *******
2026-03-25 04:53:54.058322 | orchestrator | skipping: [testbed-node-0]
2026-03-25 04:53:54.058339 | orchestrator | skipping: [testbed-node-1]
2026-03-25 04:53:54.058351 | orchestrator | skipping: [testbed-node-2]
2026-03-25 04:53:54.058362 | orchestrator |
2026-03-25 04:53:54.058374 | orchestrator | TASK [mariadb : Comparing seqno value on all mariadb hosts] ********************
2026-03-25 04:53:54.058385 | orchestrator | Wednesday 25 March 2026 04:53:37 +0000 (0:00:01.596) 0:01:04.401 *******
2026-03-25 04:53:54.058396 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-03-25 04:53:54.058407 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-03-25 04:53:54.058418 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-03-25 04:53:54.058429 | orchestrator | skipping: [testbed-node-0]
2026-03-25 04:53:54.058440 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-03-25 04:53:54.058451 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-03-25 04:53:54.058461 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-03-25 04:53:54.058472 | orchestrator | skipping: [testbed-node-1]
2026-03-25 04:53:54.058483 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-03-25 04:53:54.058493 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-03-25 04:53:54.058504 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-03-25 04:53:54.058539 | orchestrator | skipping: [testbed-node-2]
2026-03-25 04:53:54.058551 | orchestrator |
2026-03-25 04:53:54.058563 | orchestrator | TASK [mariadb : Writing hostname of host with the largest seqno to temp file] ***
2026-03-25 04:53:54.058574 | orchestrator | Wednesday 25 March 2026 04:53:39 +0000 (0:00:01.392) 0:01:05.794 *******
2026-03-25 04:53:54.058584 | orchestrator | skipping: [testbed-node-0]
2026-03-25 04:53:54.058595 | orchestrator | skipping: [testbed-node-1]
2026-03-25 04:53:54.058607 | orchestrator | skipping: [testbed-node-2]
2026-03-25 04:53:54.058626 | orchestrator |
2026-03-25 04:53:54.058714 | orchestrator | TASK [mariadb : Registering mariadb_recover_inventory_name from temp file] *****
2026-03-25 04:53:54.058736 | orchestrator | Wednesday 25 March 2026 04:53:40 +0000 (0:00:01.385) 0:01:07.180 *******
2026-03-25 04:53:54.058757 | orchestrator | skipping: [testbed-node-0]
2026-03-25 04:53:54.058776 | orchestrator | skipping: [testbed-node-1]
2026-03-25 04:53:54.058797 | orchestrator | skipping: [testbed-node-2]
2026-03-25 04:53:54.058811 | orchestrator |
2026-03-25 04:53:54.058825 | orchestrator | TASK [mariadb : Store bootstrap and master hostnames into facts] ***************
2026-03-25 04:53:54.058854 | orchestrator | Wednesday 25 March 2026 04:53:41 +0000 (0:00:01.352) 0:01:08.532 *******
2026-03-25 04:53:54.058867 | orchestrator | skipping: [testbed-node-0]
2026-03-25 04:53:54.058879 | orchestrator | skipping: [testbed-node-1]
2026-03-25 04:53:54.058892 | orchestrator | skipping: [testbed-node-2]
2026-03-25 04:53:54.058904 | orchestrator |
2026-03-25 04:53:54.058916 | orchestrator | TASK [mariadb : Set grastate.dat file from MariaDB container in bootstrap host] ***
2026-03-25 04:53:54.058929 | orchestrator | Wednesday 25 March 2026 04:53:43 +0000 (0:00:01.608) 0:01:10.140 *******
2026-03-25 04:53:54.058942 | orchestrator | skipping: [testbed-node-0]
2026-03-25 04:53:54.058954 | orchestrator | skipping: [testbed-node-1]
2026-03-25 04:53:54.058966 | orchestrator | skipping: [testbed-node-2]
2026-03-25 04:53:54.058979 | orchestrator |
2026-03-25 04:53:54.058991 | orchestrator | TASK [mariadb : Starting first MariaDB container] ******************************
2026-03-25 04:53:54.059004 | orchestrator | Wednesday 25 March 2026 04:53:44 +0000 (0:00:01.397) 0:01:11.538 *******
2026-03-25 04:53:54.059016 | orchestrator | skipping: [testbed-node-0]
2026-03-25 04:53:54.059028 | orchestrator | skipping: [testbed-node-1]
2026-03-25 04:53:54.059040 | orchestrator | skipping: [testbed-node-2]
2026-03-25 04:53:54.059053 | orchestrator |
2026-03-25 04:53:54.059066 | orchestrator | TASK [mariadb : Wait for first MariaDB container] ******************************
2026-03-25 04:53:54.059079 | orchestrator | Wednesday 25 March 2026 04:53:46 +0000
(0:00:01.395) 0:01:12.934 ******* 2026-03-25 04:53:54.059092 | orchestrator | skipping: [testbed-node-0] 2026-03-25 04:53:54.059105 | orchestrator | skipping: [testbed-node-1] 2026-03-25 04:53:54.059116 | orchestrator | skipping: [testbed-node-2] 2026-03-25 04:53:54.059126 | orchestrator | 2026-03-25 04:53:54.059137 | orchestrator | TASK [mariadb : Set first MariaDB container as primary] ************************ 2026-03-25 04:53:54.059147 | orchestrator | Wednesday 25 March 2026 04:53:47 +0000 (0:00:01.560) 0:01:14.495 ******* 2026-03-25 04:53:54.059158 | orchestrator | skipping: [testbed-node-0] 2026-03-25 04:53:54.059169 | orchestrator | skipping: [testbed-node-1] 2026-03-25 04:53:54.059179 | orchestrator | skipping: [testbed-node-2] 2026-03-25 04:53:54.059190 | orchestrator | 2026-03-25 04:53:54.059201 | orchestrator | TASK [mariadb : Wait for MariaDB to become operational] ************************ 2026-03-25 04:53:54.059211 | orchestrator | Wednesday 25 March 2026 04:53:49 +0000 (0:00:01.369) 0:01:15.864 ******* 2026-03-25 04:53:54.059223 | orchestrator | skipping: [testbed-node-0] 2026-03-25 04:53:54.059233 | orchestrator | skipping: [testbed-node-1] 2026-03-25 04:53:54.059244 | orchestrator | skipping: [testbed-node-2] 2026-03-25 04:53:54.059255 | orchestrator | 2026-03-25 04:53:54.059266 | orchestrator | TASK [mariadb : Restart slave MariaDB container(s)] **************************** 2026-03-25 04:53:54.059276 | orchestrator | Wednesday 25 March 2026 04:53:50 +0000 (0:00:01.336) 0:01:17.201 ******* 2026-03-25 04:53:54.059317 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-25 04:53:54.059344 | orchestrator | skipping: [testbed-node-0] 2026-03-25 04:53:54.059361 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-25 04:53:54.059379 | orchestrator | skipping: [testbed-node-1] 2026-03-25 04:53:54.059412 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-25 04:54:11.175920 | orchestrator | skipping: [testbed-node-2] 2026-03-25 04:54:11.176054 | orchestrator | 2026-03-25 04:54:11.176082 | orchestrator | TASK [mariadb : Wait for slave MariaDB] **************************************** 2026-03-25 04:54:11.176103 | orchestrator | Wednesday 25 March 2026 04:53:54 +0000 (0:00:03.456) 0:01:20.658 ******* 2026-03-25 04:54:11.176122 | orchestrator | skipping: [testbed-node-0] 2026-03-25 04:54:11.176142 | orchestrator | skipping: [testbed-node-1] 2026-03-25 04:54:11.176185 | orchestrator | skipping: [testbed-node-2] 2026-03-25 04:54:11.176204 | orchestrator | 2026-03-25 04:54:11.176216 | orchestrator | TASK [mariadb : Restart 
master MariaDB container(s)] *************************** 2026-03-25 04:54:11.176227 | orchestrator | Wednesday 25 March 2026 04:53:55 +0000 (0:00:01.652) 0:01:22.311 ******* 2026-03-25 04:54:11.176261 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check 
port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-25 04:54:11.176302 | orchestrator | skipping: [testbed-node-0] 2026-03-25 04:54:11.176335 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 
 2026-03-25 04:54:11.176348 | orchestrator | skipping: [testbed-node-1] 2026-03-25 04:54:11.176366 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-25 04:54:11.176378 | orchestrator | 
skipping: [testbed-node-2]
2026-03-25 04:54:11.176389 | orchestrator |
2026-03-25 04:54:11.176400 | orchestrator | TASK [mariadb : Wait for master mariadb] ***************************************
2026-03-25 04:54:11.176418 | orchestrator | Wednesday 25 March 2026 04:53:59 +0000 (0:00:03.490) 0:01:25.801 *******
2026-03-25 04:54:11.176429 | orchestrator | skipping: [testbed-node-0]
2026-03-25 04:54:11.176439 | orchestrator | skipping: [testbed-node-1]
2026-03-25 04:54:11.176452 | orchestrator | skipping: [testbed-node-2]
2026-03-25 04:54:11.176464 | orchestrator |
2026-03-25 04:54:11.176476 | orchestrator | TASK [service-check : mariadb | Get container facts] ***************************
2026-03-25 04:54:11.176488 | orchestrator | Wednesday 25 March 2026 04:54:00 +0000 (0:00:01.797) 0:01:27.599 *******
2026-03-25 04:54:11.176501 | orchestrator | skipping: [testbed-node-0]
2026-03-25 04:54:11.176514 | orchestrator | skipping: [testbed-node-1]
2026-03-25 04:54:11.176525 | orchestrator | skipping: [testbed-node-2]
2026-03-25 04:54:11.176537 | orchestrator |
2026-03-25 04:54:11.176552 | orchestrator | TASK [service-check : mariadb | Fail if containers are missing or not running] ***
2026-03-25 04:54:11.176572 | orchestrator | Wednesday 25 March 2026 04:54:02 +0000 (0:00:01.417) 0:01:29.017 *******
2026-03-25 04:54:11.176599 | orchestrator | skipping: [testbed-node-0]
2026-03-25 04:54:11.176620 | orchestrator | skipping: [testbed-node-1]
2026-03-25 04:54:11.176638 | orchestrator | skipping: [testbed-node-2]
2026-03-25 04:54:11.176656 | orchestrator |
2026-03-25 04:54:11.176703 | orchestrator | TASK [service-check : mariadb | Fail if containers are unhealthy] **************
2026-03-25 04:54:11.176719 | orchestrator | Wednesday 25 March 2026 04:54:03 +0000 (0:00:01.535) 0:01:30.552 *******
2026-03-25 04:54:11.176737 | orchestrator | skipping: [testbed-node-0]
2026-03-25 04:54:11.176755 | orchestrator | skipping: [testbed-node-1]
2026-03-25 04:54:11.176773 | orchestrator | skipping: [testbed-node-2]
2026-03-25 04:54:11.176792 | orchestrator |
2026-03-25 04:54:11.176813 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] **************
2026-03-25 04:54:11.176830 | orchestrator | Wednesday 25 March 2026 04:54:05 +0000 (0:00:01.840) 0:01:32.393 *******
2026-03-25 04:54:11.176848 | orchestrator | skipping: [testbed-node-0]
2026-03-25 04:54:11.176859 | orchestrator | skipping: [testbed-node-1]
2026-03-25 04:54:11.176870 | orchestrator | skipping: [testbed-node-2]
2026-03-25 04:54:11.176880 | orchestrator |
2026-03-25 04:54:11.176891 | orchestrator | TASK [mariadb : Create MariaDB volume] *****************************************
2026-03-25 04:54:11.176901 | orchestrator | Wednesday 25 March 2026 04:54:07 +0000 (0:00:01.895) 0:01:34.294 *******
2026-03-25 04:54:11.176912 | orchestrator | ok: [testbed-node-0]
2026-03-25 04:54:11.176923 | orchestrator | ok: [testbed-node-1]
2026-03-25 04:54:11.176934 | orchestrator | ok: [testbed-node-2]
2026-03-25 04:54:11.176944 | orchestrator |
2026-03-25 04:54:11.176955 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] *************
2026-03-25 04:54:11.176965 | orchestrator | Wednesday 25 March 2026 04:54:09 +0000 (0:00:01.895) 0:01:36.189 *******
2026-03-25 04:54:11.176976 | orchestrator | ok: [testbed-node-0]
2026-03-25 04:54:11.176992 | orchestrator | ok: [testbed-node-1]
2026-03-25 04:54:11.177010 | orchestrator | ok: [testbed-node-2]
2026-03-25 04:54:11.177028 | orchestrator |
2026-03-25 04:54:11.177045 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] *************
2026-03-25 04:54:11.177062 | orchestrator | Wednesday 25 March 2026 04:54:10 +0000 (0:00:01.335) 0:01:37.524 *******
2026-03-25 04:54:11.177094 | orchestrator | ok: [testbed-node-0]
2026-03-25 04:56:49.407306 | orchestrator | ok: [testbed-node-1]
2026-03-25 04:56:49.407423 | orchestrator | ok: [testbed-node-2]
2026-03-25 04:56:49.407439 | orchestrator |
2026-03-25 04:56:49.407452 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] ***************************
2026-03-25 04:56:49.407465 | orchestrator | Wednesday 25 March 2026 04:54:12 +0000 (0:00:01.381) 0:01:38.906 *******
2026-03-25 04:56:49.407476 | orchestrator | ok: [testbed-node-1]
2026-03-25 04:56:49.407487 | orchestrator | ok: [testbed-node-0]
2026-03-25 04:56:49.407498 | orchestrator | ok: [testbed-node-2]
2026-03-25 04:56:49.407508 | orchestrator |
2026-03-25 04:56:49.407519 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] ***********
2026-03-25 04:56:49.407530 | orchestrator | Wednesday 25 March 2026 04:54:14 +0000 (0:00:02.078) 0:01:40.985 *******
2026-03-25 04:56:49.407567 | orchestrator | ok: [testbed-node-0]
2026-03-25 04:56:49.407579 | orchestrator | ok: [testbed-node-1]
2026-03-25 04:56:49.407590 | orchestrator | ok: [testbed-node-2]
2026-03-25 04:56:49.407600 | orchestrator |
2026-03-25 04:56:49.407610 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] **************************
2026-03-25 04:56:49.407621 | orchestrator | Wednesday 25 March 2026 04:54:15 +0000 (0:00:01.405) 0:01:42.391 *******
2026-03-25 04:56:49.407632 | orchestrator | skipping: [testbed-node-0]
2026-03-25 04:56:49.407659 | orchestrator | skipping: [testbed-node-1]
2026-03-25 04:56:49.407671 | orchestrator | skipping: [testbed-node-2]
2026-03-25 04:56:49.407681 | orchestrator |
2026-03-25 04:56:49.407692 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] ***********************
2026-03-25 04:56:49.407702 | orchestrator | Wednesday 25 March 2026 04:54:17 +0000 (0:00:01.442) 0:01:43.833 *******
2026-03-25 04:56:49.407713 | orchestrator | ok: [testbed-node-1]
2026-03-25 04:56:49.407723 | orchestrator | ok: [testbed-node-0]
2026-03-25 04:56:49.407733 | orchestrator | ok: [testbed-node-2]
2026-03-25 04:56:49.407744 | orchestrator |
2026-03-25 04:56:49.407754 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] *********************
2026-03-25 04:56:49.407765 | orchestrator | Wednesday 25 March 2026 04:54:20 +0000 (0:00:03.663) 0:01:47.496 *******
2026-03-25 04:56:49.407776 | orchestrator | ok: [testbed-node-0]
2026-03-25 04:56:49.407786 | orchestrator | ok: [testbed-node-1]
2026-03-25 04:56:49.407796 | orchestrator | ok: [testbed-node-2]
2026-03-25 04:56:49.407806 | orchestrator |
2026-03-25 04:56:49.407817 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] *******
2026-03-25 04:56:49.407827 | orchestrator | Wednesday 25 March 2026 04:54:22 +0000 (0:00:01.432) 0:01:48.929 *******
2026-03-25 04:56:49.407838 | orchestrator | ok: [testbed-node-0]
2026-03-25 04:56:49.407851 | orchestrator | ok: [testbed-node-1]
2026-03-25 04:56:49.407862 | orchestrator | ok: [testbed-node-2]
2026-03-25 04:56:49.407874 | orchestrator |
2026-03-25 04:56:49.407886 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] ***
2026-03-25 04:56:49.407899 | orchestrator | Wednesday 25 March 2026 04:54:23 +0000 (0:00:01.363) 0:01:50.293 *******
2026-03-25 04:56:49.407911 | orchestrator | skipping: [testbed-node-0]
2026-03-25 04:56:49.407948 | orchestrator | skipping: [testbed-node-1]
2026-03-25 04:56:49.407961 | orchestrator | skipping: [testbed-node-2]
2026-03-25 04:56:49.407972 | orchestrator |
2026-03-25 04:56:49.407984 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-03-25 04:56:49.407997 | orchestrator | Wednesday 25 March 2026 04:54:25 +0000 (0:00:01.820) 0:01:52.114 *******
2026-03-25 04:56:49.408009 | orchestrator | skipping: [testbed-node-0]
2026-03-25 04:56:49.408021 | orchestrator | skipping: [testbed-node-1]
2026-03-25 04:56:49.408033 | orchestrator | skipping: [testbed-node-2]
2026-03-25 04:56:49.408045 | orchestrator |
2026-03-25 04:56:49.408057 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-03-25 04:56:49.408068 | orchestrator | Wednesday 25 March 2026 04:54:27 +0000 (0:00:01.663) 0:01:53.777 *******
2026-03-25 04:56:49.408080 | orchestrator | skipping: [testbed-node-0]
2026-03-25 04:56:49.408092 | orchestrator | skipping: [testbed-node-1]
2026-03-25 04:56:49.408104 | orchestrator | skipping: [testbed-node-2]
2026-03-25 04:56:49.408117 | orchestrator |
2026-03-25 04:56:49.408129 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ********
2026-03-25 04:56:49.408141 | orchestrator | Wednesday 25 March 2026 04:54:28 +0000 (0:00:01.578) 0:01:55.356 *******
2026-03-25 04:56:49.408152 | orchestrator | changed: [testbed-node-0]
2026-03-25 04:56:49.408162 | orchestrator | changed: [testbed-node-1]
2026-03-25 04:56:49.408173 | orchestrator | changed: [testbed-node-2]
2026-03-25 04:56:49.408183 | orchestrator |
2026-03-25 04:56:49.408194 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] *************************
2026-03-25 04:56:49.408204 | orchestrator | Wednesday 25 March 2026 04:54:30 +0000 (0:00:01.720) 0:01:57.076 *******
2026-03-25 04:56:49.408215 | orchestrator | skipping: [testbed-node-0]
2026-03-25 04:56:49.408234 | orchestrator | skipping: [testbed-node-1]
2026-03-25 04:56:49.408244 | orchestrator | skipping: [testbed-node-2]
2026-03-25 04:56:49.408255 | orchestrator |
2026-03-25 04:56:49.408265 | orchestrator | PLAY [Restart mariadb services] ************************************************
2026-03-25 04:56:49.408276 | orchestrator |
2026-03-25 04:56:49.408286 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2026-03-25 04:56:49.408297 | orchestrator | Wednesday 25 March 2026 04:54:32 +0000 (0:00:01.662) 0:01:58.739 *******
2026-03-25 04:56:49.408307 | orchestrator | changed: [testbed-node-0]
2026-03-25 04:56:49.408318 | orchestrator |
2026-03-25 04:56:49.408329 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2026-03-25 04:56:49.408339 | orchestrator | Wednesday 25 March 2026 04:54:59 +0000 (0:00:27.180) 0:02:25.920 *******
2026-03-25 04:56:49.408350 | orchestrator | ok: [testbed-node-0]
2026-03-25 04:56:49.408360 | orchestrator |
2026-03-25 04:56:49.408371 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2026-03-25 04:56:49.408381 | orchestrator | Wednesday 25 March 2026 04:55:03 +0000 (0:00:04.674) 0:02:30.594 *******
2026-03-25 04:56:49.408392 | orchestrator | ok: [testbed-node-0]
2026-03-25 04:56:49.408402 | orchestrator |
2026-03-25 04:56:49.408413 | orchestrator | PLAY [Restart mariadb services] ************************************************
2026-03-25 04:56:49.408423 | orchestrator |
2026-03-25 04:56:49.408433 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2026-03-25 04:56:49.408444 | orchestrator | Wednesday 25 March 2026 04:55:06 +0000 (0:00:03.000) 0:02:33.595 *******
2026-03-25 04:56:49.408455 | orchestrator | changed: [testbed-node-1]
2026-03-25 04:56:49.408465 | orchestrator |
2026-03-25 04:56:49.408476 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2026-03-25 04:56:49.408502 | orchestrator | Wednesday 25 March 2026 04:55:32 +0000 (0:00:26.019) 0:02:59.614 *******
2026-03-25 04:56:49.408514 | orchestrator | ok: [testbed-node-1]
2026-03-25 04:56:49.408524 | orchestrator |
2026-03-25 04:56:49.408535 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2026-03-25 04:56:49.408546 | orchestrator | Wednesday 25 March 2026 04:55:38 +0000 (0:00:05.501) 0:03:05.115 *******
2026-03-25 04:56:49.408556 | orchestrator | ok: [testbed-node-1]
2026-03-25 04:56:49.408567 | orchestrator |
2026-03-25 04:56:49.408578 | orchestrator | PLAY [Restart mariadb services] ************************************************
2026-03-25 04:56:49.408589 | orchestrator |
2026-03-25 04:56:49.408599 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2026-03-25 04:56:49.408610 | orchestrator | Wednesday 25 March 2026 04:55:41 +0000 (0:00:03.249) 0:03:08.364 *******
2026-03-25 04:56:49.408620 | orchestrator | changed: [testbed-node-2]
2026-03-25 04:56:49.408631 | orchestrator |
2026-03-25 04:56:49.408641 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2026-03-25 04:56:49.408652 | orchestrator | Wednesday 25 March 2026 04:56:07 +0000 (0:00:26.094) 0:03:34.459 *******
2026-03-25 04:56:49.408668 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Wait for MariaDB service port liveness (10 retries left).
2026-03-25 04:56:49.408680 | orchestrator | ok: [testbed-node-2]
2026-03-25 04:56:49.408690 | orchestrator |
2026-03-25 04:56:49.408701 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2026-03-25 04:56:49.408712 | orchestrator | Wednesday 25 March 2026 04:56:15 +0000 (0:00:07.979) 0:03:42.439 *******
2026-03-25 04:56:49.408722 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start
2026-03-25 04:56:49.408733 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2026-03-25 04:56:49.408743 | orchestrator | mariadb_bootstrap_restart
2026-03-25 04:56:49.408754 | orchestrator | ok: [testbed-node-2]
2026-03-25 04:56:49.408764 | orchestrator |
2026-03-25 04:56:49.408775 | orchestrator | PLAY [Start mariadb services] **************************************************
2026-03-25 04:56:49.408785 | orchestrator | skipping: no hosts matched
2026-03-25 04:56:49.408796 | orchestrator |
2026-03-25 04:56:49.408806 | orchestrator | PLAY [Restart bootstrap mariadb service] ***************************************
2026-03-25 04:56:49.408823 | orchestrator | skipping: no hosts matched
2026-03-25 04:56:49.408834 | orchestrator |
2026-03-25 04:56:49.408845 | orchestrator | PLAY [Apply mariadb post-configuration] ****************************************
2026-03-25 04:56:49.408855 | orchestrator |
2026-03-25 04:56:49.408866 | orchestrator | TASK [Include mariadb post-deploy.yml] *****************************************
2026-03-25 04:56:49.408876 | orchestrator | Wednesday 25 March 2026 04:56:19 +0000 (0:00:04.127) 0:03:46.566 *******
2026-03-25 04:56:49.408887 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-25 04:56:49.408897 | orchestrator |
2026-03-25 04:56:49.408908 | orchestrator | TASK [mariadb : Creating shard root mysql user] ********************************
2026-03-25 04:56:49.408941 | orchestrator | Wednesday 25 March 2026 04:56:21 +0000 (0:00:01.930) 0:03:48.497 *******
2026-03-25 04:56:49.408952 | orchestrator | skipping: [testbed-node-1]
2026-03-25 04:56:49.408963 | orchestrator | skipping: [testbed-node-2]
2026-03-25 04:56:49.408974 | orchestrator | ok: [testbed-node-0]
2026-03-25 04:56:49.408984 | orchestrator |
2026-03-25 04:56:49.408994 | orchestrator | TASK [mariadb : Creating mysql monitor user] ***********************************
2026-03-25 04:56:49.409005 | orchestrator | Wednesday 25 March 2026 04:56:24 +0000 (0:00:03.109) 0:03:51.606 *******
2026-03-25 04:56:49.409016 | orchestrator | skipping: [testbed-node-1]
2026-03-25 04:56:49.409026 | orchestrator | skipping: [testbed-node-2]
2026-03-25 04:56:49.409037 | orchestrator | changed: [testbed-node-0]
2026-03-25 04:56:49.409047 | orchestrator |
2026-03-25 04:56:49.409069 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] *********
2026-03-25 04:56:49.409080 | orchestrator | Wednesday 25 March 2026 04:56:28 +0000 (0:00:03.180) 0:03:54.787 *******
2026-03-25 04:56:49.409091 | orchestrator | skipping: [testbed-node-1]
2026-03-25 04:56:49.409102 | orchestrator | skipping:
[testbed-node-2] 2026-03-25 04:56:49.409112 | orchestrator | ok: [testbed-node-0] 2026-03-25 04:56:49.409123 | orchestrator | 2026-03-25 04:56:49.409134 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2026-03-25 04:56:49.409144 | orchestrator | Wednesday 25 March 2026 04:56:31 +0000 (0:00:03.102) 0:03:57.889 ******* 2026-03-25 04:56:49.409155 | orchestrator | skipping: [testbed-node-1] 2026-03-25 04:56:49.409166 | orchestrator | skipping: [testbed-node-2] 2026-03-25 04:56:49.409176 | orchestrator | changed: [testbed-node-0] 2026-03-25 04:56:49.409187 | orchestrator | 2026-03-25 04:56:49.409198 | orchestrator | TASK [service-check : mariadb | Get container facts] *************************** 2026-03-25 04:56:49.409208 | orchestrator | Wednesday 25 March 2026 04:56:34 +0000 (0:00:03.399) 0:04:01.289 ******* 2026-03-25 04:56:49.409219 | orchestrator | ok: [testbed-node-1] 2026-03-25 04:56:49.409229 | orchestrator | ok: [testbed-node-0] 2026-03-25 04:56:49.409240 | orchestrator | ok: [testbed-node-2] 2026-03-25 04:56:49.409250 | orchestrator | 2026-03-25 04:56:49.409261 | orchestrator | TASK [service-check : mariadb | Fail if containers are missing or not running] *** 2026-03-25 04:56:49.409272 | orchestrator | Wednesday 25 March 2026 04:56:40 +0000 (0:00:06.322) 0:04:07.612 ******* 2026-03-25 04:56:49.409282 | orchestrator | skipping: [testbed-node-0] 2026-03-25 04:56:49.409293 | orchestrator | skipping: [testbed-node-1] 2026-03-25 04:56:49.409303 | orchestrator | skipping: [testbed-node-2] 2026-03-25 04:56:49.409314 | orchestrator | 2026-03-25 04:56:49.409325 | orchestrator | TASK [service-check : mariadb | Fail if containers are unhealthy] ************** 2026-03-25 04:56:49.409335 | orchestrator | Wednesday 25 March 2026 04:56:44 +0000 (0:00:03.490) 0:04:11.103 ******* 2026-03-25 04:56:49.409346 | orchestrator | skipping: [testbed-node-0] 2026-03-25 04:56:49.409356 | orchestrator | skipping: [testbed-node-1] 
2026-03-25 04:56:49.409367 | orchestrator | skipping: [testbed-node-2] 2026-03-25 04:56:49.409377 | orchestrator | 2026-03-25 04:56:49.409388 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2026-03-25 04:56:49.409398 | orchestrator | Wednesday 25 March 2026 04:56:46 +0000 (0:00:01.577) 0:04:12.680 ******* 2026-03-25 04:56:49.409409 | orchestrator | ok: [testbed-node-0] 2026-03-25 04:56:49.409420 | orchestrator | ok: [testbed-node-1] 2026-03-25 04:56:49.409437 | orchestrator | ok: [testbed-node-2] 2026-03-25 04:56:49.409448 | orchestrator | 2026-03-25 04:56:49.409458 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2026-03-25 04:56:49.409479 | orchestrator | Wednesday 25 March 2026 04:56:49 +0000 (0:00:03.321) 0:04:16.001 ******* 2026-03-25 04:57:09.798289 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-25 04:57:09.798427 | orchestrator | 2026-03-25 04:57:09.798449 | orchestrator | TASK [mariadb : Run upgrade in MariaDB container] ****************************** 2026-03-25 04:57:09.798462 | orchestrator | Wednesday 25 March 2026 04:56:51 +0000 (0:00:01.985) 0:04:17.987 ******* 2026-03-25 04:57:09.798473 | orchestrator | changed: [testbed-node-0] 2026-03-25 04:57:09.798486 | orchestrator | changed: [testbed-node-2] 2026-03-25 04:57:09.798497 | orchestrator | changed: [testbed-node-1] 2026-03-25 04:57:09.798507 | orchestrator | 2026-03-25 04:57:09.798518 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-25 04:57:09.798530 | orchestrator | testbed-node-0 : ok=34  changed=8  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2026-03-25 04:57:09.798543 | orchestrator | testbed-node-1 : ok=26  changed=6  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0 2026-03-25 04:57:09.798576 | orchestrator | testbed-node-2 : ok=26  changed=6  unreachable=0 failed=0 skipped=42  
rescued=0 ignored=0 2026-03-25 04:57:09.798588 | orchestrator | 2026-03-25 04:57:09.798599 | orchestrator | 2026-03-25 04:57:09.798610 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-25 04:57:09.798621 | orchestrator | Wednesday 25 March 2026 04:57:09 +0000 (0:00:17.942) 0:04:35.930 ******* 2026-03-25 04:57:09.798632 | orchestrator | =============================================================================== 2026-03-25 04:57:09.798643 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 79.29s 2026-03-25 04:57:09.798653 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 18.16s 2026-03-25 04:57:09.798664 | orchestrator | mariadb : Run upgrade in MariaDB container ----------------------------- 17.94s 2026-03-25 04:57:09.798675 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ----------------------- 10.38s 2026-03-25 04:57:09.798686 | orchestrator | service-check : mariadb | Get container facts --------------------------- 6.32s 2026-03-25 04:57:09.798696 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 5.25s 2026-03-25 04:57:09.798707 | orchestrator | mariadb : Copying over config.json files for services ------------------- 4.45s 2026-03-25 04:57:09.798718 | orchestrator | service-check-containers : mariadb | Check containers ------------------- 4.41s 2026-03-25 04:57:09.798728 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 4.22s 2026-03-25 04:57:09.798739 | orchestrator | service-check-containers : Include tasks -------------------------------- 4.09s 2026-03-25 04:57:09.798750 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 4.02s 2026-03-25 04:57:09.798760 | orchestrator | mariadb : Check MariaDB service WSREP sync status ----------------------- 3.66s 2026-03-25 04:57:09.798771 | 
orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 3.56s 2026-03-25 04:57:09.798782 | orchestrator | service-check : mariadb | Fail if containers are missing or not running --- 3.49s 2026-03-25 04:57:09.798793 | orchestrator | mariadb : Restart master MariaDB container(s) --------------------------- 3.49s 2026-03-25 04:57:09.798804 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 3.46s 2026-03-25 04:57:09.798814 | orchestrator | mariadb : Restart slave MariaDB container(s) ---------------------------- 3.46s 2026-03-25 04:57:09.798825 | orchestrator | mariadb : Granting permissions on Mariabackup database to backup user --- 3.40s 2026-03-25 04:57:09.798836 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 3.32s 2026-03-25 04:57:09.798873 | orchestrator | mariadb : Creating mysql monitor user ----------------------------------- 3.18s 2026-03-25 04:57:10.125197 | orchestrator | + osism apply -a upgrade rabbitmq 2026-03-25 04:57:12.262836 | orchestrator | 2026-03-25 04:57:12 | INFO  | Task 124d17f5-c42b-4bfb-9630-7bde7a687e29 (rabbitmq) was prepared for execution. 2026-03-25 04:57:12.262936 | orchestrator | 2026-03-25 04:57:12 | INFO  | It takes a moment until task 124d17f5-c42b-4bfb-9630-7bde7a687e29 (rabbitmq) has been started and output is visible here. 
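When post-processing console output like the MariaDB PLAY RECAP above, a common check is that every host reports `failed=0` and `unreachable=0` before the job moves on to the next `osism apply` step. The following is a minimal, illustrative sketch of such a check; the helper names (`parse_recap_line`, `recap_is_healthy`) are hypothetical and not part of the OSISM or Ansible tooling shown in this log.

```python
import re

# Matches one Ansible PLAY RECAP host line, e.g.:
#   testbed-node-0 : ok=34 changed=8 unreachable=0 failed=0 skipped=36 rescued=0 ignored=0
RECAP_RE = re.compile(r"^(?P<host>\S+)\s*:\s*(?P<counts>(?:\w+=\d+\s*)+)$")

def parse_recap_line(line):
    """Return (hostname, {counter: value}) parsed from one recap line."""
    m = RECAP_RE.match(line.strip())
    if m is None:
        raise ValueError("not a recap line: %r" % line)
    counts = {}
    for pair in m.group("counts").split():
        key, value = pair.split("=")
        counts[key] = int(value)
    return m.group("host"), counts

def recap_is_healthy(counts):
    """A host passed if nothing failed and it stayed reachable."""
    return counts.get("failed", 0) == 0 and counts.get("unreachable", 0) == 0
```

Feeding it the recap line for `testbed-node-0` from the log above would yield `failed=0`, so `recap_is_healthy` returns `True`; a non-zero `failed` or `unreachable` count would flag the host before the rabbitmq upgrade is attempted.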
2026-03-25 04:57:56.443406 | orchestrator | 2026-03-25 04:57:56.443553 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-25 04:57:56.443584 | orchestrator | 2026-03-25 04:57:56.443601 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-25 04:57:56.443613 | orchestrator | Wednesday 25 March 2026 04:57:18 +0000 (0:00:01.434) 0:00:01.434 ******* 2026-03-25 04:57:56.443624 | orchestrator | ok: [testbed-node-0] 2026-03-25 04:57:56.443636 | orchestrator | ok: [testbed-node-1] 2026-03-25 04:57:56.443647 | orchestrator | ok: [testbed-node-2] 2026-03-25 04:57:56.443658 | orchestrator | 2026-03-25 04:57:56.443669 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-25 04:57:56.443680 | orchestrator | Wednesday 25 March 2026 04:57:19 +0000 (0:00:01.799) 0:00:03.233 ******* 2026-03-25 04:57:56.443691 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True) 2026-03-25 04:57:56.443703 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True) 2026-03-25 04:57:56.443713 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True) 2026-03-25 04:57:56.443725 | orchestrator | 2026-03-25 04:57:56.443736 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2026-03-25 04:57:56.443747 | orchestrator | 2026-03-25 04:57:56.443758 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-03-25 04:57:56.443768 | orchestrator | Wednesday 25 March 2026 04:57:21 +0000 (0:00:01.814) 0:00:05.048 ******* 2026-03-25 04:57:56.443780 | orchestrator | included: /ansible/roles/rabbitmq/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-25 04:57:56.443791 | orchestrator | 2026-03-25 04:57:56.443802 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 
2026-03-25 04:57:56.443813 | orchestrator | Wednesday 25 March 2026 04:57:24 +0000 (0:00:02.896) 0:00:07.945 ******* 2026-03-25 04:57:56.443824 | orchestrator | ok: [testbed-node-0] 2026-03-25 04:57:56.443834 | orchestrator | 2026-03-25 04:57:56.443845 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2026-03-25 04:57:56.443856 | orchestrator | Wednesday 25 March 2026 04:57:26 +0000 (0:00:02.279) 0:00:10.224 ******* 2026-03-25 04:57:56.443867 | orchestrator | ok: [testbed-node-0] 2026-03-25 04:57:56.443877 | orchestrator | 2026-03-25 04:57:56.443888 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] ************************************* 2026-03-25 04:57:56.443899 | orchestrator | Wednesday 25 March 2026 04:57:30 +0000 (0:00:03.190) 0:00:13.415 ******* 2026-03-25 04:57:56.443910 | orchestrator | changed: [testbed-node-0] 2026-03-25 04:57:56.443922 | orchestrator | 2026-03-25 04:57:56.443933 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2026-03-25 04:57:56.443962 | orchestrator | Wednesday 25 March 2026 04:57:40 +0000 (0:00:09.921) 0:00:23.337 ******* 2026-03-25 04:57:56.443976 | orchestrator | ok: [testbed-node-0] => { 2026-03-25 04:57:56.443988 | orchestrator |  "changed": false, 2026-03-25 04:57:56.444000 | orchestrator |  "msg": "All assertions passed" 2026-03-25 04:57:56.444014 | orchestrator | } 2026-03-25 04:57:56.444069 | orchestrator | 2026-03-25 04:57:56.444082 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2026-03-25 04:57:56.444095 | orchestrator | Wednesday 25 March 2026 04:57:41 +0000 (0:00:01.314) 0:00:24.651 ******* 2026-03-25 04:57:56.444108 | orchestrator | ok: [testbed-node-0] => { 2026-03-25 04:57:56.444120 | orchestrator |  "changed": false, 2026-03-25 04:57:56.444133 | orchestrator |  "msg": "All assertions passed" 2026-03-25 04:57:56.444145 | orchestrator | } 2026-03-25 04:57:56.444179 | 
orchestrator | 2026-03-25 04:57:56.444190 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-03-25 04:57:56.444201 | orchestrator | Wednesday 25 March 2026 04:57:43 +0000 (0:00:01.759) 0:00:26.411 ******* 2026-03-25 04:57:56.444212 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-25 04:57:56.444223 | orchestrator | 2026-03-25 04:57:56.444233 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-03-25 04:57:56.444244 | orchestrator | Wednesday 25 March 2026 04:57:44 +0000 (0:00:01.749) 0:00:28.160 ******* 2026-03-25 04:57:56.444255 | orchestrator | ok: [testbed-node-0] 2026-03-25 04:57:56.444265 | orchestrator | 2026-03-25 04:57:56.444276 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2026-03-25 04:57:56.444286 | orchestrator | Wednesday 25 March 2026 04:57:47 +0000 (0:00:02.314) 0:00:30.475 ******* 2026-03-25 04:57:56.444297 | orchestrator | ok: [testbed-node-0] 2026-03-25 04:57:56.444308 | orchestrator | 2026-03-25 04:57:56.444319 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2026-03-25 04:57:56.444329 | orchestrator | Wednesday 25 March 2026 04:57:50 +0000 (0:00:03.113) 0:00:33.589 ******* 2026-03-25 04:57:56.444340 | orchestrator | skipping: [testbed-node-0] 2026-03-25 04:57:56.444350 | orchestrator | 2026-03-25 04:57:56.444361 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2026-03-25 04:57:56.444372 | orchestrator | Wednesday 25 March 2026 04:57:52 +0000 (0:00:01.915) 0:00:35.504 ******* 2026-03-25 04:57:56.444411 | orchestrator | ok: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-25 04:57:56.444429 | orchestrator | ok: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 
'host_group': 'rabbitmq'}}}}) 2026-03-25 04:57:56.444448 | orchestrator | ok: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-25 04:57:56.444469 | orchestrator | 2026-03-25 04:57:56.444480 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2026-03-25 04:57:56.444491 | orchestrator | Wednesday 25 March 2026 04:57:53 +0000 (0:00:01.750) 0:00:37.255 ******* 2026-03-25 04:57:56.444502 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 
'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-25 04:57:56.444523 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-25 04:58:16.837677 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 
'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-25 04:58:16.837853 | orchestrator | 2026-03-25 04:58:16.837907 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2026-03-25 04:58:16.837932 | orchestrator | Wednesday 25 March 2026 04:57:56 +0000 (0:00:02.450) 0:00:39.706 ******* 2026-03-25 04:58:16.837952 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-03-25 04:58:16.837972 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-03-25 04:58:16.837991 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-03-25 04:58:16.838011 | orchestrator | 2026-03-25 04:58:16.838122 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 2026-03-25 04:58:16.838135 | orchestrator | Wednesday 25 March 2026 04:57:58 +0000 (0:00:02.416) 0:00:42.122 ******* 2026-03-25 04:58:16.838145 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-03-25 04:58:16.838156 | orchestrator | changed: [testbed-node-0] => 
(item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-03-25 04:58:16.838166 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-03-25 04:58:16.838177 | orchestrator | 2026-03-25 04:58:16.838188 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2026-03-25 04:58:16.838246 | orchestrator | Wednesday 25 March 2026 04:58:01 +0000 (0:00:02.994) 0:00:45.117 ******* 2026-03-25 04:58:16.838267 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-03-25 04:58:16.838285 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-03-25 04:58:16.838303 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-03-25 04:58:16.838322 | orchestrator | 2026-03-25 04:58:16.838342 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2026-03-25 04:58:16.838360 | orchestrator | Wednesday 25 March 2026 04:58:04 +0000 (0:00:02.394) 0:00:47.511 ******* 2026-03-25 04:58:16.838378 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-03-25 04:58:16.838396 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-03-25 04:58:16.838415 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-03-25 04:58:16.838432 | orchestrator | 2026-03-25 04:58:16.838451 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ******************************** 2026-03-25 04:58:16.838470 | orchestrator | Wednesday 25 March 2026 04:58:06 +0000 (0:00:02.329) 0:00:49.841 ******* 2026-03-25 04:58:16.838489 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-03-25 04:58:16.838507 | orchestrator | ok: 
[testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-03-25 04:58:16.838525 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-03-25 04:58:16.838543 | orchestrator | 2026-03-25 04:58:16.838560 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2026-03-25 04:58:16.838579 | orchestrator | Wednesday 25 March 2026 04:58:08 +0000 (0:00:02.341) 0:00:52.183 ******* 2026-03-25 04:58:16.838598 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-03-25 04:58:16.838617 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-03-25 04:58:16.838635 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-03-25 04:58:16.838652 | orchestrator | 2026-03-25 04:58:16.838663 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-03-25 04:58:16.838688 | orchestrator | Wednesday 25 March 2026 04:58:12 +0000 (0:00:03.640) 0:00:55.824 ******* 2026-03-25 04:58:16.838699 | orchestrator | included: /ansible/roles/rabbitmq/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-25 04:58:16.838710 | orchestrator | 2026-03-25 04:58:16.838744 | orchestrator | TASK [service-cert-copy : rabbitmq | Copying over extra CA certificates] ******* 2026-03-25 04:58:16.838755 | orchestrator | Wednesday 25 March 2026 04:58:14 +0000 (0:00:01.756) 0:00:57.581 ******* 2026-03-25 04:58:16.838777 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 
'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-25 04:58:16.838792 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-25 04:58:16.838805 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 
'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-25 04:58:16.838816 | orchestrator | 2026-03-25 04:58:16.838827 | orchestrator | TASK [service-cert-copy : rabbitmq | Copying over backend internal TLS certificate] *** 2026-03-25 04:58:16.838838 | orchestrator | Wednesday 25 March 2026 04:58:16 +0000 (0:00:02.285) 0:00:59.866 ******* 2026-03-25 04:58:16.838866 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-25 04:58:26.119683 | orchestrator | skipping: [testbed-node-0] 2026-03-25 04:58:26.119853 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-25 04:58:26.119890 | orchestrator | skipping: [testbed-node-1] 2026-03-25 04:58:26.119915 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 
'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-25 04:58:26.119935 | orchestrator | skipping: [testbed-node-2] 2026-03-25 04:58:26.119954 | orchestrator | 2026-03-25 04:58:26.119974 | orchestrator | TASK [service-cert-copy : rabbitmq | Copying over backend internal TLS key] **** 2026-03-25 04:58:26.119994 | orchestrator | Wednesday 25 March 2026 04:58:18 +0000 (0:00:01.586) 0:01:01.453 ******* 2026-03-25 04:58:26.120016 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-25 04:58:26.120067 | orchestrator | skipping: [testbed-node-0] 2026-03-25 04:58:26.120156 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-25 04:58:26.120170 | orchestrator | skipping: [testbed-node-1] 2026-03-25 04:58:26.120189 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-25 04:58:26.120201 | orchestrator | skipping: [testbed-node-2] 2026-03-25 04:58:26.120212 | orchestrator | 2026-03-25 04:58:26.120223 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2026-03-25 04:58:26.120234 | orchestrator | Wednesday 25 March 2026 04:58:20 +0000 (0:00:01.927) 0:01:03.380 ******* 2026-03-25 04:58:26.120245 | orchestrator | ok: [testbed-node-1] 2026-03-25 04:58:26.120257 | orchestrator | ok: [testbed-node-0] 2026-03-25 04:58:26.120268 | orchestrator | ok: [testbed-node-2] 2026-03-25 04:58:26.120279 | orchestrator | 2026-03-25 04:58:26.120289 | orchestrator | TASK [service-check-containers : rabbitmq | Check containers] ****************** 2026-03-25 04:58:26.120300 | orchestrator | Wednesday 25 March 2026 04:58:23 +0000 (0:00:03.747) 0:01:07.128 ******* 2026-03-25 04:58:26.120312 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 
'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-25 04:58:26.120341 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-25 05:00:12.112324 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 
'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-25 05:00:12.112434 | orchestrator | 2026-03-25 05:00:12.112452 | orchestrator | TASK [service-check-containers : rabbitmq | Notify handlers to restart containers] *** 2026-03-25 05:00:12.112465 | orchestrator | Wednesday 25 March 2026 04:58:26 +0000 (0:00:02.263) 0:01:09.391 ******* 2026-03-25 05:00:12.112477 | orchestrator | changed: [testbed-node-0] => { 2026-03-25 05:00:12.112489 | orchestrator |  "msg": "Notifying handlers" 2026-03-25 05:00:12.112500 | orchestrator | } 2026-03-25 05:00:12.112512 | orchestrator | changed: [testbed-node-1] => { 2026-03-25 05:00:12.112523 | orchestrator |  "msg": "Notifying handlers" 2026-03-25 05:00:12.112533 | orchestrator | } 2026-03-25 05:00:12.112544 | orchestrator | changed: [testbed-node-2] => { 2026-03-25 05:00:12.112555 | orchestrator |  "msg": "Notifying handlers" 2026-03-25 05:00:12.112565 | orchestrator | } 2026-03-25 05:00:12.112576 | orchestrator | 2026-03-25 05:00:12.112587 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-03-25 05:00:12.112707 | orchestrator | Wednesday 25 March 2026 04:58:27 +0000 (0:00:01.395) 0:01:10.787 ******* 2026-03-25 05:00:12.112732 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-25 05:00:12.112753 | orchestrator | skipping: [testbed-node-0] 2026-03-25 05:00:12.112774 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-25 05:00:12.112794 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:00:12.112846 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-25 05:00:12.112868 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:00:12.112888 | orchestrator | 2026-03-25 05:00:12.112907 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2026-03-25 05:00:12.112927 | orchestrator | Wednesday 25 March 2026 04:58:29 +0000 (0:00:02.077) 0:01:12.864 ******* 2026-03-25 05:00:12.112945 | orchestrator | changed: [testbed-node-0] 2026-03-25 05:00:12.112967 | orchestrator | changed: [testbed-node-1] 2026-03-25 05:00:12.112987 | orchestrator | 
changed: [testbed-node-2] 2026-03-25 05:00:12.113021 | orchestrator | 2026-03-25 05:00:12.113040 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-03-25 05:00:12.113061 | orchestrator | 2026-03-25 05:00:12.113081 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-03-25 05:00:12.113101 | orchestrator | Wednesday 25 March 2026 04:58:31 +0000 (0:00:02.335) 0:01:15.200 ******* 2026-03-25 05:00:12.113121 | orchestrator | ok: [testbed-node-0] 2026-03-25 05:00:12.113143 | orchestrator | 2026-03-25 05:00:12.113164 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-03-25 05:00:12.113185 | orchestrator | Wednesday 25 March 2026 04:58:34 +0000 (0:00:02.171) 0:01:17.371 ******* 2026-03-25 05:00:12.113206 | orchestrator | changed: [testbed-node-0] 2026-03-25 05:00:12.113226 | orchestrator | 2026-03-25 05:00:12.113287 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-03-25 05:00:12.113300 | orchestrator | Wednesday 25 March 2026 04:58:43 +0000 (0:00:09.803) 0:01:27.175 ******* 2026-03-25 05:00:12.113310 | orchestrator | changed: [testbed-node-0] 2026-03-25 05:00:12.113321 | orchestrator | 2026-03-25 05:00:12.113332 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-03-25 05:00:12.113343 | orchestrator | Wednesday 25 March 2026 04:58:52 +0000 (0:00:09.094) 0:01:36.270 ******* 2026-03-25 05:00:12.113353 | orchestrator | changed: [testbed-node-0] 2026-03-25 05:00:12.113364 | orchestrator | 2026-03-25 05:00:12.113375 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-03-25 05:00:12.113385 | orchestrator | 2026-03-25 05:00:12.113396 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-03-25 05:00:12.113407 | orchestrator | 
Wednesday 25 March 2026 04:59:02 +0000 (0:00:09.397) 0:01:45.668 ******* 2026-03-25 05:00:12.113417 | orchestrator | ok: [testbed-node-1] 2026-03-25 05:00:12.113428 | orchestrator | 2026-03-25 05:00:12.113439 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-03-25 05:00:12.113449 | orchestrator | Wednesday 25 March 2026 04:59:04 +0000 (0:00:01.731) 0:01:47.400 ******* 2026-03-25 05:00:12.113460 | orchestrator | changed: [testbed-node-1] 2026-03-25 05:00:12.113470 | orchestrator | 2026-03-25 05:00:12.113481 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-03-25 05:00:12.113492 | orchestrator | Wednesday 25 March 2026 04:59:13 +0000 (0:00:09.255) 0:01:56.656 ******* 2026-03-25 05:00:12.113503 | orchestrator | changed: [testbed-node-1] 2026-03-25 05:00:12.113514 | orchestrator | 2026-03-25 05:00:12.113524 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-03-25 05:00:12.113535 | orchestrator | Wednesday 25 March 2026 04:59:27 +0000 (0:00:14.386) 0:02:11.042 ******* 2026-03-25 05:00:12.113546 | orchestrator | changed: [testbed-node-1] 2026-03-25 05:00:12.113556 | orchestrator | 2026-03-25 05:00:12.113567 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-03-25 05:00:12.113578 | orchestrator | 2026-03-25 05:00:12.113589 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-03-25 05:00:12.113600 | orchestrator | Wednesday 25 March 2026 04:59:38 +0000 (0:00:10.991) 0:02:22.033 ******* 2026-03-25 05:00:12.113611 | orchestrator | ok: [testbed-node-2] 2026-03-25 05:00:12.113621 | orchestrator | 2026-03-25 05:00:12.113632 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-03-25 05:00:12.113642 | orchestrator | Wednesday 25 March 2026 04:59:40 +0000 (0:00:01.739) 
0:02:23.773 ******* 2026-03-25 05:00:12.113653 | orchestrator | changed: [testbed-node-2] 2026-03-25 05:00:12.113664 | orchestrator | 2026-03-25 05:00:12.113674 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-03-25 05:00:12.113685 | orchestrator | Wednesday 25 March 2026 04:59:48 +0000 (0:00:08.403) 0:02:32.177 ******* 2026-03-25 05:00:12.113696 | orchestrator | changed: [testbed-node-2] 2026-03-25 05:00:12.113706 | orchestrator | 2026-03-25 05:00:12.113717 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-03-25 05:00:12.113728 | orchestrator | Wednesday 25 March 2026 05:00:02 +0000 (0:00:13.954) 0:02:46.131 ******* 2026-03-25 05:00:12.113749 | orchestrator | changed: [testbed-node-2] 2026-03-25 05:00:12.113760 | orchestrator | 2026-03-25 05:00:12.113770 | orchestrator | PLAY [Apply rabbitmq post-configuration] *************************************** 2026-03-25 05:00:12.113781 | orchestrator | 2026-03-25 05:00:12.113792 | orchestrator | TASK [Include rabbitmq post-deploy.yml] **************************************** 2026-03-25 05:00:12.113814 | orchestrator | Wednesday 25 March 2026 05:00:12 +0000 (0:00:09.249) 0:02:55.381 ******* 2026-03-25 05:00:18.061868 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-25 05:00:18.061979 | orchestrator | 2026-03-25 05:00:18.061996 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2026-03-25 05:00:18.062008 | orchestrator | Wednesday 25 March 2026 05:00:13 +0000 (0:00:01.330) 0:02:56.711 ******* 2026-03-25 05:00:18.062083 | orchestrator | ok: [testbed-node-0] 2026-03-25 05:00:18.062097 | orchestrator | ok: [testbed-node-2] 2026-03-25 05:00:18.062108 | orchestrator | ok: [testbed-node-1] 2026-03-25 05:00:18.062118 | orchestrator | 2026-03-25 05:00:18.062130 | orchestrator | PLAY RECAP 
********************************************************************* 2026-03-25 05:00:18.062160 | orchestrator | testbed-node-0 : ok=31  changed=11  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-25 05:00:18.062173 | orchestrator | testbed-node-1 : ok=24  changed=10  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-25 05:00:18.062184 | orchestrator | testbed-node-2 : ok=24  changed=10  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-25 05:00:18.062194 | orchestrator | 2026-03-25 05:00:18.062205 | orchestrator | 2026-03-25 05:00:18.062216 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-25 05:00:18.062227 | orchestrator | Wednesday 25 March 2026 05:00:17 +0000 (0:00:04.235) 0:03:00.947 ******* 2026-03-25 05:00:18.062237 | orchestrator | =============================================================================== 2026-03-25 05:00:18.062315 | orchestrator | rabbitmq : Restart rabbitmq container ---------------------------------- 37.44s 2026-03-25 05:00:18.062339 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 29.64s 2026-03-25 05:00:18.062350 | orchestrator | rabbitmq : Put RabbitMQ node into maintenance mode --------------------- 27.46s 2026-03-25 05:00:18.062361 | orchestrator | rabbitmq : Get new RabbitMQ version ------------------------------------- 9.92s 2026-03-25 05:00:18.062372 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 5.64s 2026-03-25 05:00:18.062382 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 4.24s 2026-03-25 05:00:18.062393 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 3.75s 2026-03-25 05:00:18.062406 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 3.64s 2026-03-25 05:00:18.062419 | orchestrator | rabbitmq : Get current RabbitMQ 
version --------------------------------- 3.19s 2026-03-25 05:00:18.062431 | orchestrator | rabbitmq : List RabbitMQ policies --------------------------------------- 3.11s 2026-03-25 05:00:18.062444 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 3.00s 2026-03-25 05:00:18.062455 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 2.90s 2026-03-25 05:00:18.062468 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 2.45s 2026-03-25 05:00:18.062480 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 2.42s 2026-03-25 05:00:18.062493 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 2.39s 2026-03-25 05:00:18.062506 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 2.34s 2026-03-25 05:00:18.062518 | orchestrator | rabbitmq : Restart rabbitmq container ----------------------------------- 2.34s 2026-03-25 05:00:18.062531 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 2.33s 2026-03-25 05:00:18.062566 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 2.31s 2026-03-25 05:00:18.062579 | orchestrator | service-cert-copy : rabbitmq | Copying over extra CA certificates ------- 2.29s 2026-03-25 05:00:18.388757 | orchestrator | + osism apply -a upgrade openvswitch 2026-03-25 05:00:20.553856 | orchestrator | 2026-03-25 05:00:20 | INFO  | Task 2e55b1fa-2bcc-4d02-ab4b-dec4a04a1763 (openvswitch) was prepared for execution. 2026-03-25 05:00:20.553957 | orchestrator | 2026-03-25 05:00:20 | INFO  | It takes a moment until task 2e55b1fa-2bcc-4d02-ab4b-dec4a04a1763 (openvswitch) has been started and output is visible here. 
2026-03-25 05:00:48.025107 | orchestrator | 2026-03-25 05:00:48.025210 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-25 05:00:48.025221 | orchestrator | 2026-03-25 05:00:48.025229 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-25 05:00:48.025236 | orchestrator | Wednesday 25 March 2026 05:00:26 +0000 (0:00:02.045) 0:00:02.045 ******* 2026-03-25 05:00:48.025243 | orchestrator | ok: [testbed-node-0] 2026-03-25 05:00:48.025250 | orchestrator | ok: [testbed-node-1] 2026-03-25 05:00:48.025256 | orchestrator | ok: [testbed-node-2] 2026-03-25 05:00:48.025263 | orchestrator | ok: [testbed-node-3] 2026-03-25 05:00:48.025269 | orchestrator | ok: [testbed-node-4] 2026-03-25 05:00:48.025275 | orchestrator | ok: [testbed-node-5] 2026-03-25 05:00:48.025281 | orchestrator | 2026-03-25 05:00:48.025288 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-25 05:00:48.025336 | orchestrator | Wednesday 25 March 2026 05:00:29 +0000 (0:00:02.748) 0:00:04.794 ******* 2026-03-25 05:00:48.025359 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-25 05:00:48.025411 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-25 05:00:48.025420 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-25 05:00:48.025427 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-25 05:00:48.025434 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-25 05:00:48.025452 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-25 05:00:48.025458 | orchestrator | 2026-03-25 05:00:48.025465 | orchestrator | PLAY [Apply role openvswitch] 
************************************************** 2026-03-25 05:00:48.025472 | orchestrator | 2026-03-25 05:00:48.025486 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2026-03-25 05:00:48.025493 | orchestrator | Wednesday 25 March 2026 05:00:31 +0000 (0:00:02.151) 0:00:06.945 ******* 2026-03-25 05:00:48.025514 | orchestrator | included: /ansible/roles/openvswitch/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-25 05:00:48.025522 | orchestrator | 2026-03-25 05:00:48.025529 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-03-25 05:00:48.025535 | orchestrator | Wednesday 25 March 2026 05:00:35 +0000 (0:00:03.426) 0:00:10.372 ******* 2026-03-25 05:00:48.025542 | orchestrator | ok: [testbed-node-1] => (item=openvswitch) 2026-03-25 05:00:48.025548 | orchestrator | ok: [testbed-node-0] => (item=openvswitch) 2026-03-25 05:00:48.025554 | orchestrator | ok: [testbed-node-2] => (item=openvswitch) 2026-03-25 05:00:48.025561 | orchestrator | ok: [testbed-node-3] => (item=openvswitch) 2026-03-25 05:00:48.025567 | orchestrator | ok: [testbed-node-4] => (item=openvswitch) 2026-03-25 05:00:48.025573 | orchestrator | ok: [testbed-node-5] => (item=openvswitch) 2026-03-25 05:00:48.025579 | orchestrator | 2026-03-25 05:00:48.025586 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-03-25 05:00:48.025592 | orchestrator | Wednesday 25 March 2026 05:00:37 +0000 (0:00:02.288) 0:00:12.660 ******* 2026-03-25 05:00:48.025598 | orchestrator | ok: [testbed-node-0] => (item=openvswitch) 2026-03-25 05:00:48.025605 | orchestrator | ok: [testbed-node-2] => (item=openvswitch) 2026-03-25 05:00:48.025627 | orchestrator | ok: [testbed-node-1] => (item=openvswitch) 2026-03-25 05:00:48.025634 | orchestrator | ok: [testbed-node-3] => (item=openvswitch) 2026-03-25 
05:00:48.025641 | orchestrator | ok: [testbed-node-4] => (item=openvswitch) 2026-03-25 05:00:48.025647 | orchestrator | ok: [testbed-node-5] => (item=openvswitch) 2026-03-25 05:00:48.025653 | orchestrator | 2026-03-25 05:00:48.025660 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-03-25 05:00:48.025667 | orchestrator | Wednesday 25 March 2026 05:00:40 +0000 (0:00:02.767) 0:00:15.427 ******* 2026-03-25 05:00:48.025674 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)  2026-03-25 05:00:48.025682 | orchestrator | skipping: [testbed-node-0] 2026-03-25 05:00:48.025690 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)  2026-03-25 05:00:48.025697 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:00:48.025705 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  2026-03-25 05:00:48.025712 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:00:48.025719 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2026-03-25 05:00:48.025726 | orchestrator | skipping: [testbed-node-3] 2026-03-25 05:00:48.025734 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2026-03-25 05:00:48.025741 | orchestrator | skipping: [testbed-node-4] 2026-03-25 05:00:48.025748 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)  2026-03-25 05:00:48.025756 | orchestrator | skipping: [testbed-node-5] 2026-03-25 05:00:48.025764 | orchestrator | 2026-03-25 05:00:48.025771 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2026-03-25 05:00:48.025778 | orchestrator | Wednesday 25 March 2026 05:00:43 +0000 (0:00:02.696) 0:00:18.123 ******* 2026-03-25 05:00:48.025786 | orchestrator | skipping: [testbed-node-0] 2026-03-25 05:00:48.025794 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:00:48.025800 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:00:48.025806 | orchestrator | skipping: 
[testbed-node-3]
2026-03-25 05:00:48.025813 | orchestrator | skipping: [testbed-node-4]
2026-03-25 05:00:48.025819 | orchestrator | skipping: [testbed-node-5]
2026-03-25 05:00:48.025825 | orchestrator |
2026-03-25 05:00:48.025831 | orchestrator | TASK [openvswitch : Ensuring config directories exist] *************************
2026-03-25 05:00:48.025838 | orchestrator | Wednesday 25 March 2026 05:00:45 +0000 (0:00:02.150) 0:00:20.274 *******
2026-03-25 05:00:48.025863 | orchestrator | ok: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-25 05:00:48.025876 | orchestrator | ok: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-25 05:00:48.025886 | orchestrator | ok: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-25 05:00:48.025899 | orchestrator | ok: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-25 05:00:48.025906 | orchestrator | ok: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-25 05:00:48.025913 | orchestrator | ok: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-25 05:00:48.025925 | orchestrator | ok: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-25 05:00:50.296041 | orchestrator | ok: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-25 05:00:50.296168 | orchestrator | ok: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-25 05:00:50.296186 | orchestrator | ok: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-25 05:00:50.296198 | orchestrator | ok: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-25 05:00:50.296210 | orchestrator | ok: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-25 05:00:50.296223 | orchestrator |
2026-03-25 05:00:50.296235 | orchestrator | TASK [openvswitch : Copying over config.json files for services] ***************
2026-03-25 05:00:50.296248 | orchestrator | Wednesday 25 March 2026 05:00:48 +0000 (0:00:02.815) 0:00:23.089 *******
2026-03-25 05:00:50.296277 | orchestrator | ok: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-25 05:00:50.296393 | orchestrator | ok: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-25 05:00:50.296409 | orchestrator | ok: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-25 05:00:50.296421 | orchestrator | ok: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-25 05:00:50.296432 | orchestrator | ok: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-25 05:00:50.296444 | orchestrator | ok: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-25 05:00:50.296465 | orchestrator | ok: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-25 05:00:56.319158 | orchestrator | ok: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-25 05:00:56.319263 | orchestrator | ok: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-25 05:00:56.319277 | orchestrator | ok: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-25 05:00:56.319287 | orchestrator | ok: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-25 05:00:56.319297 | orchestrator | ok: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-25 05:00:56.319365 | orchestrator |
2026-03-25 05:00:56.319385 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] ****************************
2026-03-25 05:00:56.319398 | orchestrator | Wednesday 25 March 2026 05:00:51 +0000 (0:00:03.574) 0:00:26.663 *******
2026-03-25 05:00:56.319407 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:00:56.319418 | orchestrator | skipping: [testbed-node-2]
2026-03-25 05:00:56.319426 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:00:56.319482 | orchestrator | skipping: [testbed-node-3]
2026-03-25 05:00:56.319493 | orchestrator | skipping: [testbed-node-4]
2026-03-25 05:00:56.319503 | orchestrator | skipping: [testbed-node-5]
2026-03-25 05:00:56.319512 | orchestrator |
2026-03-25 05:00:56.319521 | orchestrator | TASK [service-check-containers : openvswitch | Check containers] ***************
2026-03-25 05:00:56.319547 | orchestrator | Wednesday 25 March 2026 05:00:54 +0000 (0:00:02.662) 0:00:29.326 *******
2026-03-25 05:00:56.319564 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-25 05:00:56.319576 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-25 05:00:56.319593 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-25 05:00:56.319609 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-25 05:00:56.319635 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-25 05:00:56.319667 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-25 05:01:00.514546 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-25 05:01:00.514650 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-25 05:01:00.514665 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-25 05:01:00.514677 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-25 05:01:00.514712 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-25 05:01:00.514757 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-25 05:01:00.514772 | orchestrator |
2026-03-25 05:01:00.514785 | orchestrator | TASK [service-check-containers : openvswitch | Notify handlers to restart containers] ***
2026-03-25 05:01:00.514797 | orchestrator | Wednesday 25 March 2026 05:00:57 +0000 (0:00:03.566) 0:00:32.892 *******
2026-03-25 05:01:00.514810 | orchestrator | changed: [testbed-node-0] => {
2026-03-25 05:01:00.514822 | orchestrator |  "msg": "Notifying handlers"
2026-03-25 05:01:00.514834 | orchestrator | }
2026-03-25 05:01:00.514845 | orchestrator | changed: [testbed-node-1] => {
2026-03-25 05:01:00.514855 | orchestrator |  "msg": "Notifying handlers"
2026-03-25 05:01:00.514866 | orchestrator | }
2026-03-25 05:01:00.514877 | orchestrator | changed: [testbed-node-2] => {
2026-03-25 05:01:00.514888 | orchestrator |  "msg": "Notifying handlers"
2026-03-25 05:01:00.514899 | orchestrator | }
2026-03-25 05:01:00.514910 | orchestrator | changed: [testbed-node-3] => {
2026-03-25 05:01:00.514920 | orchestrator |  "msg": "Notifying handlers"
2026-03-25 05:01:00.514931 | orchestrator | }
2026-03-25 05:01:00.514942 | orchestrator | changed: [testbed-node-4] => {
2026-03-25 05:01:00.514953 | orchestrator |  "msg": "Notifying handlers"
2026-03-25 05:01:00.514964 | orchestrator | }
2026-03-25 05:01:00.514974 | orchestrator | changed: [testbed-node-5] => {
2026-03-25 05:01:00.514985 | orchestrator |  "msg": "Notifying handlers"
2026-03-25 05:01:00.514996 | orchestrator | }
2026-03-25 05:01:00.515007 | orchestrator |
2026-03-25 05:01:00.515018 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-03-25 05:01:00.515030 | orchestrator | Wednesday 25 March 2026 05:01:00 +0000 (0:00:02.261) 0:00:35.154 *******
2026-03-25 05:01:00.515051 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-25 05:01:00.515083 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-25 05:01:00.515106 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:01:00.515128 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-25 05:01:00.515155 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-25 05:01:00.515181 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-25 05:01:39.602426 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:01:39.602582 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-25 05:01:39.602650 | orchestrator | skipping: [testbed-node-2]
2026-03-25 05:01:39.602674 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-25 05:01:39.602697 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-25 05:01:39.602718 | orchestrator | skipping: [testbed-node-3]
2026-03-25 05:01:39.602741 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-25 05:01:39.602781 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-25 05:01:39.602801 | orchestrator | skipping: [testbed-node-5]
2026-03-25 05:01:39.602850 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-25 05:01:39.602874 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged':
True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-03-25 05:01:39.602912 | orchestrator | skipping: [testbed-node-4] 2026-03-25 05:01:39.602934 | orchestrator | 2026-03-25 05:01:39.602960 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-03-25 05:01:39.602985 | orchestrator | Wednesday 25 March 2026 05:01:02 +0000 (0:00:02.855) 0:00:38.010 ******* 2026-03-25 05:01:39.603009 | orchestrator | 2026-03-25 05:01:39.603032 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-03-25 05:01:39.603053 | orchestrator | Wednesday 25 March 2026 05:01:03 +0000 (0:00:00.561) 0:00:38.571 ******* 2026-03-25 05:01:39.603075 | orchestrator | 2026-03-25 05:01:39.603096 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-03-25 05:01:39.603115 | orchestrator | Wednesday 25 March 2026 05:01:04 +0000 (0:00:00.528) 0:00:39.100 ******* 2026-03-25 05:01:39.603136 | orchestrator | 2026-03-25 05:01:39.603155 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-03-25 05:01:39.603178 | orchestrator | Wednesday 25 March 2026 05:01:04 +0000 (0:00:00.537) 0:00:39.637 ******* 2026-03-25 05:01:39.603199 | orchestrator | 2026-03-25 05:01:39.603221 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-03-25 05:01:39.603240 | orchestrator | Wednesday 25 March 2026 05:01:05 +0000 (0:00:00.809) 0:00:40.446 ******* 2026-03-25 05:01:39.603261 | orchestrator | 2026-03-25 05:01:39.603279 | 
orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-03-25 05:01:39.603298 | orchestrator | Wednesday 25 March 2026 05:01:05 +0000 (0:00:00.532) 0:00:40.979 ******* 2026-03-25 05:01:39.603316 | orchestrator | 2026-03-25 05:01:39.603334 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2026-03-25 05:01:39.603354 | orchestrator | Wednesday 25 March 2026 05:01:06 +0000 (0:00:00.898) 0:00:41.878 ******* 2026-03-25 05:01:39.603416 | orchestrator | changed: [testbed-node-3] 2026-03-25 05:01:39.603440 | orchestrator | changed: [testbed-node-4] 2026-03-25 05:01:39.603459 | orchestrator | changed: [testbed-node-5] 2026-03-25 05:01:39.603476 | orchestrator | changed: [testbed-node-0] 2026-03-25 05:01:39.603494 | orchestrator | changed: [testbed-node-1] 2026-03-25 05:01:39.603514 | orchestrator | changed: [testbed-node-2] 2026-03-25 05:01:39.603533 | orchestrator | 2026-03-25 05:01:39.603552 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] *** 2026-03-25 05:01:39.603572 | orchestrator | Wednesday 25 March 2026 05:01:18 +0000 (0:00:11.721) 0:00:53.599 ******* 2026-03-25 05:01:39.603589 | orchestrator | ok: [testbed-node-0] 2026-03-25 05:01:39.603607 | orchestrator | ok: [testbed-node-1] 2026-03-25 05:01:39.603619 | orchestrator | ok: [testbed-node-2] 2026-03-25 05:01:39.603629 | orchestrator | ok: [testbed-node-3] 2026-03-25 05:01:39.603640 | orchestrator | ok: [testbed-node-4] 2026-03-25 05:01:39.603651 | orchestrator | ok: [testbed-node-5] 2026-03-25 05:01:39.603661 | orchestrator | 2026-03-25 05:01:39.603672 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2026-03-25 05:01:39.603683 | orchestrator | Wednesday 25 March 2026 05:01:20 +0000 (0:00:02.252) 0:00:55.851 ******* 2026-03-25 05:01:39.603694 | orchestrator | changed: [testbed-node-3] 2026-03-25 05:01:39.603705 | orchestrator | 
changed: [testbed-node-4] 2026-03-25 05:01:39.603727 | orchestrator | changed: [testbed-node-5] 2026-03-25 05:01:39.603738 | orchestrator | changed: [testbed-node-0] 2026-03-25 05:01:39.603749 | orchestrator | changed: [testbed-node-1] 2026-03-25 05:01:39.603773 | orchestrator | changed: [testbed-node-2] 2026-03-25 05:01:39.603784 | orchestrator | 2026-03-25 05:01:39.603794 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ******************** 2026-03-25 05:01:39.603805 | orchestrator | Wednesday 25 March 2026 05:01:31 +0000 (0:00:11.192) 0:01:07.043 ******* 2026-03-25 05:01:39.603816 | orchestrator | ok: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'}) 2026-03-25 05:01:39.603828 | orchestrator | ok: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'}) 2026-03-25 05:01:39.603839 | orchestrator | ok: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'}) 2026-03-25 05:01:39.603850 | orchestrator | ok: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'}) 2026-03-25 05:01:39.603861 | orchestrator | ok: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'}) 2026-03-25 05:01:39.603887 | orchestrator | ok: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'}) 2026-03-25 05:01:47.543455 | orchestrator | ok: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'}) 2026-03-25 05:01:47.543558 | orchestrator | ok: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'}) 2026-03-25 05:01:47.543571 | orchestrator | ok: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'}) 2026-03-25 05:01:47.543582 | orchestrator | ok: [testbed-node-4] => (item={'col': 
'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'}) 2026-03-25 05:01:47.543592 | orchestrator | ok: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'}) 2026-03-25 05:01:47.543602 | orchestrator | ok: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'}) 2026-03-25 05:01:47.543612 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-03-25 05:01:47.543621 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-03-25 05:01:47.543631 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-03-25 05:01:47.543640 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-03-25 05:01:47.543650 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-03-25 05:01:47.543659 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-03-25 05:01:47.543669 | orchestrator | 2026-03-25 05:01:47.543680 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] ********************* 2026-03-25 05:01:47.543691 | orchestrator | Wednesday 25 March 2026 05:01:39 +0000 (0:00:07.620) 0:01:14.664 ******* 2026-03-25 05:01:47.543701 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)  2026-03-25 05:01:47.543711 | orchestrator | skipping: [testbed-node-3] 2026-03-25 05:01:47.543722 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)  2026-03-25 05:01:47.543731 | orchestrator | skipping: [testbed-node-4] 2026-03-25 05:01:47.543741 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)  2026-03-25 
05:01:47.543750 | orchestrator | skipping: [testbed-node-5] 2026-03-25 05:01:47.543760 | orchestrator | ok: [testbed-node-0] => (item=br-ex) 2026-03-25 05:01:47.543770 | orchestrator | ok: [testbed-node-2] => (item=br-ex) 2026-03-25 05:01:47.543779 | orchestrator | ok: [testbed-node-1] => (item=br-ex) 2026-03-25 05:01:47.543789 | orchestrator | 2026-03-25 05:01:47.543799 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] ********************* 2026-03-25 05:01:47.543809 | orchestrator | Wednesday 25 March 2026 05:01:42 +0000 (0:00:03.159) 0:01:17.823 ******* 2026-03-25 05:01:47.543842 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])  2026-03-25 05:01:47.543853 | orchestrator | skipping: [testbed-node-3] 2026-03-25 05:01:47.543876 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])  2026-03-25 05:01:47.543896 | orchestrator | skipping: [testbed-node-4] 2026-03-25 05:01:47.543905 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])  2026-03-25 05:01:47.543915 | orchestrator | skipping: [testbed-node-5] 2026-03-25 05:01:47.543925 | orchestrator | ok: [testbed-node-0] => (item=['br-ex', 'vxlan0']) 2026-03-25 05:01:47.543934 | orchestrator | ok: [testbed-node-1] => (item=['br-ex', 'vxlan0']) 2026-03-25 05:01:47.543944 | orchestrator | ok: [testbed-node-2] => (item=['br-ex', 'vxlan0']) 2026-03-25 05:01:47.543953 | orchestrator | 2026-03-25 05:01:47.543963 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-25 05:01:47.543976 | orchestrator | testbed-node-0 : ok=15  changed=4  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-25 05:01:47.543989 | orchestrator | testbed-node-1 : ok=15  changed=4  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-25 05:01:47.544013 | orchestrator | testbed-node-2 : ok=15  changed=4  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-25 05:01:47.544025 | 
orchestrator | testbed-node-3 : ok=13  changed=4  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-25 05:01:47.544037 | orchestrator | testbed-node-4 : ok=13  changed=4  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-25 05:01:47.544054 | orchestrator | testbed-node-5 : ok=13  changed=4  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-25 05:01:47.544070 | orchestrator | 2026-03-25 05:01:47.544087 | orchestrator | 2026-03-25 05:01:47.544105 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-25 05:01:47.544120 | orchestrator | Wednesday 25 March 2026 05:01:47 +0000 (0:00:04.293) 0:01:22.117 ******* 2026-03-25 05:01:47.544135 | orchestrator | =============================================================================== 2026-03-25 05:01:47.544151 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------ 11.72s 2026-03-25 05:01:47.544186 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 11.19s 2026-03-25 05:01:47.544205 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 7.62s 2026-03-25 05:01:47.544217 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 4.29s 2026-03-25 05:01:47.544227 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 3.87s 2026-03-25 05:01:47.544236 | orchestrator | openvswitch : Copying over config.json files for services --------------- 3.57s 2026-03-25 05:01:47.544246 | orchestrator | service-check-containers : openvswitch | Check containers --------------- 3.57s 2026-03-25 05:01:47.544256 | orchestrator | openvswitch : include_tasks --------------------------------------------- 3.43s 2026-03-25 05:01:47.544265 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 3.16s 2026-03-25 05:01:47.544275 | orchestrator | 
service-check-containers : Include tasks -------------------------------- 2.86s 2026-03-25 05:01:47.544285 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 2.81s 2026-03-25 05:01:47.544294 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 2.77s 2026-03-25 05:01:47.544304 | orchestrator | Group hosts based on Kolla action --------------------------------------- 2.75s 2026-03-25 05:01:47.544313 | orchestrator | module-load : Drop module persistence ----------------------------------- 2.70s 2026-03-25 05:01:47.544323 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 2.66s 2026-03-25 05:01:47.544342 | orchestrator | module-load : Load modules ---------------------------------------------- 2.29s 2026-03-25 05:01:47.544352 | orchestrator | service-check-containers : openvswitch | Notify handlers to restart containers --- 2.26s 2026-03-25 05:01:47.544361 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 2.25s 2026-03-25 05:01:47.544371 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.15s 2026-03-25 05:01:47.544380 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 2.15s 2026-03-25 05:01:47.841824 | orchestrator | + osism apply -a upgrade ovn 2026-03-25 05:01:49.934427 | orchestrator | 2026-03-25 05:01:49 | INFO  | Task 3f431b18-9600-436f-8950-2b04c40cc8b3 (ovn) was prepared for execution. 2026-03-25 05:01:49.934532 | orchestrator | 2026-03-25 05:01:49 | INFO  | It takes a moment until task 3f431b18-9600-436f-8950-2b04c40cc8b3 (ovn) has been started and output is visible here. 
2026-03-25 05:02:12.799007 | orchestrator | 2026-03-25 05:02:12.799081 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-25 05:02:12.799088 | orchestrator | 2026-03-25 05:02:12.799093 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-25 05:02:12.799097 | orchestrator | Wednesday 25 March 2026 05:01:56 +0000 (0:00:01.955) 0:00:01.955 ******* 2026-03-25 05:02:12.799101 | orchestrator | ok: [testbed-node-0] 2026-03-25 05:02:12.799106 | orchestrator | ok: [testbed-node-1] 2026-03-25 05:02:12.799110 | orchestrator | ok: [testbed-node-2] 2026-03-25 05:02:12.799114 | orchestrator | ok: [testbed-node-3] 2026-03-25 05:02:12.799118 | orchestrator | ok: [testbed-node-4] 2026-03-25 05:02:12.799122 | orchestrator | ok: [testbed-node-5] 2026-03-25 05:02:12.799126 | orchestrator | 2026-03-25 05:02:12.799130 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-25 05:02:12.799134 | orchestrator | Wednesday 25 March 2026 05:01:58 +0000 (0:00:02.606) 0:00:04.562 ******* 2026-03-25 05:02:12.799138 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2026-03-25 05:02:12.799142 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2026-03-25 05:02:12.799146 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2026-03-25 05:02:12.799150 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2026-03-25 05:02:12.799154 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2026-03-25 05:02:12.799158 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2026-03-25 05:02:12.799161 | orchestrator | 2026-03-25 05:02:12.799165 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2026-03-25 05:02:12.799169 | orchestrator | 2026-03-25 05:02:12.799173 | orchestrator | TASK [ovn-controller : include_tasks] 
****************************************** 2026-03-25 05:02:12.799177 | orchestrator | Wednesday 25 March 2026 05:02:01 +0000 (0:00:03.052) 0:00:07.614 ******* 2026-03-25 05:02:12.799192 | orchestrator | included: /ansible/roles/ovn-controller/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-25 05:02:12.799197 | orchestrator | 2026-03-25 05:02:12.799201 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2026-03-25 05:02:12.799205 | orchestrator | Wednesday 25 March 2026 05:02:05 +0000 (0:00:03.489) 0:00:11.104 ******* 2026-03-25 05:02:12.799210 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 05:02:12.799216 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 05:02:12.799235 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', 
'/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 05:02:12.799240 | orchestrator | ok: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 05:02:12.799244 | orchestrator | ok: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 05:02:12.799257 | orchestrator | ok: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 05:02:12.799261 | orchestrator | 2026-03-25 05:02:12.799266 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2026-03-25 05:02:12.799269 | orchestrator | Wednesday 25 March 2026 05:02:07 +0000 (0:00:02.368) 0:00:13.473 ******* 2026-03-25 05:02:12.799273 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 05:02:12.799277 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 05:02:12.799284 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 05:02:12.799288 | orchestrator | ok: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 05:02:12.799295 | orchestrator | ok: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 
'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 05:02:12.799299 | orchestrator | ok: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 05:02:12.799303 | orchestrator | 2026-03-25 05:02:12.799307 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2026-03-25 05:02:12.799311 | orchestrator | Wednesday 25 March 2026 05:02:10 +0000 (0:00:02.683) 0:00:16.157 ******* 2026-03-25 05:02:12.799315 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 05:02:12.799319 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 05:02:12.799325 | orchestrator | ok: [testbed-node-2] => 
(item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 05:02:20.567353 | orchestrator | ok: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 05:02:20.567533 | orchestrator | ok: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 05:02:20.567577 | orchestrator | ok: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 05:02:20.567625 | orchestrator | 2026-03-25 05:02:20.567643 | orchestrator | TASK [ovn-controller : Copying over systemd override] 
**************************
2026-03-25 05:02:20.567659 | orchestrator | Wednesday 25 March 2026 05:02:12 +0000 (0:00:02.332) 0:00:18.489 *******
2026-03-25 05:02:20.567674 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 05:02:20.567689 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 05:02:20.567736 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 05:02:20.567752 | orchestrator | ok: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 05:02:20.567765 | orchestrator | ok: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 05:02:20.567794 | orchestrator | ok: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 05:02:20.567804 | orchestrator |
2026-03-25 05:02:20.567813 | orchestrator | TASK [service-check-containers : ovn_controller | Check containers] ************
2026-03-25 05:02:20.567822 | orchestrator | Wednesday 25 March 2026 05:02:15 +0000 (0:00:03.099) 0:00:21.589 *******
2026-03-25 05:02:20.567832 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 05:02:20.567851 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 05:02:20.567870 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 05:02:20.567882 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 05:02:20.567893 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 05:02:20.567903 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 05:02:20.567914 | orchestrator |
2026-03-25 05:02:20.567925 | orchestrator | TASK [service-check-containers : ovn_controller | Notify handlers to restart containers] ***
2026-03-25 05:02:20.567936 | orchestrator | Wednesday 25 March 2026 05:02:18 +0000 (0:00:02.594) 0:00:24.184 *******
2026-03-25 05:02:20.567944 | orchestrator | changed: [testbed-node-0] => {
2026-03-25 05:02:20.567955 | orchestrator |     "msg": "Notifying handlers"
2026-03-25 05:02:20.567963 | orchestrator | }
2026-03-25 05:02:20.567972 | orchestrator | changed: [testbed-node-1] => {
2026-03-25 05:02:20.567981 | orchestrator |     "msg": "Notifying handlers"
2026-03-25 05:02:20.567989 | orchestrator | }
2026-03-25 05:02:20.567998 | orchestrator | changed: [testbed-node-2] => {
2026-03-25 05:02:20.568006 | orchestrator |     "msg": "Notifying handlers"
2026-03-25 05:02:20.568015 | orchestrator | }
2026-03-25 05:02:20.568023 | orchestrator | changed: [testbed-node-3] => {
2026-03-25 05:02:20.568032 | orchestrator |     "msg": "Notifying handlers"
2026-03-25 05:02:20.568040 | orchestrator | }
2026-03-25 05:02:20.568049 | orchestrator | changed: [testbed-node-4] => {
2026-03-25 05:02:20.568057 | orchestrator |     "msg": "Notifying handlers"
2026-03-25 05:02:20.568066 | orchestrator | }
2026-03-25 05:02:20.568074 | orchestrator | changed: [testbed-node-5] => {
2026-03-25 05:02:20.568083 | orchestrator |     "msg": "Notifying handlers"
2026-03-25 05:02:20.568091 | orchestrator | }
2026-03-25 05:02:20.568100 | orchestrator |
2026-03-25 05:02:20.568109 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-03-25 05:02:20.568117 | orchestrator | Wednesday 25 March 2026 05:02:20 +0000 (0:00:01.933) 0:00:26.117 *******
2026-03-25 05:02:20.568134 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 05:02:49.775530 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:02:49.775652 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 05:02:49.775674 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:02:49.775704 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 05:02:49.775717 | orchestrator | skipping: [testbed-node-2]
2026-03-25 05:02:49.775728 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 05:02:49.775740 | orchestrator | skipping: [testbed-node-3]
2026-03-25 05:02:49.775751 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 05:02:49.775762 | orchestrator | skipping: [testbed-node-4]
2026-03-25 05:02:49.775773 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 05:02:49.775784 | orchestrator | skipping: [testbed-node-5]
2026-03-25 05:02:49.775795 | orchestrator |
2026-03-25 05:02:49.775807 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ********************
2026-03-25 05:02:49.775820 | orchestrator | Wednesday 25 March 2026 05:02:23 +0000 (0:00:02.660) 0:00:28.778 *******
2026-03-25 05:02:49.775831 | orchestrator | ok: [testbed-node-0]
2026-03-25 05:02:49.775842 | orchestrator | ok: [testbed-node-1]
2026-03-25 05:02:49.775853 | orchestrator | ok: [testbed-node-3]
2026-03-25 05:02:49.775863 | orchestrator | ok: [testbed-node-2]
2026-03-25 05:02:49.775874 | orchestrator | ok: [testbed-node-4]
2026-03-25 05:02:49.775884 | orchestrator | ok: [testbed-node-5]
2026-03-25 05:02:49.775895 | orchestrator |
2026-03-25 05:02:49.775906 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] *********************************
2026-03-25 05:02:49.775916 | orchestrator | Wednesday 25 March 2026 05:02:26 +0000 (0:00:03.704) 0:00:32.482 *******
2026-03-25 05:02:49.775927 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'})
2026-03-25 05:02:49.775939 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'})
2026-03-25 05:02:49.775976 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'})
2026-03-25 05:02:49.775988 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'})
2026-03-25 05:02:49.775998 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'})
2026-03-25 05:02:49.776009 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'})
2026-03-25 05:02:49.776022 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-03-25 05:02:49.776035 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-03-25 05:02:49.776047 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-03-25 05:02:49.776059 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-03-25 05:02:49.776071 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-03-25 05:02:49.776100 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-03-25 05:02:49.776114 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'})
2026-03-25 05:02:49.776129 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'})
2026-03-25 05:02:49.776142 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'})
2026-03-25 05:02:49.776154 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'})
2026-03-25 05:02:49.776173 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'})
2026-03-25 05:02:49.776186 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'})
2026-03-25 05:02:49.776199 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-03-25 05:02:49.776211 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-03-25 05:02:49.776223 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-03-25 05:02:49.776236 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-03-25 05:02:49.776248 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-03-25 05:02:49.776260 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-03-25 05:02:49.776272 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-03-25 05:02:49.776284 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-03-25 05:02:49.776296 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-03-25 05:02:49.776309 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-03-25 05:02:49.776322 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-03-25 05:02:49.776334 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-03-25 05:02:49.776347 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-03-25 05:02:49.776359 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-03-25 05:02:49.776381 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-03-25 05:02:49.776393 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-03-25 05:02:49.776403 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-03-25 05:02:49.776414 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-03-25 05:02:49.776424 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2026-03-25 05:02:49.776435 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2026-03-25 05:02:49.776446 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2026-03-25 05:02:49.776485 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2026-03-25 05:02:49.776505 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2026-03-25 05:02:49.776525 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2026-03-25 05:02:49.776544 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'})
2026-03-25 05:02:49.776570 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'})
2026-03-25 05:02:49.776581 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'})
2026-03-25 05:02:49.776591 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'})
2026-03-25 05:02:49.776602 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'})
2026-03-25 05:02:49.776621 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'})
2026-03-25 05:05:38.114244 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2026-03-25 05:05:38.114358 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2026-03-25 05:05:38.114373 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2026-03-25 05:05:38.114386 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2026-03-25 05:05:38.114397 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2026-03-25 05:05:38.114425 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2026-03-25 05:05:38.114437 | orchestrator |
2026-03-25 05:05:38.114449 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-03-25 05:05:38.114460 | orchestrator | Wednesday 25 March 2026 05:02:46 +0000 (0:00:19.769) 0:00:52.251 *******
2026-03-25 05:05:38.114470 | orchestrator |
2026-03-25 05:05:38.114481 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-03-25 05:05:38.114492 | orchestrator | Wednesday 25 March 2026 05:02:47 +0000 (0:00:00.500) 0:00:52.751 *******
2026-03-25 05:05:38.114502 | orchestrator |
2026-03-25 05:05:38.114513 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-03-25 05:05:38.114524 | orchestrator | Wednesday 25 March 2026 05:02:47 +0000 (0:00:00.462) 0:00:53.214 *******
2026-03-25 05:05:38.114535 | orchestrator |
2026-03-25 05:05:38.114569 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-03-25 05:05:38.114580 | orchestrator | Wednesday 25 March 2026 05:02:47 +0000 (0:00:00.466) 0:00:53.680 *******
2026-03-25 05:05:38.114591 | orchestrator |
2026-03-25 05:05:38.114602 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-03-25 05:05:38.114612 | orchestrator | Wednesday 25 March 2026 05:02:48 +0000 (0:00:00.478) 0:00:54.159 *******
2026-03-25 05:05:38.114623 | orchestrator |
2026-03-25 05:05:38.114633 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-03-25 05:05:38.114644 | orchestrator | Wednesday 25 March 2026 05:02:48 +0000 (0:00:00.433) 0:00:54.593 *******
2026-03-25 05:05:38.114654 | orchestrator |
2026-03-25 05:05:38.114665 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************
2026-03-25 05:05:38.114676 | orchestrator | Wednesday 25 March 2026 05:02:49 +0000 (0:00:00.828) 0:00:55.421 *******
2026-03-25 05:05:38.114686 | orchestrator |
2026-03-25 05:05:38.114697 | orchestrator | STILL ALIVE [task 'ovn-controller : Restart ovn-controller container' is running] ***
2026-03-25 05:05:38.114708 | orchestrator | changed: [testbed-node-3]
2026-03-25 05:05:38.114721 | orchestrator | changed: [testbed-node-5]
2026-03-25 05:05:38.114731 | orchestrator | changed: [testbed-node-4]
2026-03-25 05:05:38.114742 | orchestrator | changed: [testbed-node-0]
2026-03-25 05:05:38.114752 | orchestrator | changed: [testbed-node-2]
2026-03-25 05:05:38.114763 | orchestrator | changed: [testbed-node-1]
2026-03-25 05:05:38.114773 | orchestrator |
2026-03-25 05:05:38.114784 | orchestrator | PLAY [Apply role ovn-db] *******************************************************
2026-03-25 05:05:38.114795 | orchestrator |
2026-03-25 05:05:38.114806 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2026-03-25 05:05:38.114817 | orchestrator | Wednesday 25 March 2026 05:05:01 +0000 (0:02:11.800) 0:03:07.222 *******
2026-03-25 05:05:38.114827 | orchestrator | included: /ansible/roles/ovn-db/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-25 05:05:38.114838 | orchestrator |
2026-03-25 05:05:38.114849 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2026-03-25 05:05:38.114859 | orchestrator | Wednesday 25 March 2026 05:05:03 +0000 (0:00:01.990) 0:03:09.212 *******
2026-03-25 05:05:38.114870 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-25 05:05:38.114881 | orchestrator |
2026-03-25 05:05:38.114891 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] *************
2026-03-25 05:05:38.114902 | orchestrator | Wednesday 25 March 2026 05:05:05 +0000 (0:00:01.991) 0:03:11.204 *******
2026-03-25 05:05:38.114913 | orchestrator | ok: [testbed-node-1]
2026-03-25 05:05:38.114924 | orchestrator | ok: [testbed-node-0]
2026-03-25 05:05:38.114934 | orchestrator | ok: [testbed-node-2]
2026-03-25 05:05:38.114945 | orchestrator |
2026-03-25 05:05:38.114955 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] ***************
2026-03-25 05:05:38.114966 | orchestrator | Wednesday 25 March 2026 05:05:07 +0000 (0:00:01.859) 0:03:13.063 *******
2026-03-25 05:05:38.114976 | orchestrator | ok: [testbed-node-0]
2026-03-25 05:05:38.114987 | orchestrator | ok: [testbed-node-1]
2026-03-25 05:05:38.114998 | orchestrator | ok: [testbed-node-2]
2026-03-25 05:05:38.115008 | orchestrator |
2026-03-25 05:05:38.115019 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] ***************
2026-03-25 05:05:38.115030 | orchestrator | Wednesday 25 March 2026 05:05:08 +0000 (0:00:01.362) 0:03:14.425 *******
2026-03-25 05:05:38.115040 | orchestrator | ok: [testbed-node-0]
2026-03-25 05:05:38.115051 | orchestrator | ok: [testbed-node-1]
2026-03-25 05:05:38.115061 | orchestrator | ok: [testbed-node-2]
2026-03-25 05:05:38.115072 | orchestrator |
2026-03-25 05:05:38.115083 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] *******
2026-03-25 05:05:38.115112 | orchestrator | Wednesday 25 March 2026 05:05:10 +0000 (0:00:01.424) 0:03:15.850 *******
2026-03-25 05:05:38.115150 | orchestrator | ok: [testbed-node-0]
2026-03-25 05:05:38.115166 | orchestrator | ok: [testbed-node-1]
2026-03-25 05:05:38.115185 | orchestrator | ok: [testbed-node-2]
2026-03-25 05:05:38.115197 | orchestrator |
2026-03-25 05:05:38.115208 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] *******
2026-03-25 05:05:38.115219 | orchestrator | Wednesday 25 March 2026 05:05:11 +0000 (0:00:01.458) 0:03:17.308 *******
2026-03-25 05:05:38.115230 | orchestrator | ok: [testbed-node-0]
2026-03-25 05:05:38.115258 | orchestrator | ok: [testbed-node-1]
2026-03-25 05:05:38.115270 | orchestrator | ok: [testbed-node-2]
2026-03-25 05:05:38.115281 | orchestrator |
2026-03-25 05:05:38.115292 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************
2026-03-25 05:05:38.115302 | orchestrator | Wednesday 25 March 2026 05:05:12 +0000 (0:00:01.300) 0:03:18.609 *******
2026-03-25 05:05:38.115313 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:05:38.115324 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:05:38.115335 | orchestrator | skipping: [testbed-node-2]
2026-03-25 05:05:38.115346 | orchestrator |
2026-03-25 05:05:38.115356 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] *****************************
2026-03-25 05:05:38.115367 | orchestrator | Wednesday 25 March 2026 05:05:14 +0000 (0:00:01.367) 0:03:19.977 *******
2026-03-25 05:05:38.115378 | orchestrator | ok: [testbed-node-1]
2026-03-25 05:05:38.115388 | orchestrator | ok: [testbed-node-0]
2026-03-25 05:05:38.115399 | orchestrator | ok: [testbed-node-2]
2026-03-25 05:05:38.115409 | orchestrator |
2026-03-25 05:05:38.115420 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] *************
2026-03-25 05:05:38.115437 | orchestrator | Wednesday 25 March 2026 05:05:16 +0000 (0:00:01.755) 0:03:21.733 *******
2026-03-25 05:05:38.115448 | orchestrator | ok: [testbed-node-0]
2026-03-25 05:05:38.115459 | orchestrator | ok: [testbed-node-1]
2026-03-25 05:05:38.115469 | orchestrator | ok: [testbed-node-2]
2026-03-25 05:05:38.115480 | orchestrator |
2026-03-25 05:05:38.115490 | orchestrator | TASK [ovn-db : Get OVN NB database information] ********************************
2026-03-25 05:05:38.115501 | orchestrator | Wednesday 25 March 2026 05:05:17 +0000 (0:00:01.662) 0:03:23.395 *******
2026-03-25 05:05:38.115512 | orchestrator | ok: [testbed-node-0]
2026-03-25 05:05:38.115522 | orchestrator | ok: [testbed-node-1]
2026-03-25 05:05:38.115533 | orchestrator | ok: [testbed-node-2]
2026-03-25 05:05:38.115543 | orchestrator |
2026-03-25 05:05:38.115554 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] **************
2026-03-25 05:05:38.115565 | orchestrator | Wednesday 25 March 2026 05:05:19 +0000 (0:00:01.928) 0:03:25.324 *******
2026-03-25 05:05:38.115575 | orchestrator | ok: [testbed-node-0]
2026-03-25 05:05:38.115586 | orchestrator | ok: [testbed-node-1]
2026-03-25 05:05:38.115597 | orchestrator | ok: [testbed-node-2]
2026-03-25 05:05:38.115607 | orchestrator |
2026-03-25 05:05:38.115618 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] *****************
2026-03-25 05:05:38.115629 | orchestrator | Wednesday 25 March 2026 05:05:21 +0000 (0:00:01.392) 0:03:26.717 *******
2026-03-25 05:05:38.115639 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:05:38.115650 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:05:38.115661 | orchestrator | skipping: [testbed-node-2]
2026-03-25 05:05:38.115671 | orchestrator |
2026-03-25 05:05:38.115682 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************
2026-03-25 05:05:38.115693 | orchestrator | Wednesday 25 March 2026 05:05:22 +0000 (0:00:01.372) 0:03:28.090 *******
2026-03-25 05:05:38.115703 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:05:38.115714 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:05:38.115725 | orchestrator | skipping: [testbed-node-2]
2026-03-25 05:05:38.115735 | orchestrator |
2026-03-25 05:05:38.115746 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] *****************************
2026-03-25 05:05:38.115757 | orchestrator | Wednesday 25 March 2026 05:05:23 +0000 (0:00:01.354) 0:03:29.445 *******
2026-03-25 05:05:38.115767 | orchestrator | ok: [testbed-node-0]
2026-03-25 05:05:38.115778 | orchestrator | ok: [testbed-node-1]
2026-03-25 05:05:38.115789 | orchestrator | ok: [testbed-node-2]
2026-03-25 05:05:38.115800 | orchestrator |
2026-03-25 05:05:38.115811 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] *************
2026-03-25 05:05:38.115828 | orchestrator | Wednesday 25 March 2026 05:05:25 +0000 (0:00:01.781) 0:03:31.226 *******
2026-03-25 05:05:38.115839 | orchestrator | ok: [testbed-node-0]
2026-03-25 05:05:38.115849 | orchestrator | ok: [testbed-node-1]
2026-03-25 05:05:38.115860 | orchestrator | ok: [testbed-node-2]
2026-03-25 05:05:38.115870 | orchestrator |
2026-03-25 05:05:38.115881 | orchestrator | TASK [ovn-db : Get OVN SB database information] ********************************
2026-03-25 05:05:38.115892 | orchestrator | Wednesday 25 March 2026 05:05:26 +0000 (0:00:01.407) 0:03:32.634 *******
2026-03-25 05:05:38.115902 | orchestrator | ok: [testbed-node-0]
2026-03-25 05:05:38.115913 | orchestrator | ok: [testbed-node-1]
2026-03-25 05:05:38.115923 | orchestrator | ok: [testbed-node-2]
2026-03-25 05:05:38.115934 | orchestrator |
2026-03-25 05:05:38.115945 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] **************
2026-03-25 05:05:38.115955 | orchestrator | Wednesday 25 March 2026 05:05:29 +0000 (0:00:02.101) 0:03:34.736 *******
2026-03-25 05:05:38.115966 | orchestrator | ok: [testbed-node-0]
2026-03-25 05:05:38.115977 | orchestrator | ok: [testbed-node-1]
2026-03-25 05:05:38.116064 | orchestrator | ok: [testbed-node-2]
2026-03-25 05:05:38.116076 | orchestrator |
2026-03-25 05:05:38.116087 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] *****************
2026-03-25 05:05:38.116098 | orchestrator | Wednesday 25 March 2026 05:05:30 +0000 (0:00:01.421) 0:03:36.157 *******
2026-03-25 05:05:38.116109 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:05:38.116119 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:05:38.116148 | orchestrator | skipping: [testbed-node-2]
2026-03-25 05:05:38.116159 | orchestrator |
2026-03-25 05:05:38.116170 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2026-03-25 05:05:38.116181 | orchestrator | Wednesday 25 March 2026 05:05:31 +0000 (0:00:01.542) 0:03:37.700 *******
2026-03-25 05:05:38.116192 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:05:38.116202 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:05:38.116213 | orchestrator | skipping: [testbed-node-2]
2026-03-25 05:05:38.116224 | orchestrator |
2026-03-25 05:05:38.116235 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ******************************
2026-03-25 05:05:38.116245 | orchestrator | Wednesday 25 March 2026 05:05:33 +0000 (0:00:01.706) 0:03:39.406 *******
2026-03-25 05:05:38.116269 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 05:05:44.350649 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 05:05:44.350763 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 05:05:44.350802 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 05:05:44.350815 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 05:05:44.350827 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 05:05:44.350838 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 05:05:44.350850 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 05:05:44.350879 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 05:05:44.350898 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 05:05:44.350910 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 05:05:44.350928 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 05:05:44.350940 | orchestrator |
2026-03-25 05:05:44.350953 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ********************
2026-03-25 05:05:44.350965 | orchestrator | Wednesday 25 March 2026 05:05:38 +0000 (0:00:04.393) 0:03:43.800 *******
2026-03-25 05:05:44.350977 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 05:05:44.350989 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 05:05:44.351000 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-25 05:05:44.351011 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB':
'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 05:05:44.351030 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 05:05:59.197435 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 05:05:59.197569 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 05:05:59.197585 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 05:05:59.197599 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 05:05:59.197610 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 05:05:59.197621 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 
'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 05:05:59.197633 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 05:05:59.197645 | orchestrator | 2026-03-25 05:05:59.197658 | orchestrator | TASK [ovn-db : Ensure configuration for relays exists] ************************* 2026-03-25 05:05:59.197670 | orchestrator | Wednesday 25 March 2026 05:05:44 +0000 (0:00:06.233) 0:03:50.034 ******* 2026-03-25 05:05:59.197682 | orchestrator | included: /ansible/roles/ovn-db/tasks/config-relay.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=1) 2026-03-25 05:05:59.197701 | orchestrator | 2026-03-25 05:05:59.197712 | orchestrator | TASK [ovn-db : Ensuring config directories exist for OVN relay containers] ***** 2026-03-25 05:05:59.197723 | orchestrator | Wednesday 25 March 2026 05:05:46 +0000 (0:00:01.933) 0:03:51.968 ******* 2026-03-25 05:05:59.197734 | orchestrator | changed: [testbed-node-0] 2026-03-25 05:05:59.197746 | orchestrator | changed: [testbed-node-1] 2026-03-25 05:05:59.197777 | orchestrator | changed: [testbed-node-2] 2026-03-25 05:05:59.197790 | orchestrator | 2026-03-25 05:05:59.197801 | orchestrator | TASK [ovn-db : Copying over config.json files for OVN relay services] ********** 2026-03-25 05:05:59.197811 | orchestrator | Wednesday 25 March 2026 05:05:48 +0000 
(0:00:01.802) 0:03:53.771 ******* 2026-03-25 05:05:59.197822 | orchestrator | changed: [testbed-node-1] 2026-03-25 05:05:59.197833 | orchestrator | changed: [testbed-node-0] 2026-03-25 05:05:59.197844 | orchestrator | changed: [testbed-node-2] 2026-03-25 05:05:59.197855 | orchestrator | 2026-03-25 05:05:59.197866 | orchestrator | TASK [ovn-db : Generate config files for OVN relay services] ******************* 2026-03-25 05:05:59.197876 | orchestrator | Wednesday 25 March 2026 05:05:50 +0000 (0:00:02.637) 0:03:56.408 ******* 2026-03-25 05:05:59.197887 | orchestrator | changed: [testbed-node-0] 2026-03-25 05:05:59.197898 | orchestrator | changed: [testbed-node-1] 2026-03-25 05:05:59.197909 | orchestrator | changed: [testbed-node-2] 2026-03-25 05:05:59.197920 | orchestrator | 2026-03-25 05:05:59.197931 | orchestrator | TASK [service-check-containers : ovn_db | Check containers] ******************** 2026-03-25 05:05:59.197942 | orchestrator | Wednesday 25 March 2026 05:05:53 +0000 (0:00:02.856) 0:03:59.264 ******* 2026-03-25 05:05:59.197957 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 05:05:59.197972 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 
'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 05:05:59.197987 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 05:05:59.198000 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 05:05:59.198013 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 05:05:59.198176 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 05:05:59.198211 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 05:06:03.853718 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 05:06:03.853812 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 
'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 05:06:03.853824 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 05:06:03.853832 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 05:06:03.853840 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 05:06:03.853866 | orchestrator | 2026-03-25 05:06:03.853875 | orchestrator | TASK [service-check-containers : 
ovn_db | Notify handlers to restart containers] *** 2026-03-25 05:06:03.853883 | orchestrator | Wednesday 25 March 2026 05:05:59 +0000 (0:00:05.611) 0:04:04.876 ******* 2026-03-25 05:06:03.853892 | orchestrator | changed: [testbed-node-0] => { 2026-03-25 05:06:03.853900 | orchestrator |  "msg": "Notifying handlers" 2026-03-25 05:06:03.853908 | orchestrator | } 2026-03-25 05:06:03.853915 | orchestrator | changed: [testbed-node-1] => { 2026-03-25 05:06:03.853922 | orchestrator |  "msg": "Notifying handlers" 2026-03-25 05:06:03.853929 | orchestrator | } 2026-03-25 05:06:03.853936 | orchestrator | changed: [testbed-node-2] => { 2026-03-25 05:06:03.853943 | orchestrator |  "msg": "Notifying handlers" 2026-03-25 05:06:03.853950 | orchestrator | } 2026-03-25 05:06:03.853957 | orchestrator | 2026-03-25 05:06:03.853965 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-03-25 05:06:03.853972 | orchestrator | Wednesday 25 March 2026 05:06:00 +0000 (0:00:01.492) 0:04:06.369 ******* 2026-03-25 05:06:03.853992 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 05:06:03.854059 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 
'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 05:06:03.854071 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 05:06:03.854079 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 05:06:03.854105 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}})  2026-03-25 05:06:03.854120 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 05:06:03.854128 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 05:06:03.854135 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 05:06:03.854147 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 
'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-25 05:06:03.854185 | orchestrator | included: /ansible/roles/service-check-containers/tasks/iterated.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-25 05:07:33.755627 | orchestrator | 2026-03-25 05:07:33.755717 | orchestrator | TASK [service-check-containers : ovn_db | Check containers with iteration] ***** 2026-03-25 05:07:33.755727 | orchestrator | Wednesday 25 March 2026 05:06:03 +0000 (0:00:03.175) 0:04:09.544 ******* 2026-03-25 05:07:33.755734 | orchestrator | changed: [testbed-node-0] => (item=[1]) 2026-03-25 05:07:33.755740 | orchestrator | changed: [testbed-node-1] => (item=[1]) 2026-03-25 05:07:33.755746 | orchestrator | changed: [testbed-node-2] => (item=[1]) 2026-03-25 05:07:33.755752 | orchestrator | 2026-03-25 05:07:33.755758 | orchestrator | TASK [service-check-containers : ovn_db | Notify handlers to restart containers] *** 2026-03-25 05:07:33.755765 | orchestrator | Wednesday 25 March 2026 05:06:06 +0000 (0:00:02.293) 0:04:11.837 ******* 2026-03-25 05:07:33.755771 | orchestrator | changed: [testbed-node-0] => { 2026-03-25 05:07:33.755778 | orchestrator |  "msg": "Notifying handlers" 2026-03-25 05:07:33.755784 | orchestrator | } 
2026-03-25 05:07:33.755790 | orchestrator | changed: [testbed-node-1] => {
2026-03-25 05:07:33.755808 | orchestrator |  "msg": "Notifying handlers"
2026-03-25 05:07:33.755814 | orchestrator | }
2026-03-25 05:07:33.755826 | orchestrator | changed: [testbed-node-2] => {
2026-03-25 05:07:33.755851 | orchestrator |  "msg": "Notifying handlers"
2026-03-25 05:07:33.755857 | orchestrator | }
2026-03-25 05:07:33.755863 | orchestrator |
2026-03-25 05:07:33.755869 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-03-25 05:07:33.755875 | orchestrator | Wednesday 25 March 2026 05:06:07 +0000 (0:00:01.362) 0:04:13.200 *******
2026-03-25 05:07:33.755881 | orchestrator |
2026-03-25 05:07:33.755887 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-03-25 05:07:33.755893 | orchestrator | Wednesday 25 March 2026 05:06:07 +0000 (0:00:00.454) 0:04:13.654 *******
2026-03-25 05:07:33.755899 | orchestrator |
2026-03-25 05:07:33.755904 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-03-25 05:07:33.755910 | orchestrator | Wednesday 25 March 2026 05:06:08 +0000 (0:00:00.471) 0:04:14.126 *******
2026-03-25 05:07:33.755916 | orchestrator |
2026-03-25 05:07:33.755921 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] *************************
2026-03-25 05:07:33.755927 | orchestrator | Wednesday 25 March 2026 05:06:09 +0000 (0:00:01.049) 0:04:15.176 *******
2026-03-25 05:07:33.755933 | orchestrator | changed: [testbed-node-1]
2026-03-25 05:07:33.755939 | orchestrator | changed: [testbed-node-2]
2026-03-25 05:07:33.755945 | orchestrator | changed: [testbed-node-0]
2026-03-25 05:07:33.755950 | orchestrator |
2026-03-25 05:07:33.755956 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] *************************
2026-03-25 05:07:33.755962 | orchestrator | Wednesday 25 March 2026 05:06:26 +0000 (0:00:16.921) 0:04:32.098 *******
2026-03-25 05:07:33.755967 | orchestrator | changed: [testbed-node-0]
2026-03-25 05:07:33.755973 | orchestrator | changed: [testbed-node-1]
2026-03-25 05:07:33.756008 | orchestrator | changed: [testbed-node-2]
2026-03-25 05:07:33.756014 | orchestrator |
2026-03-25 05:07:33.756020 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db-relay container] *******************
2026-03-25 05:07:33.756026 | orchestrator | Wednesday 25 March 2026 05:06:43 +0000 (0:00:16.900) 0:04:48.998 *******
2026-03-25 05:07:33.756032 | orchestrator | changed: [testbed-node-2] => (item=1)
2026-03-25 05:07:33.756038 | orchestrator | changed: [testbed-node-0] => (item=1)
2026-03-25 05:07:33.756044 | orchestrator | changed: [testbed-node-1] => (item=1)
2026-03-25 05:07:33.756049 | orchestrator |
2026-03-25 05:07:33.756055 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************
2026-03-25 05:07:33.756061 | orchestrator | Wednesday 25 March 2026 05:06:55 +0000 (0:00:11.904) 0:05:00.902 *******
2026-03-25 05:07:33.756066 | orchestrator | changed: [testbed-node-0]
2026-03-25 05:07:33.756072 | orchestrator | changed: [testbed-node-1]
2026-03-25 05:07:33.756078 | orchestrator | changed: [testbed-node-2]
2026-03-25 05:07:33.756084 | orchestrator |
2026-03-25 05:07:33.756089 | orchestrator | TASK [ovn-db : Wait for leader election] ***************************************
2026-03-25 05:07:33.756095 | orchestrator | Wednesday 25 March 2026 05:07:12 +0000 (0:00:17.710) 0:05:18.613 *******
2026-03-25 05:07:33.756101 | orchestrator | Pausing for 5 seconds
2026-03-25 05:07:33.756107 | orchestrator | ok: [testbed-node-0]
2026-03-25 05:07:33.756113 | orchestrator |
2026-03-25 05:07:33.756119 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ******************************
2026-03-25 05:07:33.756124 | orchestrator | Wednesday 25 March 2026 05:07:19 +0000 (0:00:06.271) 0:05:24.884 *******
2026-03-25 05:07:33.756130 | orchestrator | ok: [testbed-node-0]
2026-03-25 05:07:33.756136 | orchestrator | ok: [testbed-node-1]
2026-03-25 05:07:33.756141 | orchestrator | ok: [testbed-node-2]
2026-03-25 05:07:33.756147 | orchestrator |
2026-03-25 05:07:33.756153 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] ***************************
2026-03-25 05:07:33.756158 | orchestrator | Wednesday 25 March 2026 05:07:21 +0000 (0:00:01.842) 0:05:26.727 *******
2026-03-25 05:07:33.756164 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:07:33.756170 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:07:33.756176 | orchestrator | changed: [testbed-node-2]
2026-03-25 05:07:33.756181 | orchestrator |
2026-03-25 05:07:33.756200 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ******************************
2026-03-25 05:07:33.756208 | orchestrator | Wednesday 25 March 2026 05:07:23 +0000 (0:00:02.006) 0:05:28.734 *******
2026-03-25 05:07:33.756219 | orchestrator | ok: [testbed-node-0]
2026-03-25 05:07:33.756226 | orchestrator | ok: [testbed-node-1]
2026-03-25 05:07:33.756233 | orchestrator | ok: [testbed-node-2]
2026-03-25 05:07:33.756239 | orchestrator |
2026-03-25 05:07:33.756246 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] ***************************
2026-03-25 05:07:33.756253 | orchestrator | Wednesday 25 March 2026 05:07:24 +0000 (0:00:01.887) 0:05:30.622 *******
2026-03-25 05:07:33.756260 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:07:33.756267 | orchestrator | skipping: [testbed-node-2]
2026-03-25 05:07:33.756274 | orchestrator | changed: [testbed-node-0]
2026-03-25 05:07:33.756281 | orchestrator |
2026-03-25 05:07:33.756287 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] *********************************************
2026-03-25 05:07:33.756293 | orchestrator | Wednesday 25 March 2026 05:07:26 +0000 (0:00:01.685) 0:05:32.307 *******
2026-03-25 05:07:33.756299 | orchestrator | ok: [testbed-node-0]
2026-03-25 05:07:33.756305 | orchestrator | ok: [testbed-node-1]
2026-03-25 05:07:33.756311 | orchestrator | ok: [testbed-node-2]
2026-03-25 05:07:33.756316 | orchestrator |
2026-03-25 05:07:33.756322 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] *********************************************
2026-03-25 05:07:33.756340 | orchestrator | Wednesday 25 March 2026 05:07:28 +0000 (0:00:01.773) 0:05:34.081 *******
2026-03-25 05:07:33.756346 | orchestrator | ok: [testbed-node-0]
2026-03-25 05:07:33.756352 | orchestrator | ok: [testbed-node-1]
2026-03-25 05:07:33.756357 | orchestrator | ok: [testbed-node-2]
2026-03-25 05:07:33.756363 | orchestrator |
2026-03-25 05:07:33.756369 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db-relay] ***************************************
2026-03-25 05:07:33.756375 | orchestrator | Wednesday 25 March 2026 05:07:30 +0000 (0:00:01.937) 0:05:36.018 *******
2026-03-25 05:07:33.756381 | orchestrator | ok: [testbed-node-0] => (item=1)
2026-03-25 05:07:33.756386 | orchestrator | ok: [testbed-node-1] => (item=1)
2026-03-25 05:07:33.756392 | orchestrator | ok: [testbed-node-2] => (item=1)
2026-03-25 05:07:33.756398 | orchestrator |
2026-03-25 05:07:33.756404 | orchestrator | PLAY RECAP *********************************************************************
2026-03-25 05:07:33.756410 | orchestrator | testbed-node-0 : ok=49  changed=16  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-25 05:07:33.756417 | orchestrator | testbed-node-1 : ok=47  changed=15  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2026-03-25 05:07:33.756423 | orchestrator | testbed-node-2 : ok=48  changed=16  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-25 05:07:33.756428 | orchestrator | testbed-node-3 : ok=12  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-25 05:07:33.756434 | orchestrator | testbed-node-4 : ok=12  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-25 05:07:33.756440 | orchestrator | testbed-node-5 : ok=12  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-25 05:07:33.756446 | orchestrator |
2026-03-25 05:07:33.756451 | orchestrator |
2026-03-25 05:07:33.756457 | orchestrator | TASKS RECAP ********************************************************************
2026-03-25 05:07:33.756463 | orchestrator | Wednesday 25 March 2026 05:07:33 +0000 (0:00:03.041) 0:05:39.060 *******
2026-03-25 05:07:33.756469 | orchestrator | ===============================================================================
2026-03-25 05:07:33.756474 | orchestrator | ovn-controller : Restart ovn-controller container --------------------- 131.80s
2026-03-25 05:07:33.756480 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 19.77s
2026-03-25 05:07:33.756486 | orchestrator | ovn-db : Restart ovn-northd container ---------------------------------- 17.71s
2026-03-25 05:07:33.756492 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 16.92s
2026-03-25 05:07:33.756502 | orchestrator | ovn-db : Restart ovn-sb-db container ----------------------------------- 16.90s
2026-03-25 05:07:33.756508 | orchestrator | ovn-db : Restart ovn-sb-db-relay container ----------------------------- 11.90s
2026-03-25 05:07:33.756513 | orchestrator | ovn-db : Wait for leader election --------------------------------------- 6.27s
2026-03-25 05:07:33.756519 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 6.23s
2026-03-25 05:07:33.756525 | orchestrator | service-check-containers : ovn_db | Check containers -------------------- 5.61s
2026-03-25 05:07:33.756530 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 4.39s
2026-03-25 05:07:33.756536 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 3.70s
2026-03-25 05:07:33.756542 | orchestrator | ovn-controller
: include_tasks ------------------------------------------ 3.49s 2026-03-25 05:07:33.756548 | orchestrator | service-check-containers : Include tasks -------------------------------- 3.18s 2026-03-25 05:07:33.756553 | orchestrator | ovn-controller : Flush handlers ----------------------------------------- 3.17s 2026-03-25 05:07:33.756559 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 3.10s 2026-03-25 05:07:33.756565 | orchestrator | Group hosts based on enabled services ----------------------------------- 3.05s 2026-03-25 05:07:33.756571 | orchestrator | ovn-db : Wait for ovn-sb-db-relay --------------------------------------- 3.04s 2026-03-25 05:07:33.756576 | orchestrator | ovn-db : Generate config files for OVN relay services ------------------- 2.86s 2026-03-25 05:07:33.756585 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 2.68s 2026-03-25 05:07:33.756591 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.66s 2026-03-25 05:07:34.068369 | orchestrator | + [[ false == \f\a\l\s\e ]] 2026-03-25 05:07:34.068464 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-03-25 05:07:34.068481 | orchestrator | + sh -c /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh 2026-03-25 05:07:34.075110 | orchestrator | + set -e 2026-03-25 05:07:34.075219 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-03-25 05:07:34.075236 | orchestrator | ++ export INTERACTIVE=false 2026-03-25 05:07:34.075249 | orchestrator | ++ INTERACTIVE=false 2026-03-25 05:07:34.075260 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-25 05:07:34.075270 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-03-25 05:07:34.075282 | orchestrator | + osism apply ceph-rolling_update -e ireallymeanit=yes 2026-03-25 05:07:36.213156 | orchestrator | 2026-03-25 05:07:36 | INFO  | Task 4c60bb8d-9997-422e-9f5a-9ff63645196f (ceph-rolling_update) was 
prepared for execution. 2026-03-25 05:07:36.213256 | orchestrator | 2026-03-25 05:07:36 | INFO  | It takes a moment until task 4c60bb8d-9997-422e-9f5a-9ff63645196f (ceph-rolling_update) has been started and output is visible here. 2026-03-25 05:09:03.066529 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-03-25 05:09:03.066683 | orchestrator | 2.16.14 2026-03-25 05:09:03.066704 | orchestrator | 2026-03-25 05:09:03.066717 | orchestrator | PLAY [Confirm whether user really meant to upgrade the cluster] **************** 2026-03-25 05:09:03.066730 | orchestrator | 2026-03-25 05:09:03.066741 | orchestrator | TASK [Exit playbook, if user did not mean to upgrade cluster] ****************** 2026-03-25 05:09:03.066753 | orchestrator | Wednesday 25 March 2026 05:07:44 +0000 (0:00:01.904) 0:00:01.904 ******* 2026-03-25 05:09:03.066763 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: rbdmirrors 2026-03-25 05:09:03.066775 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: nfss 2026-03-25 05:09:03.066786 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: clients 2026-03-25 05:09:03.066797 | orchestrator | skipping: [localhost] 2026-03-25 05:09:03.066809 | orchestrator | 2026-03-25 05:09:03.066820 | orchestrator | PLAY [Gather facts and check the init system] ********************************** 2026-03-25 05:09:03.066831 | orchestrator | 2026-03-25 05:09:03.066842 | orchestrator | TASK [Gather facts on all Ceph hosts for following reference] ****************** 2026-03-25 05:09:03.066853 | orchestrator | Wednesday 25 March 2026 05:07:46 +0000 (0:00:01.871) 0:00:03.776 ******* 2026-03-25 05:09:03.066888 | orchestrator | ok: [testbed-node-0] => { 2026-03-25 05:09:03.066931 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference" 2026-03-25 05:09:03.066946 | orchestrator | } 2026-03-25 05:09:03.066957 | orchestrator | ok: 
[testbed-node-1] => { 2026-03-25 05:09:03.066968 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference" 2026-03-25 05:09:03.066979 | orchestrator | } 2026-03-25 05:09:03.066990 | orchestrator | ok: [testbed-node-2] => { 2026-03-25 05:09:03.067001 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference" 2026-03-25 05:09:03.067011 | orchestrator | } 2026-03-25 05:09:03.067022 | orchestrator | ok: [testbed-node-3] => { 2026-03-25 05:09:03.067033 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference" 2026-03-25 05:09:03.067046 | orchestrator | } 2026-03-25 05:09:03.067060 | orchestrator | ok: [testbed-node-4] => { 2026-03-25 05:09:03.067072 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference" 2026-03-25 05:09:03.067086 | orchestrator | } 2026-03-25 05:09:03.067099 | orchestrator | ok: [testbed-node-5] => { 2026-03-25 05:09:03.067111 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference" 2026-03-25 05:09:03.067124 | orchestrator | } 2026-03-25 05:09:03.067136 | orchestrator | ok: [testbed-manager] => { 2026-03-25 05:09:03.067149 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference" 2026-03-25 05:09:03.067162 | orchestrator | } 2026-03-25 05:09:03.067175 | orchestrator | 2026-03-25 05:09:03.067189 | orchestrator | TASK [Gather facts] ************************************************************ 2026-03-25 05:09:03.067202 | orchestrator | Wednesday 25 March 2026 05:07:52 +0000 (0:00:05.640) 0:00:09.416 ******* 2026-03-25 05:09:03.067214 | orchestrator | skipping: [testbed-node-0] 2026-03-25 05:09:03.067234 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:09:03.067264 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:09:03.067285 | orchestrator | skipping: [testbed-node-3] 2026-03-25 05:09:03.067305 | orchestrator | skipping: [testbed-node-4] 2026-03-25 05:09:03.067324 | orchestrator | skipping: 
[testbed-node-5] 2026-03-25 05:09:03.067341 | orchestrator | ok: [testbed-manager] 2026-03-25 05:09:03.067361 | orchestrator | 2026-03-25 05:09:03.067380 | orchestrator | TASK [Gather and delegate facts] *********************************************** 2026-03-25 05:09:03.067400 | orchestrator | Wednesday 25 March 2026 05:08:00 +0000 (0:00:08.056) 0:00:17.473 ******* 2026-03-25 05:09:03.067418 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-03-25 05:09:03.067437 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-25 05:09:03.067455 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-25 05:09:03.067473 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-25 05:09:03.067491 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-25 05:09:03.067512 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-25 05:09:03.067533 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-25 05:09:03.067552 | orchestrator | 2026-03-25 05:09:03.067572 | orchestrator | TASK [Set_fact rolling_update] ************************************************* 2026-03-25 05:09:03.067591 | orchestrator | Wednesday 25 March 2026 05:08:31 +0000 (0:00:30.991) 0:00:48.465 ******* 2026-03-25 05:09:03.067611 | orchestrator | ok: [testbed-node-0] 2026-03-25 05:09:03.067626 | orchestrator | ok: [testbed-node-1] 2026-03-25 05:09:03.067637 | orchestrator | ok: [testbed-node-2] 2026-03-25 05:09:03.067647 | orchestrator | ok: [testbed-node-3] 2026-03-25 05:09:03.067658 | orchestrator | ok: [testbed-node-4] 2026-03-25 05:09:03.067669 | orchestrator | ok: [testbed-node-5] 2026-03-25 05:09:03.067680 | orchestrator | ok: [testbed-manager] 2026-03-25 05:09:03.067691 | orchestrator | 2026-03-25 
05:09:03.067702 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-03-25 05:09:03.067726 | orchestrator | Wednesday 25 March 2026 05:08:33 +0000 (0:00:02.139) 0:00:50.605 ******* 2026-03-25 05:09:03.067738 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager 2026-03-25 05:09:03.067750 | orchestrator | 2026-03-25 05:09:03.067761 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-03-25 05:09:03.067772 | orchestrator | Wednesday 25 March 2026 05:08:36 +0000 (0:00:02.767) 0:00:53.373 ******* 2026-03-25 05:09:03.067783 | orchestrator | ok: [testbed-node-1] 2026-03-25 05:09:03.067794 | orchestrator | ok: [testbed-node-0] 2026-03-25 05:09:03.067805 | orchestrator | ok: [testbed-node-2] 2026-03-25 05:09:03.067815 | orchestrator | ok: [testbed-node-3] 2026-03-25 05:09:03.067826 | orchestrator | ok: [testbed-node-4] 2026-03-25 05:09:03.067837 | orchestrator | ok: [testbed-node-5] 2026-03-25 05:09:03.067848 | orchestrator | ok: [testbed-manager] 2026-03-25 05:09:03.067858 | orchestrator | 2026-03-25 05:09:03.067891 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-03-25 05:09:03.067939 | orchestrator | Wednesday 25 March 2026 05:08:38 +0000 (0:00:02.511) 0:00:55.885 ******* 2026-03-25 05:09:03.067951 | orchestrator | ok: [testbed-node-0] 2026-03-25 05:09:03.067961 | orchestrator | ok: [testbed-node-1] 2026-03-25 05:09:03.067972 | orchestrator | ok: [testbed-node-2] 2026-03-25 05:09:03.067982 | orchestrator | ok: [testbed-node-3] 2026-03-25 05:09:03.067993 | orchestrator | ok: [testbed-node-4] 2026-03-25 05:09:03.068004 | orchestrator | ok: [testbed-node-5] 2026-03-25 05:09:03.068014 | orchestrator | ok: [testbed-manager] 2026-03-25 05:09:03.068025 | orchestrator | 2026-03-25 05:09:03.068036 | 
orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-03-25 05:09:03.068047 | orchestrator | Wednesday 25 March 2026 05:08:40 +0000 (0:00:02.022) 0:00:57.907 ******* 2026-03-25 05:09:03.068057 | orchestrator | ok: [testbed-node-0] 2026-03-25 05:09:03.068157 | orchestrator | ok: [testbed-node-1] 2026-03-25 05:09:03.068179 | orchestrator | ok: [testbed-node-2] 2026-03-25 05:09:03.068189 | orchestrator | ok: [testbed-node-3] 2026-03-25 05:09:03.068200 | orchestrator | ok: [testbed-node-4] 2026-03-25 05:09:03.068211 | orchestrator | ok: [testbed-node-5] 2026-03-25 05:09:03.068222 | orchestrator | ok: [testbed-manager] 2026-03-25 05:09:03.068233 | orchestrator | 2026-03-25 05:09:03.068244 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-03-25 05:09:03.068255 | orchestrator | Wednesday 25 March 2026 05:08:43 +0000 (0:00:02.680) 0:01:00.587 ******* 2026-03-25 05:09:03.068266 | orchestrator | ok: [testbed-node-0] 2026-03-25 05:09:03.068277 | orchestrator | ok: [testbed-node-1] 2026-03-25 05:09:03.068288 | orchestrator | ok: [testbed-node-2] 2026-03-25 05:09:03.068298 | orchestrator | ok: [testbed-node-3] 2026-03-25 05:09:03.068309 | orchestrator | ok: [testbed-node-4] 2026-03-25 05:09:03.068320 | orchestrator | ok: [testbed-node-5] 2026-03-25 05:09:03.068330 | orchestrator | ok: [testbed-manager] 2026-03-25 05:09:03.068341 | orchestrator | 2026-03-25 05:09:03.068352 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-03-25 05:09:03.068363 | orchestrator | Wednesday 25 March 2026 05:08:45 +0000 (0:00:02.072) 0:01:02.660 ******* 2026-03-25 05:09:03.068374 | orchestrator | ok: [testbed-node-0] 2026-03-25 05:09:03.068384 | orchestrator | ok: [testbed-node-1] 2026-03-25 05:09:03.068395 | orchestrator | ok: [testbed-node-2] 2026-03-25 05:09:03.068406 | orchestrator | ok: [testbed-node-3] 2026-03-25 05:09:03.068416 | orchestrator 
| ok: [testbed-node-4] 2026-03-25 05:09:03.068427 | orchestrator | ok: [testbed-node-5] 2026-03-25 05:09:03.068438 | orchestrator | ok: [testbed-manager] 2026-03-25 05:09:03.068449 | orchestrator | 2026-03-25 05:09:03.068459 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-03-25 05:09:03.068486 | orchestrator | Wednesday 25 March 2026 05:08:47 +0000 (0:00:02.153) 0:01:04.813 ******* 2026-03-25 05:09:03.068496 | orchestrator | ok: [testbed-node-0] 2026-03-25 05:09:03.068516 | orchestrator | ok: [testbed-node-1] 2026-03-25 05:09:03.068527 | orchestrator | ok: [testbed-node-2] 2026-03-25 05:09:03.068538 | orchestrator | ok: [testbed-node-3] 2026-03-25 05:09:03.068549 | orchestrator | ok: [testbed-node-4] 2026-03-25 05:09:03.068559 | orchestrator | ok: [testbed-node-5] 2026-03-25 05:09:03.068570 | orchestrator | ok: [testbed-manager] 2026-03-25 05:09:03.068581 | orchestrator | 2026-03-25 05:09:03.068592 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-03-25 05:09:03.068604 | orchestrator | Wednesday 25 March 2026 05:08:49 +0000 (0:00:01.942) 0:01:06.755 ******* 2026-03-25 05:09:03.068614 | orchestrator | skipping: [testbed-node-0] 2026-03-25 05:09:03.068625 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:09:03.068636 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:09:03.068647 | orchestrator | skipping: [testbed-node-3] 2026-03-25 05:09:03.068658 | orchestrator | skipping: [testbed-node-4] 2026-03-25 05:09:03.068669 | orchestrator | skipping: [testbed-node-5] 2026-03-25 05:09:03.068679 | orchestrator | skipping: [testbed-manager] 2026-03-25 05:09:03.068690 | orchestrator | 2026-03-25 05:09:03.068701 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-03-25 05:09:03.068712 | orchestrator | Wednesday 25 March 2026 05:08:51 +0000 (0:00:02.228) 0:01:08.984 ******* 2026-03-25 05:09:03.068723 | 
orchestrator | ok: [testbed-node-0] 2026-03-25 05:09:03.068733 | orchestrator | ok: [testbed-node-1] 2026-03-25 05:09:03.068744 | orchestrator | ok: [testbed-node-2] 2026-03-25 05:09:03.068755 | orchestrator | ok: [testbed-node-3] 2026-03-25 05:09:03.068771 | orchestrator | ok: [testbed-node-4] 2026-03-25 05:09:03.068790 | orchestrator | ok: [testbed-node-5] 2026-03-25 05:09:03.068808 | orchestrator | ok: [testbed-manager] 2026-03-25 05:09:03.068826 | orchestrator | 2026-03-25 05:09:03.068844 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-03-25 05:09:03.068863 | orchestrator | Wednesday 25 March 2026 05:08:54 +0000 (0:00:02.174) 0:01:11.159 ******* 2026-03-25 05:09:03.068880 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-25 05:09:03.068899 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-25 05:09:03.068997 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-25 05:09:03.069017 | orchestrator | 2026-03-25 05:09:03.069030 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-03-25 05:09:03.069048 | orchestrator | Wednesday 25 March 2026 05:08:55 +0000 (0:00:01.741) 0:01:12.900 ******* 2026-03-25 05:09:03.069059 | orchestrator | ok: [testbed-node-0] 2026-03-25 05:09:03.069070 | orchestrator | ok: [testbed-node-1] 2026-03-25 05:09:03.069081 | orchestrator | ok: [testbed-node-2] 2026-03-25 05:09:03.069092 | orchestrator | ok: [testbed-node-3] 2026-03-25 05:09:03.069103 | orchestrator | ok: [testbed-node-4] 2026-03-25 05:09:03.069113 | orchestrator | ok: [testbed-manager] 2026-03-25 05:09:03.069124 | orchestrator | ok: [testbed-node-5] 2026-03-25 05:09:03.069135 | orchestrator | 2026-03-25 05:09:03.069145 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-03-25 05:09:03.069156 | 
orchestrator | Wednesday 25 March 2026 05:08:58 +0000 (0:00:02.459) 0:01:15.360 ******* 2026-03-25 05:09:03.069167 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-25 05:09:03.069178 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-25 05:09:03.069189 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-25 05:09:03.069199 | orchestrator | 2026-03-25 05:09:03.069210 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-03-25 05:09:03.069221 | orchestrator | Wednesday 25 March 2026 05:09:01 +0000 (0:00:03.316) 0:01:18.677 ******* 2026-03-25 05:09:03.069244 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-25 05:09:25.478276 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-25 05:09:25.478389 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-25 05:09:25.478431 | orchestrator | skipping: [testbed-node-0] 2026-03-25 05:09:25.478444 | orchestrator | 2026-03-25 05:09:25.478456 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-03-25 05:09:25.478468 | orchestrator | Wednesday 25 March 2026 05:09:03 +0000 (0:00:01.393) 0:01:20.071 ******* 2026-03-25 05:09:25.478497 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-03-25 05:09:25.478512 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-03-25 05:09:25.478524 | orchestrator | skipping: [testbed-node-0] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-03-25 05:09:25.478535 | orchestrator | skipping: [testbed-node-0] 2026-03-25 05:09:25.478546 | orchestrator | 2026-03-25 05:09:25.478557 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-03-25 05:09:25.478568 | orchestrator | Wednesday 25 March 2026 05:09:04 +0000 (0:00:01.885) 0:01:21.956 ******* 2026-03-25 05:09:25.478581 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-25 05:09:25.478597 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-25 05:09:25.478608 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-25 05:09:25.478619 | orchestrator | skipping: [testbed-node-0] 
2026-03-25 05:09:25.478630 | orchestrator |
2026-03-25 05:09:25.478641 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-03-25 05:09:25.478652 | orchestrator | Wednesday 25 March 2026 05:09:06 +0000 (0:00:01.200) 0:01:23.157 *******
2026-03-25 05:09:25.478680 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '928ffe0e6efa', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-25 05:08:59.014243', 'end': '2026-03-25 05:08:59.052527', 'delta': '0:00:00.038284', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['928ffe0e6efa'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-03-25 05:09:25.478715 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': 'cb4e3d9a68a8', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-25 05:08:59.891208', 'end': '2026-03-25 05:08:59.937053', 'delta': '0:00:00.045845', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['cb4e3d9a68a8'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-03-25 05:09:25.478737 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '90e526f29e10', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-25 05:09:00.435102', 'end': '2026-03-25 05:09:00.485439', 'delta': '0:00:00.050337', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['90e526f29e10'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-03-25 05:09:25.478749 | orchestrator |
2026-03-25 05:09:25.478760 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-03-25 05:09:25.478771 | orchestrator | Wednesday 25 March 2026 05:09:07 +0000 (0:00:01.229) 0:01:24.386 *******
2026-03-25 05:09:25.478782 | orchestrator | ok: [testbed-node-0]
2026-03-25 05:09:25.478794 | orchestrator | ok: [testbed-node-1]
2026-03-25 05:09:25.478804 | orchestrator | ok: [testbed-node-2]
2026-03-25 05:09:25.478817 | orchestrator | ok: [testbed-node-3]
2026-03-25 05:09:25.478830 | orchestrator | ok: [testbed-node-4]
2026-03-25 05:09:25.478842 | orchestrator | ok: [testbed-node-5]
2026-03-25 05:09:25.478855 | orchestrator | ok: [testbed-manager]
2026-03-25 05:09:25.478867 | orchestrator |
2026-03-25 05:09:25.478879 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-03-25 05:09:25.478915 | orchestrator | Wednesday 25 March 2026 05:09:09 +0000 (0:00:02.116) 0:01:26.503 *******
2026-03-25 05:09:25.478928 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:09:25.478940 | orchestrator |
2026-03-25 05:09:25.478953 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-03-25 05:09:25.478965 | orchestrator | Wednesday 25 March 2026 05:09:10 +0000 (0:00:01.245) 0:01:27.749 *******
2026-03-25 05:09:25.478978 | orchestrator | ok: [testbed-node-0]
2026-03-25 05:09:25.478990 | orchestrator | ok: [testbed-node-1]
2026-03-25 05:09:25.479003 | orchestrator | ok: [testbed-node-2]
2026-03-25 05:09:25.479015 | orchestrator | ok: [testbed-node-3]
2026-03-25 05:09:25.479027 | orchestrator | ok: [testbed-node-4]
2026-03-25 05:09:25.479039 | orchestrator | ok: [testbed-node-5]
2026-03-25 05:09:25.479052 | orchestrator | ok: [testbed-manager]
2026-03-25 05:09:25.479064 | orchestrator |
2026-03-25 05:09:25.479076 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-03-25 05:09:25.479088 | orchestrator | Wednesday 25 March 2026 05:09:12 +0000 (0:00:02.160) 0:01:29.910 *******
2026-03-25 05:09:25.479100 | orchestrator | ok: [testbed-node-0]
2026-03-25 05:09:25.479113 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)]
2026-03-25 05:09:25.479126 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-03-25 05:09:25.479139 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-03-25 05:09:25.479152 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)]
2026-03-25 05:09:25.479164 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-03-25 05:09:25.479175 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2026-03-25 05:09:25.479186 | orchestrator |
2026-03-25 05:09:25.479196 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-03-25 05:09:25.479214 | orchestrator | Wednesday 25 March 2026 05:09:16 +0000 (0:00:03.419) 0:01:33.329 *******
2026-03-25 05:09:25.479226 | orchestrator | ok: [testbed-node-0]
2026-03-25 05:09:25.479236 | orchestrator | ok: [testbed-node-1]
2026-03-25 05:09:25.479247 | orchestrator | ok: [testbed-node-2]
2026-03-25 05:09:25.479258 | orchestrator | ok: [testbed-node-3]
2026-03-25 05:09:25.479269 | orchestrator | ok: [testbed-node-4]
2026-03-25 05:09:25.479279 | orchestrator | ok: [testbed-node-5]
2026-03-25 05:09:25.479290 | orchestrator | ok: [testbed-manager]
2026-03-25 05:09:25.479301 | orchestrator |
2026-03-25 05:09:25.479312 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-03-25 05:09:25.479323 | orchestrator | Wednesday 25 March 2026 05:09:18 +0000 (0:00:02.142) 0:01:35.472 *******
2026-03-25 05:09:25.479333 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:09:25.479344 | orchestrator |
2026-03-25 05:09:25.479355 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-03-25 05:09:25.479371 | orchestrator | Wednesday 25 March 2026 05:09:19 +0000 (0:00:01.133) 0:01:36.605 *******
2026-03-25 05:09:25.479383 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:09:25.479394 | orchestrator |
2026-03-25 05:09:25.479404 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-03-25 05:09:25.479415 | orchestrator | Wednesday 25 March 2026 05:09:20 +0000 (0:00:01.277) 0:01:37.883 *******
2026-03-25 05:09:25.479426 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:09:25.479437 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:09:25.479447 | orchestrator | skipping: [testbed-node-2]
2026-03-25 05:09:25.479458 | orchestrator | skipping: [testbed-node-3]
2026-03-25 05:09:25.479469 | orchestrator | skipping: [testbed-node-4]
2026-03-25 05:09:25.479479 | orchestrator | skipping: [testbed-node-5]
2026-03-25 05:09:25.479490 | orchestrator | skipping: [testbed-manager]
2026-03-25 05:09:25.479501 | orchestrator |
2026-03-25 05:09:25.479511 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-03-25 05:09:25.479522 | orchestrator | Wednesday 25 March 2026 05:09:23 +0000 (0:00:02.485) 0:01:40.369 *******
2026-03-25 05:09:25.479533 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:09:25.479544 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:09:25.479554 | orchestrator | skipping: [testbed-node-2]
2026-03-25 05:09:25.479565 | orchestrator | skipping: [testbed-node-3]
2026-03-25 05:09:25.479576 | orchestrator | skipping: [testbed-node-4]
2026-03-25 05:09:25.479587 | orchestrator | skipping: [testbed-node-5]
2026-03-25 05:09:25.479604 | orchestrator | skipping: [testbed-manager]
2026-03-25 05:09:36.174234 | orchestrator |
2026-03-25 05:09:36.174308 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-03-25 05:09:36.174315 | orchestrator | Wednesday 25 March 2026 05:09:25 +0000 (0:00:02.111) 0:01:42.481 *******
2026-03-25 05:09:36.174320 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:09:36.174325 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:09:36.174329 | orchestrator | skipping: [testbed-node-2]
2026-03-25 05:09:36.174333 | orchestrator | skipping: [testbed-node-3]
2026-03-25 05:09:36.174337 | orchestrator | skipping: [testbed-node-4]
2026-03-25 05:09:36.174341 | orchestrator | skipping: [testbed-node-5]
2026-03-25 05:09:36.174345 | orchestrator | skipping: [testbed-manager]
2026-03-25 05:09:36.174349 | orchestrator |
2026-03-25 05:09:36.174353 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-03-25 05:09:36.174357 | orchestrator | Wednesday 25 March 2026 05:09:27 +0000 (0:00:02.121) 0:01:44.603 *******
2026-03-25 05:09:36.174360 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:09:36.174364 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:09:36.174368 | orchestrator | skipping: [testbed-node-2]
2026-03-25 05:09:36.174372 | orchestrator | skipping: [testbed-node-3]
2026-03-25 05:09:36.174376 | orchestrator | skipping: [testbed-node-4]
2026-03-25 05:09:36.174379 | orchestrator | skipping: [testbed-node-5]
2026-03-25 05:09:36.174383 | orchestrator | skipping: [testbed-manager]
2026-03-25 05:09:36.174387 | orchestrator |
2026-03-25 05:09:36.174404 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-03-25 05:09:36.174408 | orchestrator | Wednesday 25 March 2026 05:09:29 +0000 (0:00:01.938) 0:01:46.541 *******
2026-03-25 05:09:36.174411 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:09:36.174415 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:09:36.174419 | orchestrator | skipping: [testbed-node-2]
2026-03-25 05:09:36.174422 | orchestrator | skipping: [testbed-node-3]
2026-03-25 05:09:36.174426 | orchestrator | skipping: [testbed-node-4]
2026-03-25 05:09:36.174430 | orchestrator | skipping: [testbed-node-5]
2026-03-25 05:09:36.174434 | orchestrator | skipping: [testbed-manager]
2026-03-25 05:09:36.174437 | orchestrator |
2026-03-25 05:09:36.174441 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-03-25 05:09:36.174445 | orchestrator | Wednesday 25 March 2026 05:09:31 +0000 (0:00:02.153) 0:01:48.695 *******
2026-03-25 05:09:36.174449 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:09:36.174452 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:09:36.174456 | orchestrator | skipping: [testbed-node-2]
2026-03-25 05:09:36.174460 | orchestrator | skipping: [testbed-node-3]
2026-03-25 05:09:36.174463 | orchestrator | skipping: [testbed-node-4]
2026-03-25 05:09:36.174467 | orchestrator | skipping: [testbed-node-5]
2026-03-25 05:09:36.174471 | orchestrator | skipping: [testbed-manager]
2026-03-25 05:09:36.174474 | orchestrator |
2026-03-25 05:09:36.174478 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-03-25 05:09:36.174483 | orchestrator | Wednesday 25 March 2026 05:09:33 +0000 (0:00:01.896) 0:01:50.592 *******
2026-03-25 05:09:36.174486 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:09:36.174490 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:09:36.174494 | orchestrator | skipping: [testbed-node-2]
2026-03-25 05:09:36.174497 | orchestrator | skipping: [testbed-node-3]
2026-03-25 05:09:36.174501 | orchestrator | skipping: [testbed-node-4]
2026-03-25 05:09:36.174505 | orchestrator | skipping: [testbed-node-5]
2026-03-25 05:09:36.174509 | orchestrator | skipping: [testbed-manager]
2026-03-25 05:09:36.174513 | orchestrator |
2026-03-25 05:09:36.174516 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-03-25 05:09:36.174520 | orchestrator | Wednesday 25 March 2026 05:09:35 +0000 (0:00:02.274) 0:01:52.866 *******
2026-03-25 05:09:36.174526 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-25 05:09:36.174532 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-25 05:09:36.174546 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-25 05:09:36.174562 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-25-01-43-00-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})
2026-03-25 05:09:36.174574 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-25 05:09:36.174578 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-25 05:09:36.174582 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize':
'512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-25 05:09:36.174591 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_225bc811-b117-4ab1-9890-e393d3b780be', 'scsi-SQEMU_QEMU_HARDDISK_225bc811-b117-4ab1-9890-e393d3b780be'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '225bc811', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_225bc811-b117-4ab1-9890-e393d3b780be-part16', 'scsi-SQEMU_QEMU_HARDDISK_225bc811-b117-4ab1-9890-e393d3b780be-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_225bc811-b117-4ab1-9890-e393d3b780be-part14', 'scsi-SQEMU_QEMU_HARDDISK_225bc811-b117-4ab1-9890-e393d3b780be-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_225bc811-b117-4ab1-9890-e393d3b780be-part15', 'scsi-SQEMU_QEMU_HARDDISK_225bc811-b117-4ab1-9890-e393d3b780be-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_225bc811-b117-4ab1-9890-e393d3b780be-part1', 'scsi-SQEMU_QEMU_HARDDISK_225bc811-b117-4ab1-9890-e393d3b780be-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 
'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-03-25 05:09:36.174596 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-25 05:09:36.174608 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-25 05:09:36.346742 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-25 05:09:36.346875 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 
'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-25 05:09:36.346987 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-25 05:09:36.347005 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-25-01-43-05-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-03-25 05:09:36.347020 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-25 05:09:36.347032 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 
'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-25 05:09:36.347065 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-25 05:09:36.347107 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2a85f599-c628-4cff-bf05-087f83983aef', 'scsi-SQEMU_QEMU_HARDDISK_2a85f599-c628-4cff-bf05-087f83983aef'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '2a85f599', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2a85f599-c628-4cff-bf05-087f83983aef-part16', 'scsi-SQEMU_QEMU_HARDDISK_2a85f599-c628-4cff-bf05-087f83983aef-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2a85f599-c628-4cff-bf05-087f83983aef-part14', 'scsi-SQEMU_QEMU_HARDDISK_2a85f599-c628-4cff-bf05-087f83983aef-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2a85f599-c628-4cff-bf05-087f83983aef-part15', 'scsi-SQEMU_QEMU_HARDDISK_2a85f599-c628-4cff-bf05-087f83983aef-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 
'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2a85f599-c628-4cff-bf05-087f83983aef-part1', 'scsi-SQEMU_QEMU_HARDDISK_2a85f599-c628-4cff-bf05-087f83983aef-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-03-25 05:09:36.347151 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-25 05:09:36.347163 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-25 05:09:36.347175 | orchestrator | skipping: [testbed-node-0] 2026-03-25 05:09:36.347189 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 
'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-25 05:09:36.347200 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-25 05:09:36.347218 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-25 05:09:36.347238 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-25-01-43-02-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-03-25 05:09:36.347261 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 
'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-25 05:09:36.522383 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-25 05:09:36.522532 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-25 05:09:36.522588 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_46c5fc1c-53d3-41e9-9a97-2ae0d3d9eeb2', 'scsi-SQEMU_QEMU_HARDDISK_46c5fc1c-53d3-41e9-9a97-2ae0d3d9eeb2'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '46c5fc1c', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_46c5fc1c-53d3-41e9-9a97-2ae0d3d9eeb2-part16', 'scsi-SQEMU_QEMU_HARDDISK_46c5fc1c-53d3-41e9-9a97-2ae0d3d9eeb2-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_46c5fc1c-53d3-41e9-9a97-2ae0d3d9eeb2-part14', 'scsi-SQEMU_QEMU_HARDDISK_46c5fc1c-53d3-41e9-9a97-2ae0d3d9eeb2-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_46c5fc1c-53d3-41e9-9a97-2ae0d3d9eeb2-part15', 'scsi-SQEMU_QEMU_HARDDISK_46c5fc1c-53d3-41e9-9a97-2ae0d3d9eeb2-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_46c5fc1c-53d3-41e9-9a97-2ae0d3d9eeb2-part1', 'scsi-SQEMU_QEMU_HARDDISK_46c5fc1c-53d3-41e9-9a97-2ae0d3d9eeb2-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-03-25 05:09:36.522648 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-25 05:09:36.522667 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-25 05:09:36.522684 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:09:36.522729 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-25 05:09:36.522749 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--2eb637af--fcba--56ed--b416--856a8f376a6e-osd--block--2eb637af--fcba--56ed--b416--856a8f376a6e', 'dm-uuid-LVM-I4brnFGe2wqMxfNLTgnFWAlpGdDDIQ6ufudluz5gbOp2W0Ru1BAN3Lof8sluy2g8'], 'uuids': ['a582f89c-a8ac-4a87-8a0b-f7c0ca2abef4'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'eaa5e6a9', 
'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['fudluz-5gbO-p2W0-Ru1B-AN3L-of8s-luy2g8']}})  2026-03-25 05:09:36.522769 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_99e65ea9-8a8c-4114-a95e-6d6b779e8981', 'scsi-SQEMU_QEMU_HARDDISK_99e65ea9-8a8c-4114-a95e-6d6b779e8981'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '99e65ea9', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-03-25 05:09:36.522788 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-I510NI-gVOy-fVrn-Rpok-wKnF-L9wv-pxblpK', 'scsi-0QEMU_QEMU_HARDDISK_e0cf0e31-edea-4833-ac86-8b3021cd24a1', 'scsi-SQEMU_QEMU_HARDDISK_e0cf0e31-edea-4833-ac86-8b3021cd24a1'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'e0cf0e31', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--a7f517e2--016b--5c10--ac21--20c48339115f-osd--block--a7f517e2--016b--5c10--ac21--20c48339115f']}})  2026-03-25 05:09:36.522814 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-25 05:09:36.522844 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-25 05:09:36.522863 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-25-01-42-59-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-03-25 05:09:36.522915 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 
'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-25 05:09:36.863467 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:09:36.863603 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-63eFyg-4nkE-r3IX-y7pO-0UwA-AWeQ-8GeZyo', 'dm-uuid-CRYPT-LUKS2-10d41a0c964d43008e142cbf5f4d58c4-63eFyg-4nkE-r3IX-y7pO-0UwA-AWeQ-8GeZyo'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-03-25 05:09:36.863623 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-25 05:09:36.863640 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--a7f517e2--016b--5c10--ac21--20c48339115f-osd--block--a7f517e2--016b--5c10--ac21--20c48339115f', 'dm-uuid-LVM-ppL9nqq4Eft0DXjzsCdcW3axPqGhidIo63eFyg4nkEr3IXy7pO0UwAAWeQ8GeZyo'], 'uuids': ['10d41a0c-964d-4300-8e14-2cbf5f4d58c4'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'e0cf0e31', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': 
['63eFyg-4nkE-r3IX-y7pO-0UwA-AWeQ-8GeZyo']}})  2026-03-25 05:09:36.863654 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-ot6f5w-cwBB-rMe8-ml4g-P1Wb-D3d5-I1RZ9d', 'scsi-0QEMU_QEMU_HARDDISK_eaa5e6a9-2c24-4b33-854e-103871b2e9c6', 'scsi-SQEMU_QEMU_HARDDISK_eaa5e6a9-2c24-4b33-854e-103871b2e9c6'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'eaa5e6a9', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--2eb637af--fcba--56ed--b416--856a8f376a6e-osd--block--2eb637af--fcba--56ed--b416--856a8f376a6e']}})  2026-03-25 05:09:36.863715 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-25 05:09:36.863786 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5418d243-c22a-425d-8a7d-7c43bd549130', 'scsi-SQEMU_QEMU_HARDDISK_5418d243-c22a-425d-8a7d-7c43bd549130'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '5418d243', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5418d243-c22a-425d-8a7d-7c43bd549130-part16', 
'scsi-SQEMU_QEMU_HARDDISK_5418d243-c22a-425d-8a7d-7c43bd549130-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5418d243-c22a-425d-8a7d-7c43bd549130-part14', 'scsi-SQEMU_QEMU_HARDDISK_5418d243-c22a-425d-8a7d-7c43bd549130-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5418d243-c22a-425d-8a7d-7c43bd549130-part15', 'scsi-SQEMU_QEMU_HARDDISK_5418d243-c22a-425d-8a7d-7c43bd549130-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5418d243-c22a-425d-8a7d-7c43bd549130-part1', 'scsi-SQEMU_QEMU_HARDDISK_5418d243-c22a-425d-8a7d-7c43bd549130-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-03-25 05:09:36.863803 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-25 05:09:36.863815 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-25 05:09:36.863827 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-25 05:09:36.863854 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--fa1f2bca--96f4--5f59--9dac--c3efdd146138-osd--block--fa1f2bca--96f4--5f59--9dac--c3efdd146138', 'dm-uuid-LVM-qi80GQE6Tcg1H1Qaou1HQKIw0Y18K2MMiRtObCOmMljlX3NyraHv57elKkc4U5Oq'], 'uuids': ['1a1bfadf-e219-47e2-8705-0963963507ec'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '37f05188', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['iRtObC-OmMl-jlX3-Nyra-Hv57-elKk-c4U5Oq']}})  2026-03-25 05:09:36.863866 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3e1f7d9f-c106-4693-b0da-d762a5de4a11', 'scsi-SQEMU_QEMU_HARDDISK_3e1f7d9f-c106-4693-b0da-d762a5de4a11'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '3e1f7d9f', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-03-25 05:09:36.863928 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-CIqKvA-lt1d-4qQz-KNts-krwk-yQ0u-1PHslV', 'scsi-0QEMU_QEMU_HARDDISK_10d736b4-dcf8-42aa-aae6-a1381d72468f', 'scsi-SQEMU_QEMU_HARDDISK_10d736b4-dcf8-42aa-aae6-a1381d72468f'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '10d736b4', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--82366886--ea97--5dba--b5cd--187414e0593f-osd--block--82366886--ea97--5dba--b5cd--187414e0593f']}})  2026-03-25 05:09:36.922523 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-fudluz-5gbO-p2W0-Ru1B-AN3L-of8s-luy2g8', 'dm-uuid-CRYPT-LUKS2-a582f89ca8ac4a878a0bf7c0ca2abef4-fudluz-5gbO-p2W0-Ru1B-AN3L-of8s-luy2g8'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-03-25 05:09:36.922652 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-25 05:09:36.922671 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-25 05:09:36.922684 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-25-01-43-06-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': 
'1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-03-25 05:09:36.922744 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-25 05:09:36.922757 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-X0sqLU-d6id-Xl2r-npkf-AOrM-ye3X-xtdnqp', 'dm-uuid-CRYPT-LUKS2-d0a28742b6dc46aab152442a6244f51b-X0sqLU-d6id-Xl2r-npkf-AOrM-ye3X-xtdnqp'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-03-25 05:09:36.922769 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-25 05:09:36.922802 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': 
['dm-name-ceph--82366886--ea97--5dba--b5cd--187414e0593f-osd--block--82366886--ea97--5dba--b5cd--187414e0593f', 'dm-uuid-LVM-1B6VDGPSmmjj7HLdTGtTln0UtIEd11ZxX0sqLUd6idXl2rnpkfAOrMye3Xxtdnqp'], 'uuids': ['d0a28742-b6dc-46aa-b152-442a6244f51b'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '10d736b4', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['X0sqLU-d6id-Xl2r-npkf-AOrM-ye3X-xtdnqp']}})  2026-03-25 05:09:36.922816 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-d5kG3K-9osj-2aIh-xjKb-72Hm-d5Wn-f2zH7s', 'scsi-0QEMU_QEMU_HARDDISK_37f05188-2a00-44e2-a0b8-7549f9da5347', 'scsi-SQEMU_QEMU_HARDDISK_37f05188-2a00-44e2-a0b8-7549f9da5347'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '37f05188', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--fa1f2bca--96f4--5f59--9dac--c3efdd146138-osd--block--fa1f2bca--96f4--5f59--9dac--c3efdd146138']}})  2026-03-25 05:09:36.922828 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-25 05:09:36.922859 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6cb51c54-ae34-41ee-aa7a-55f1cdeeb529', 'scsi-SQEMU_QEMU_HARDDISK_6cb51c54-ae34-41ee-aa7a-55f1cdeeb529'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '6cb51c54', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6cb51c54-ae34-41ee-aa7a-55f1cdeeb529-part16', 'scsi-SQEMU_QEMU_HARDDISK_6cb51c54-ae34-41ee-aa7a-55f1cdeeb529-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6cb51c54-ae34-41ee-aa7a-55f1cdeeb529-part14', 'scsi-SQEMU_QEMU_HARDDISK_6cb51c54-ae34-41ee-aa7a-55f1cdeeb529-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6cb51c54-ae34-41ee-aa7a-55f1cdeeb529-part15', 'scsi-SQEMU_QEMU_HARDDISK_6cb51c54-ae34-41ee-aa7a-55f1cdeeb529-part15'], 'uuids': ['5C78-612A'], 
'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6cb51c54-ae34-41ee-aa7a-55f1cdeeb529-part1', 'scsi-SQEMU_QEMU_HARDDISK_6cb51c54-ae34-41ee-aa7a-55f1cdeeb529-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-03-25 05:09:36.922874 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-25 05:09:36.922925 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-25 05:09:37.041008 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 
'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-25 05:09:37.041135 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--8ec576d5--4336--523a--896e--5358117b2269-osd--block--8ec576d5--4336--523a--896e--5358117b2269', 'dm-uuid-LVM-AjTepPC9YBwKeu38Jf1R7NGMBGxHD64b1bYlOV1jbrUHbIYS3hAMWkKb5QrnOpnI'], 'uuids': ['e67f6cc7-d6f8-4138-9e65-f811c858cad0'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'fd5367dc', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['1bYlOV-1jbr-UHbI-YS3h-AMWk-Kb5Q-rnOpnI']}})  2026-03-25 05:09:37.041181 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-iRtObC-OmMl-jlX3-Nyra-Hv57-elKk-c4U5Oq', 'dm-uuid-CRYPT-LUKS2-1a1bfadfe21947e287050963963507ec-iRtObC-OmMl-jlX3-Nyra-Hv57-elKk-c4U5Oq'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-03-25 05:09:37.041211 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_82545a3e-e213-461e-98f1-90cf18f03519', 'scsi-SQEMU_QEMU_HARDDISK_82545a3e-e213-461e-98f1-90cf18f03519'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '82545a3e', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': 
'1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-03-25 05:09:37.041223 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-to62r3-CyRH-TR4y-N8rR-DKBC-8SUV-NrvEkE', 'scsi-0QEMU_QEMU_HARDDISK_04cbe055-706b-4644-9107-d77d79be5a29', 'scsi-SQEMU_QEMU_HARDDISK_04cbe055-706b-4644-9107-d77d79be5a29'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '04cbe055', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--f303e98e--56ea--50bc--9e1c--3ccda4672060-osd--block--f303e98e--56ea--50bc--9e1c--3ccda4672060']}})  2026-03-25 05:09:37.041232 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-25 05:09:37.041242 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-25 05:09:37.041271 | orchestrator | skipping: [testbed-node-5] 
=> (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-25-01-43-03-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-03-25 05:09:37.041283 | orchestrator | skipping: [testbed-node-3] 2026-03-25 05:09:37.041292 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-25 05:09:37.041318 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-UiFeyH-JNag-Huqx-rmYC-APg3-v2oc-gFP63X', 'dm-uuid-CRYPT-LUKS2-306c9f3fcb174ac6ad8e271da2bf30e2-UiFeyH-JNag-Huqx-rmYC-APg3-v2oc-gFP63X'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-03-25 05:09:37.041327 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 
'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-25 05:09:37.041336 | orchestrator | skipping: [testbed-node-4] 2026-03-25 05:09:37.041350 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--f303e98e--56ea--50bc--9e1c--3ccda4672060-osd--block--f303e98e--56ea--50bc--9e1c--3ccda4672060', 'dm-uuid-LVM-UU9fet4LjPs1QLROYR3DS61lWfbcudTJUiFeyHJNagHuqxrmYCAPg3v2ocgFP63X'], 'uuids': ['306c9f3f-cb17-4ac6-ad8e-271da2bf30e2'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '04cbe055', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['UiFeyH-JNag-Huqx-rmYC-APg3-v2oc-gFP63X']}})  2026-03-25 05:09:37.041361 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-25 05:09:37.041370 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-25 05:09:37.041380 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop4', 'value': {'virtual': 
1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-25 05:09:37.041396 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-FUT1Bq-riIG-e3wV-m2Zc-DHH8-HB53-ximoP3', 'scsi-0QEMU_QEMU_HARDDISK_fd5367dc-993e-4d7d-b2a6-757e2a17e9b7', 'scsi-SQEMU_QEMU_HARDDISK_fd5367dc-993e-4d7d-b2a6-757e2a17e9b7'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'fd5367dc', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--8ec576d5--4336--523a--896e--5358117b2269-osd--block--8ec576d5--4336--523a--896e--5358117b2269']}})  2026-03-25 05:09:38.337950 | orchestrator | skipping: [testbed-manager] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-25-01-43-28-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1060', 'sectorsize': '2048', 'size': '530.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-03-25 05:09:38.338155 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-25 05:09:38.338200 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-25 05:09:38.338213 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': 
'1', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-25 05:09:38.338229 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0ceb4511-88da-4cb0-8dd1-61d4a7cc2ad2', 'scsi-SQEMU_QEMU_HARDDISK_0ceb4511-88da-4cb0-8dd1-61d4a7cc2ad2'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '0ceb4511', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0ceb4511-88da-4cb0-8dd1-61d4a7cc2ad2-part16', 'scsi-SQEMU_QEMU_HARDDISK_0ceb4511-88da-4cb0-8dd1-61d4a7cc2ad2-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0ceb4511-88da-4cb0-8dd1-61d4a7cc2ad2-part14', 'scsi-SQEMU_QEMU_HARDDISK_0ceb4511-88da-4cb0-8dd1-61d4a7cc2ad2-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0ceb4511-88da-4cb0-8dd1-61d4a7cc2ad2-part15', 'scsi-SQEMU_QEMU_HARDDISK_0ceb4511-88da-4cb0-8dd1-61d4a7cc2ad2-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0ceb4511-88da-4cb0-8dd1-61d4a7cc2ad2-part1', 'scsi-SQEMU_QEMU_HARDDISK_0ceb4511-88da-4cb0-8dd1-61d4a7cc2ad2-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 
'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-03-25 05:09:38.338297 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-25 05:09:38.338311 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-25 05:09:38.338322 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-25 05:09:38.338339 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-1bYlOV-1jbr-UHbI-YS3h-AMWk-Kb5Q-rnOpnI', 'dm-uuid-CRYPT-LUKS2-e67f6cc7d6f841389e65f811c858cad0-1bYlOV-1jbr-UHbI-YS3h-AMWk-Kb5Q-rnOpnI'], 'uuids': [], 'labels': [], 'masters': []}, 
'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-03-25 05:09:38.338352 | orchestrator | skipping: [testbed-manager] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_64e9f395-f2d8-41f9-9a3f-57dc675ebeec', 'scsi-SQEMU_QEMU_HARDDISK_64e9f395-f2d8-41f9-9a3f-57dc675ebeec'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '64e9f395', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_64e9f395-f2d8-41f9-9a3f-57dc675ebeec-part16', 'scsi-SQEMU_QEMU_HARDDISK_64e9f395-f2d8-41f9-9a3f-57dc675ebeec-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_64e9f395-f2d8-41f9-9a3f-57dc675ebeec-part14', 'scsi-SQEMU_QEMU_HARDDISK_64e9f395-f2d8-41f9-9a3f-57dc675ebeec-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_64e9f395-f2d8-41f9-9a3f-57dc675ebeec-part15', 'scsi-SQEMU_QEMU_HARDDISK_64e9f395-f2d8-41f9-9a3f-57dc675ebeec-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_64e9f395-f2d8-41f9-9a3f-57dc675ebeec-part1', 
'scsi-SQEMU_QEMU_HARDDISK_64e9f395-f2d8-41f9-9a3f-57dc675ebeec-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-03-25 05:09:38.338373 | orchestrator | skipping: [testbed-node-5] 2026-03-25 05:09:38.338395 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-25 05:09:38.615007 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-25 05:09:38.615117 | orchestrator | skipping: [testbed-manager] 2026-03-25 05:09:38.615128 | orchestrator | 2026-03-25 05:09:38.615136 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-03-25 05:09:38.615143 | orchestrator | Wednesday 25 March 2026 05:09:38 +0000 (0:00:02.472) 0:01:55.338 ******* 2026-03-25 05:09:38.615172 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 05:09:38.615183 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 05:09:38.615190 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 05:09:38.615198 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-25-01-43-00-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-25 05:09:38.615224 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-25 05:09:38.615249 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-25 05:09:38.615255 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-25 05:09:38.615269 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_225bc811-b117-4ab1-9890-e393d3b780be', 'scsi-SQEMU_QEMU_HARDDISK_225bc811-b117-4ab1-9890-e393d3b780be'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '225bc811', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_225bc811-b117-4ab1-9890-e393d3b780be-part16', 'scsi-SQEMU_QEMU_HARDDISK_225bc811-b117-4ab1-9890-e393d3b780be-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_225bc811-b117-4ab1-9890-e393d3b780be-part14', 'scsi-SQEMU_QEMU_HARDDISK_225bc811-b117-4ab1-9890-e393d3b780be-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_225bc811-b117-4ab1-9890-e393d3b780be-part15', 'scsi-SQEMU_QEMU_HARDDISK_225bc811-b117-4ab1-9890-e393d3b780be-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_225bc811-b117-4ab1-9890-e393d3b780be-part1', 'scsi-SQEMU_QEMU_HARDDISK_225bc811-b117-4ab1-9890-e393d3b780be-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-25 05:09:38.615287 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-25 05:09:38.615305 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-25 05:09:38.836378 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-25 05:09:38.836576 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-25 05:09:38.836604 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-25 05:09:38.836618 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-25-01-43-05-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-25 05:09:38.836673 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-25 05:09:38.836686 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-25 05:09:38.836722 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-25 05:09:38.836747 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2a85f599-c628-4cff-bf05-087f83983aef', 'scsi-SQEMU_QEMU_HARDDISK_2a85f599-c628-4cff-bf05-087f83983aef'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '2a85f599', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2a85f599-c628-4cff-bf05-087f83983aef-part16', 'scsi-SQEMU_QEMU_HARDDISK_2a85f599-c628-4cff-bf05-087f83983aef-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2a85f599-c628-4cff-bf05-087f83983aef-part14', 'scsi-SQEMU_QEMU_HARDDISK_2a85f599-c628-4cff-bf05-087f83983aef-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2a85f599-c628-4cff-bf05-087f83983aef-part15', 'scsi-SQEMU_QEMU_HARDDISK_2a85f599-c628-4cff-bf05-087f83983aef-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2a85f599-c628-4cff-bf05-087f83983aef-part1', 'scsi-SQEMU_QEMU_HARDDISK_2a85f599-c628-4cff-bf05-087f83983aef-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-25 05:09:38.836772 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-25 05:09:38.836784 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-25 05:09:38.836796 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:09:38.836818 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-25 05:09:38.984541 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-25 05:09:38.984667 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-25 05:09:38.984683 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-25-01-43-02-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-25 05:09:38.984720 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-25 05:09:38.984732 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-25 05:09:38.984744 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-25 05:09:38.984794 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_46c5fc1c-53d3-41e9-9a97-2ae0d3d9eeb2', 'scsi-SQEMU_QEMU_HARDDISK_46c5fc1c-53d3-41e9-9a97-2ae0d3d9eeb2'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '46c5fc1c', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_46c5fc1c-53d3-41e9-9a97-2ae0d3d9eeb2-part16', 'scsi-SQEMU_QEMU_HARDDISK_46c5fc1c-53d3-41e9-9a97-2ae0d3d9eeb2-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_46c5fc1c-53d3-41e9-9a97-2ae0d3d9eeb2-part14', 'scsi-SQEMU_QEMU_HARDDISK_46c5fc1c-53d3-41e9-9a97-2ae0d3d9eeb2-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_46c5fc1c-53d3-41e9-9a97-2ae0d3d9eeb2-part15', 'scsi-SQEMU_QEMU_HARDDISK_46c5fc1c-53d3-41e9-9a97-2ae0d3d9eeb2-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_46c5fc1c-53d3-41e9-9a97-2ae0d3d9eeb2-part1', 'scsi-SQEMU_QEMU_HARDDISK_46c5fc1c-53d3-41e9-9a97-2ae0d3d9eeb2-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-25 05:09:38.984818 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-25 05:09:38.984831 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:09:38.984846 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-25 05:09:38.984859 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-25 05:09:38.984915 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--2eb637af--fcba--56ed--b416--856a8f376a6e-osd--block--2eb637af--fcba--56ed--b416--856a8f376a6e', 'dm-uuid-LVM-I4brnFGe2wqMxfNLTgnFWAlpGdDDIQ6ufudluz5gbOp2W0Ru1BAN3Lof8sluy2g8'], 'uuids': ['a582f89c-a8ac-4a87-8a0b-f7c0ca2abef4'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'eaa5e6a9', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['fudluz-5gbO-p2W0-Ru1B-AN3L-of8s-luy2g8']}}, 'ansible_loop_var': 'item'})
2026-03-25 05:09:39.198799 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_99e65ea9-8a8c-4114-a95e-6d6b779e8981', 'scsi-SQEMU_QEMU_HARDDISK_99e65ea9-8a8c-4114-a95e-6d6b779e8981'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '99e65ea9', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-25 05:09:39.199000 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-I510NI-gVOy-fVrn-Rpok-wKnF-L9wv-pxblpK', 'scsi-0QEMU_QEMU_HARDDISK_e0cf0e31-edea-4833-ac86-8b3021cd24a1', 'scsi-SQEMU_QEMU_HARDDISK_e0cf0e31-edea-4833-ac86-8b3021cd24a1'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'e0cf0e31', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--a7f517e2--016b--5c10--ac21--20c48339115f-osd--block--a7f517e2--016b--5c10--ac21--20c48339115f']}}, 'ansible_loop_var': 'item'})
2026-03-25 05:09:39.199020 | orchestrator | skipping: [testbed-node-2]
2026-03-25 05:09:39.199034 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-25 05:09:39.199047 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-25 05:09:39.199076 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-25-01-42-59-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-25 05:09:39.199108 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-25 05:09:39.199127 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-63eFyg-4nkE-r3IX-y7pO-0UwA-AWeQ-8GeZyo', 'dm-uuid-CRYPT-LUKS2-10d41a0c964d43008e142cbf5f4d58c4-63eFyg-4nkE-r3IX-y7pO-0UwA-AWeQ-8GeZyo'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-25 05:09:39.199137 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-25 05:09:39.199148 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--a7f517e2--016b--5c10--ac21--20c48339115f-osd--block--a7f517e2--016b--5c10--ac21--20c48339115f', 'dm-uuid-LVM-ppL9nqq4Eft0DXjzsCdcW3axPqGhidIo63eFyg4nkEr3IXy7pO0UwAAWeQ8GeZyo'], 'uuids': ['10d41a0c-964d-4300-8e14-2cbf5f4d58c4'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'e0cf0e31', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['63eFyg-4nkE-r3IX-y7pO-0UwA-AWeQ-8GeZyo']}}, 'ansible_loop_var': 'item'})
2026-03-25 05:09:39.199165 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-ot6f5w-cwBB-rMe8-ml4g-P1Wb-D3d5-I1RZ9d', 'scsi-0QEMU_QEMU_HARDDISK_eaa5e6a9-2c24-4b33-854e-103871b2e9c6', 'scsi-SQEMU_QEMU_HARDDISK_eaa5e6a9-2c24-4b33-854e-103871b2e9c6'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'eaa5e6a9', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--2eb637af--fcba--56ed--b416--856a8f376a6e-osd--block--2eb637af--fcba--56ed--b416--856a8f376a6e']}}, 'ansible_loop_var': 'item'})
2026-03-25 05:09:39.199183 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-25 05:09:39.299048 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-25 05:09:39.299216 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5418d243-c22a-425d-8a7d-7c43bd549130', 'scsi-SQEMU_QEMU_HARDDISK_5418d243-c22a-425d-8a7d-7c43bd549130'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '5418d243', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5418d243-c22a-425d-8a7d-7c43bd549130-part16', 'scsi-SQEMU_QEMU_HARDDISK_5418d243-c22a-425d-8a7d-7c43bd549130-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5418d243-c22a-425d-8a7d-7c43bd549130-part14', 'scsi-SQEMU_QEMU_HARDDISK_5418d243-c22a-425d-8a7d-7c43bd549130-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5418d243-c22a-425d-8a7d-7c43bd549130-part15', 'scsi-SQEMU_QEMU_HARDDISK_5418d243-c22a-425d-8a7d-7c43bd549130-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5418d243-c22a-425d-8a7d-7c43bd549130-part1', 'scsi-SQEMU_QEMU_HARDDISK_5418d243-c22a-425d-8a7d-7c43bd549130-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-25 05:09:39.299278 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--fa1f2bca--96f4--5f59--9dac--c3efdd146138-osd--block--fa1f2bca--96f4--5f59--9dac--c3efdd146138', 'dm-uuid-LVM-qi80GQE6Tcg1H1Qaou1HQKIw0Y18K2MMiRtObCOmMljlX3NyraHv57elKkc4U5Oq'], 'uuids': ['1a1bfadf-e219-47e2-8705-0963963507ec'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '37f05188', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['iRtObC-OmMl-jlX3-Nyra-Hv57-elKk-c4U5Oq']}}, 'ansible_loop_var': 'item'})
2026-03-25 05:09:39.299320 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-25 05:09:39.299360 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor':
None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 05:09:39.299373 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3e1f7d9f-c106-4693-b0da-d762a5de4a11', 'scsi-SQEMU_QEMU_HARDDISK_3e1f7d9f-c106-4693-b0da-d762a5de4a11'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '3e1f7d9f', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 05:09:39.299386 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-fudluz-5gbO-p2W0-Ru1B-AN3L-of8s-luy2g8', 'dm-uuid-CRYPT-LUKS2-a582f89ca8ac4a878a0bf7c0ca2abef4-fudluz-5gbO-p2W0-Ru1B-AN3L-of8s-luy2g8'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 05:09:39.299405 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-CIqKvA-lt1d-4qQz-KNts-krwk-yQ0u-1PHslV', 'scsi-0QEMU_QEMU_HARDDISK_10d736b4-dcf8-42aa-aae6-a1381d72468f', 'scsi-SQEMU_QEMU_HARDDISK_10d736b4-dcf8-42aa-aae6-a1381d72468f'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '10d736b4', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--82366886--ea97--5dba--b5cd--187414e0593f-osd--block--82366886--ea97--5dba--b5cd--187414e0593f']}}, 'ansible_loop_var': 'item'})  2026-03-25 05:09:39.299427 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 05:09:39.404588 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--8ec576d5--4336--523a--896e--5358117b2269-osd--block--8ec576d5--4336--523a--896e--5358117b2269', 'dm-uuid-LVM-AjTepPC9YBwKeu38Jf1R7NGMBGxHD64b1bYlOV1jbrUHbIYS3hAMWkKb5QrnOpnI'], 'uuids': ['e67f6cc7-d6f8-4138-9e65-f811c858cad0'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'fd5367dc', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['1bYlOV-1jbr-UHbI-YS3h-AMWk-Kb5Q-rnOpnI']}}, 'ansible_loop_var': 'item'})  2026-03-25 05:09:39.404713 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 
'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_82545a3e-e213-461e-98f1-90cf18f03519', 'scsi-SQEMU_QEMU_HARDDISK_82545a3e-e213-461e-98f1-90cf18f03519'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '82545a3e', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 05:09:39.404729 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 05:09:39.404742 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 05:09:39.404776 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 
'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-to62r3-CyRH-TR4y-N8rR-DKBC-8SUV-NrvEkE', 'scsi-0QEMU_QEMU_HARDDISK_04cbe055-706b-4644-9107-d77d79be5a29', 'scsi-SQEMU_QEMU_HARDDISK_04cbe055-706b-4644-9107-d77d79be5a29'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '04cbe055', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--f303e98e--56ea--50bc--9e1c--3ccda4672060-osd--block--f303e98e--56ea--50bc--9e1c--3ccda4672060']}}, 'ansible_loop_var': 'item'})  2026-03-25 05:09:39.404849 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-25-01-43-06-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 05:09:39.404863 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 
'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 05:09:39.404875 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 05:09:39.404929 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-X0sqLU-d6id-Xl2r-npkf-AOrM-ye3X-xtdnqp', 'dm-uuid-CRYPT-LUKS2-d0a28742b6dc46aab152442a6244f51b-X0sqLU-d6id-Xl2r-npkf-AOrM-ye3X-xtdnqp'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 05:09:39.404950 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery 
| default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 05:09:39.404976 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 05:09:39.405014 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--82366886--ea97--5dba--b5cd--187414e0593f-osd--block--82366886--ea97--5dba--b5cd--187414e0593f', 'dm-uuid-LVM-1B6VDGPSmmjj7HLdTGtTln0UtIEd11ZxX0sqLUd6idXl2rnpkfAOrMye3Xxtdnqp'], 'uuids': ['d0a28742-b6dc-46aa-b152-442a6244f51b'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '10d736b4', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['X0sqLU-d6id-Xl2r-npkf-AOrM-ye3X-xtdnqp']}}, 
'ansible_loop_var': 'item'})  2026-03-25 05:09:39.504556 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-d5kG3K-9osj-2aIh-xjKb-72Hm-d5Wn-f2zH7s', 'scsi-0QEMU_QEMU_HARDDISK_37f05188-2a00-44e2-a0b8-7549f9da5347', 'scsi-SQEMU_QEMU_HARDDISK_37f05188-2a00-44e2-a0b8-7549f9da5347'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '37f05188', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--fa1f2bca--96f4--5f59--9dac--c3efdd146138-osd--block--fa1f2bca--96f4--5f59--9dac--c3efdd146138']}}, 'ansible_loop_var': 'item'})  2026-03-25 05:09:39.504691 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-25-01-43-03-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 05:09:39.504707 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was 
False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 05:09:39.504741 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 05:09:39.504802 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6cb51c54-ae34-41ee-aa7a-55f1cdeeb529', 'scsi-SQEMU_QEMU_HARDDISK_6cb51c54-ae34-41ee-aa7a-55f1cdeeb529'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '6cb51c54', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6cb51c54-ae34-41ee-aa7a-55f1cdeeb529-part16', 'scsi-SQEMU_QEMU_HARDDISK_6cb51c54-ae34-41ee-aa7a-55f1cdeeb529-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 
'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6cb51c54-ae34-41ee-aa7a-55f1cdeeb529-part14', 'scsi-SQEMU_QEMU_HARDDISK_6cb51c54-ae34-41ee-aa7a-55f1cdeeb529-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6cb51c54-ae34-41ee-aa7a-55f1cdeeb529-part15', 'scsi-SQEMU_QEMU_HARDDISK_6cb51c54-ae34-41ee-aa7a-55f1cdeeb529-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6cb51c54-ae34-41ee-aa7a-55f1cdeeb529-part1', 'scsi-SQEMU_QEMU_HARDDISK_6cb51c54-ae34-41ee-aa7a-55f1cdeeb529-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 05:09:39.504817 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 05:09:39.504829 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-UiFeyH-JNag-Huqx-rmYC-APg3-v2oc-gFP63X', 'dm-uuid-CRYPT-LUKS2-306c9f3fcb174ac6ad8e271da2bf30e2-UiFeyH-JNag-Huqx-rmYC-APg3-v2oc-gFP63X'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 05:09:39.504854 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 
'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 05:09:39.504865 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 05:09:39.504916 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-iRtObC-OmMl-jlX3-Nyra-Hv57-elKk-c4U5Oq', 'dm-uuid-CRYPT-LUKS2-1a1bfadfe21947e287050963963507ec-iRtObC-OmMl-jlX3-Nyra-Hv57-elKk-c4U5Oq'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 05:09:39.634414 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--f303e98e--56ea--50bc--9e1c--3ccda4672060-osd--block--f303e98e--56ea--50bc--9e1c--3ccda4672060', 
'dm-uuid-LVM-UU9fet4LjPs1QLROYR3DS61lWfbcudTJUiFeyHJNagHuqxrmYCAPg3v2ocgFP63X'], 'uuids': ['306c9f3f-cb17-4ac6-ad8e-271da2bf30e2'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '04cbe055', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['UiFeyH-JNag-Huqx-rmYC-APg3-v2oc-gFP63X']}}, 'ansible_loop_var': 'item'})  2026-03-25 05:09:39.634545 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-FUT1Bq-riIG-e3wV-m2Zc-DHH8-HB53-ximoP3', 'scsi-0QEMU_QEMU_HARDDISK_fd5367dc-993e-4d7d-b2a6-757e2a17e9b7', 'scsi-SQEMU_QEMU_HARDDISK_fd5367dc-993e-4d7d-b2a6-757e2a17e9b7'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'fd5367dc', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--8ec576d5--4336--523a--896e--5358117b2269-osd--block--8ec576d5--4336--523a--896e--5358117b2269']}}, 'ansible_loop_var': 'item'})  2026-03-25 05:09:39.634596 | orchestrator | skipping: [testbed-node-3] 2026-03-25 05:09:39.634629 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 05:09:39.634668 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0ceb4511-88da-4cb0-8dd1-61d4a7cc2ad2', 'scsi-SQEMU_QEMU_HARDDISK_0ceb4511-88da-4cb0-8dd1-61d4a7cc2ad2'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '0ceb4511', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0ceb4511-88da-4cb0-8dd1-61d4a7cc2ad2-part16', 'scsi-SQEMU_QEMU_HARDDISK_0ceb4511-88da-4cb0-8dd1-61d4a7cc2ad2-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_0ceb4511-88da-4cb0-8dd1-61d4a7cc2ad2-part14', 'scsi-SQEMU_QEMU_HARDDISK_0ceb4511-88da-4cb0-8dd1-61d4a7cc2ad2-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0ceb4511-88da-4cb0-8dd1-61d4a7cc2ad2-part15', 'scsi-SQEMU_QEMU_HARDDISK_0ceb4511-88da-4cb0-8dd1-61d4a7cc2ad2-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0ceb4511-88da-4cb0-8dd1-61d4a7cc2ad2-part1', 'scsi-SQEMU_QEMU_HARDDISK_0ceb4511-88da-4cb0-8dd1-61d4a7cc2ad2-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-25 05:09:39.634682 | orchestrator | skipping: [testbed-node-4]
2026-03-25 05:09:39.634694 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-25 05:09:39.634706 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-25 05:09:39.634731 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-1bYlOV-1jbr-UHbI-YS3h-AMWk-Kb5Q-rnOpnI', 'dm-uuid-CRYPT-LUKS2-e67f6cc7d6f841389e65f811c858cad0-1bYlOV-1jbr-UHbI-YS3h-AMWk-Kb5Q-rnOpnI'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-25 05:09:39.634743 | orchestrator | skipping: [testbed-node-5]
2026-03-25 05:09:39.634755 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-25 05:09:39.634768 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-25 05:09:39.634786 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-25 05:09:52.338520 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-25-01-43-28-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1060', 'sectorsize': '2048', 'size': '530.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-25 05:09:52.338661 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-25 05:09:52.338743 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-25 05:09:52.338767 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-25 05:09:52.338820 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_64e9f395-f2d8-41f9-9a3f-57dc675ebeec', 'scsi-SQEMU_QEMU_HARDDISK_64e9f395-f2d8-41f9-9a3f-57dc675ebeec'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '64e9f395', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_64e9f395-f2d8-41f9-9a3f-57dc675ebeec-part16', 'scsi-SQEMU_QEMU_HARDDISK_64e9f395-f2d8-41f9-9a3f-57dc675ebeec-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_64e9f395-f2d8-41f9-9a3f-57dc675ebeec-part14', 'scsi-SQEMU_QEMU_HARDDISK_64e9f395-f2d8-41f9-9a3f-57dc675ebeec-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_64e9f395-f2d8-41f9-9a3f-57dc675ebeec-part15', 'scsi-SQEMU_QEMU_HARDDISK_64e9f395-f2d8-41f9-9a3f-57dc675ebeec-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_64e9f395-f2d8-41f9-9a3f-57dc675ebeec-part1', 'scsi-SQEMU_QEMU_HARDDISK_64e9f395-f2d8-41f9-9a3f-57dc675ebeec-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc.
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-25 05:09:52.338857 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-25 05:09:52.338941 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-25 05:09:52.338958 | orchestrator | skipping: [testbed-manager]
2026-03-25 05:09:52.338974 | orchestrator |
2026-03-25 05:09:52.338988 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-03-25 05:09:52.339002 | orchestrator | Wednesday 25 March 2026 05:09:40 +0000 (0:00:02.554) 0:01:57.892 *******
2026-03-25 05:09:52.339014 | orchestrator | ok: [testbed-node-0]
2026-03-25 05:09:52.339028 | orchestrator | ok: [testbed-node-1]
2026-03-25 05:09:52.339041 | orchestrator | ok: [testbed-node-2]
2026-03-25 05:09:52.339053 | orchestrator | ok: [testbed-node-3]
2026-03-25 05:09:52.339065 | orchestrator | ok: [testbed-node-4]
2026-03-25 05:09:52.339077 | orchestrator | ok: [testbed-node-5]
2026-03-25 05:09:52.339088 | orchestrator | ok: [testbed-manager]
2026-03-25 05:09:52.339101 | orchestrator |
2026-03-25 05:09:52.339113 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-03-25 05:09:52.339126 | orchestrator | Wednesday 25 March 2026 05:09:43 +0000 (0:00:02.502) 0:02:00.395 *******
2026-03-25 05:09:52.339138 | orchestrator | ok: [testbed-node-0]
2026-03-25 05:09:52.339151 | orchestrator | ok: [testbed-node-1]
2026-03-25 05:09:52.339162 | orchestrator | ok: [testbed-node-2]
2026-03-25 05:09:52.339174 | orchestrator | ok: [testbed-node-3]
2026-03-25 05:09:52.339186 | orchestrator | ok: [testbed-node-4]
2026-03-25 05:09:52.339198 | orchestrator | ok: [testbed-node-5]
2026-03-25 05:09:52.339210 | orchestrator | ok: [testbed-manager]
2026-03-25 05:09:52.339222 | orchestrator |
2026-03-25 05:09:52.339235 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-03-25 05:09:52.339247 | orchestrator | Wednesday 25 March 2026 05:09:45 +0000 (0:00:01.932) 0:02:02.328 *******
2026-03-25 05:09:52.339259 | orchestrator | ok: [testbed-node-0]
2026-03-25 05:09:52.339270 | orchestrator | ok: [testbed-node-1]
2026-03-25 05:09:52.339283 | orchestrator | ok: [testbed-node-2]
2026-03-25 05:09:52.339330 | orchestrator | ok: [testbed-node-3]
2026-03-25 05:09:52.339342 | orchestrator | skipping: [testbed-manager]
2026-03-25 05:09:52.339352 | orchestrator | ok: [testbed-node-4]
2026-03-25 05:09:52.339363 | orchestrator | ok: [testbed-node-5]
2026-03-25 05:09:52.339374 | orchestrator |
2026-03-25 05:09:52.339384 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-03-25 05:09:52.339395 | orchestrator | Wednesday 25 March 2026 05:09:47 +0000 (0:00:02.495) 0:02:04.824 *******
2026-03-25 05:09:52.339406 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:09:52.339416 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:09:52.339427 | orchestrator | skipping: [testbed-node-2]
2026-03-25 05:09:52.339438 | orchestrator | skipping: [testbed-node-3]
2026-03-25 05:09:52.339448 | orchestrator | skipping: [testbed-node-4]
2026-03-25 05:09:52.339459 | orchestrator | skipping: [testbed-node-5]
2026-03-25 05:09:52.339478 | orchestrator | skipping: [testbed-manager]
2026-03-25 05:09:52.339489 | orchestrator |
2026-03-25 05:09:52.339499 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-03-25 05:09:52.339510 | orchestrator | Wednesday 25 March 2026 05:09:49 +0000 (0:00:01.904) 0:02:06.728 *******
2026-03-25 05:09:52.339521 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:09:52.339531 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:09:52.339545 | orchestrator | skipping: [testbed-node-2]
2026-03-25 05:09:52.339564 | orchestrator | skipping: [testbed-node-3]
2026-03-25 05:09:52.339593 | orchestrator | skipping: [testbed-node-4]
2026-03-25 05:10:21.080740 | orchestrator | skipping: [testbed-node-5]
2026-03-25 05:10:21.080847 | orchestrator | ok: [testbed-manager -> testbed-node-2(192.168.16.12)]
2026-03-25 05:10:21.080888 | orchestrator |
2026-03-25 05:10:21.080899 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-03-25 05:10:21.080909 | orchestrator | Wednesday 25 March 2026 05:09:52 +0000 (0:00:02.604) 0:02:09.333 *******
2026-03-25 05:10:21.080918 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:10:21.080927 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:10:21.080936 | orchestrator | skipping: [testbed-node-2]
2026-03-25 05:10:21.080945 | orchestrator | skipping: [testbed-node-3]
2026-03-25 05:10:21.080953 | orchestrator | skipping: [testbed-node-4]
2026-03-25 05:10:21.080962 | orchestrator | skipping: [testbed-node-5]
2026-03-25 05:10:21.080971 | orchestrator | skipping: [testbed-manager]
2026-03-25 05:10:21.080979 | orchestrator |
2026-03-25 05:10:21.080988 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-03-25 05:10:21.080997 | orchestrator | Wednesday 25 March 2026 05:09:54 +0000 (0:00:01.959) 0:02:11.292 *******
2026-03-25 05:10:21.081006 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-03-25 05:10:21.081015 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0)
2026-03-25 05:10:21.081024 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-03-25 05:10:21.081032 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0)
2026-03-25 05:10:21.081041 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-03-25 05:10:21.081049 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-03-25 05:10:21.081058 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1)
2026-03-25 05:10:21.081066 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2)
2026-03-25 05:10:21.081075 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-03-25 05:10:21.081083 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2026-03-25 05:10:21.081092 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2026-03-25 05:10:21.081100 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-03-25 05:10:21.081109 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-03-25 05:10:21.081117 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2026-03-25 05:10:21.081126 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2026-03-25 05:10:21.081150 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2026-03-25 05:10:21.081159 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2026-03-25 05:10:21.081168 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2026-03-25 05:10:21.081177 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0)
2026-03-25 05:10:21.081186 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1)
2026-03-25 05:10:21.081194 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2)
2026-03-25 05:10:21.081203 | orchestrator |
2026-03-25 05:10:21.081212 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-03-25 05:10:21.081221 | orchestrator | Wednesday 25 March 2026 05:09:57 +0000 (0:00:03.337) 0:02:14.630 *******
2026-03-25 05:10:21.081229 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-03-25 05:10:21.081238 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-03-25 05:10:21.081247 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-03-25 05:10:21.081277 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:10:21.081288 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-03-25 05:10:21.081297 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-03-25 05:10:21.081307 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-03-25 05:10:21.081317 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:10:21.081327 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-03-25 05:10:21.081340 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-03-25 05:10:21.081354 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-03-25 05:10:21.081369 | orchestrator | skipping: [testbed-node-2]
2026-03-25 05:10:21.081382 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-03-25 05:10:21.081396 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-03-25 05:10:21.081410 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-03-25 05:10:21.081424 | orchestrator | skipping: [testbed-node-3]
2026-03-25 05:10:21.081439 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-03-25 05:10:21.081453 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-03-25 05:10:21.081467 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-03-25 05:10:21.081482 | orchestrator | skipping: [testbed-node-4]
2026-03-25 05:10:21.081496 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-03-25 05:10:21.081510 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-03-25 05:10:21.081525 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-03-25 05:10:21.081540 | orchestrator | skipping: [testbed-node-5]
2026-03-25 05:10:21.081550 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2026-03-25 05:10:21.081561 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2026-03-25 05:10:21.081570 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2026-03-25 05:10:21.081580 | orchestrator | skipping: [testbed-manager]
2026-03-25 05:10:21.081590 | orchestrator |
2026-03-25 05:10:21.081601 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-03-25 05:10:21.081610 | orchestrator | Wednesday 25 March 2026 05:09:59 +0000 (0:00:02.267) 0:02:16.897 *******
2026-03-25 05:10:21.081621 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:10:21.081631 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:10:21.081641 | orchestrator | skipping: [testbed-node-2]
2026-03-25 05:10:21.081650 | orchestrator | skipping: [testbed-manager]
2026-03-25 05:10:21.081675 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-25 05:10:21.081685 | orchestrator |
2026-03-25 05:10:21.081693 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-03-25 05:10:21.081703 | orchestrator | Wednesday 25 March 2026 05:10:02 +0000 (0:00:02.228) 0:02:19.126 *******
2026-03-25 05:10:21.081712 | orchestrator | skipping: [testbed-node-3]
2026-03-25 05:10:21.081721 | orchestrator | skipping: [testbed-node-4]
2026-03-25 05:10:21.081729 | orchestrator | skipping: [testbed-node-5]
2026-03-25 05:10:21.081738 | orchestrator |
2026-03-25 05:10:21.081746 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-03-25 05:10:21.081755 | orchestrator | Wednesday 25 March 2026 05:10:03 +0000 (0:00:01.699) 0:02:20.825 *******
2026-03-25 05:10:21.081763 | orchestrator | skipping: [testbed-node-3]
2026-03-25 05:10:21.081772 | orchestrator | skipping: [testbed-node-4]
2026-03-25 05:10:21.081781 | orchestrator | skipping: [testbed-node-5]
2026-03-25 05:10:21.081789 | orchestrator |
2026-03-25 05:10:21.081798 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-03-25 05:10:21.081806 | orchestrator | Wednesday 25 March 2026 05:10:05 +0000 (0:00:01.371) 0:02:22.197 *******
2026-03-25 05:10:21.081815 | orchestrator | skipping: [testbed-node-3]
2026-03-25 05:10:21.081832 | orchestrator | skipping: [testbed-node-4]
2026-03-25 05:10:21.081841 | orchestrator | skipping: [testbed-node-5]
2026-03-25 05:10:21.081850 | orchestrator |
2026-03-25 05:10:21.081900 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-03-25 05:10:21.081909 | orchestrator | Wednesday 25 March 2026 05:10:06 +0000 (0:00:01.437) 0:02:23.634 *******
2026-03-25 05:10:21.081918 | orchestrator | ok: [testbed-node-3]
2026-03-25 05:10:21.081926 | orchestrator | ok: [testbed-node-4]
2026-03-25 05:10:21.081935 | orchestrator | ok: [testbed-node-5]
2026-03-25 05:10:21.081944 | orchestrator |
2026-03-25 05:10:21.081952 | orchestrator | TASK [ceph-facts : Set_fact _interface]
****************************************
2026-03-25 05:10:21.081961 | orchestrator | Wednesday 25 March 2026 05:10:08 +0000 (0:00:01.499) 0:02:25.133 *******
2026-03-25 05:10:21.081970 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-25 05:10:21.081979 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-25 05:10:21.081987 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-25 05:10:21.081996 | orchestrator | skipping: [testbed-node-3]
2026-03-25 05:10:21.082004 | orchestrator |
2026-03-25 05:10:21.082074 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-03-25 05:10:21.082086 | orchestrator | Wednesday 25 March 2026 05:10:09 +0000 (0:00:01.620) 0:02:26.754 *******
2026-03-25 05:10:21.082095 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-25 05:10:21.082104 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-25 05:10:21.082112 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-25 05:10:21.082121 | orchestrator | skipping: [testbed-node-3]
2026-03-25 05:10:21.082130 | orchestrator |
2026-03-25 05:10:21.082138 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-03-25 05:10:21.082147 | orchestrator | Wednesday 25 March 2026 05:10:11 +0000 (0:00:01.529) 0:02:28.284 *******
2026-03-25 05:10:21.082161 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-25 05:10:21.082209 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-25 05:10:21.082224 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-25 05:10:21.082240 | orchestrator | skipping: [testbed-node-3]
2026-03-25 05:10:21.082257 | orchestrator |
2026-03-25 05:10:21.082268 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-03-25 05:10:21.082277 | orchestrator | Wednesday 25 March 2026 05:10:12 +0000 (0:00:01.617) 0:02:29.902 *******
2026-03-25 05:10:21.082285 | orchestrator | ok: [testbed-node-3]
2026-03-25 05:10:21.082294 | orchestrator | ok: [testbed-node-4]
2026-03-25 05:10:21.082302 | orchestrator | ok: [testbed-node-5]
2026-03-25 05:10:21.082311 | orchestrator |
2026-03-25 05:10:21.082319 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-03-25 05:10:21.082328 | orchestrator | Wednesday 25 March 2026 05:10:14 +0000 (0:00:01.379) 0:02:31.281 *******
2026-03-25 05:10:21.082336 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-03-25 05:10:21.082346 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-03-25 05:10:21.082360 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-03-25 05:10:21.082374 | orchestrator |
2026-03-25 05:10:21.082388 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-03-25 05:10:21.082401 | orchestrator | Wednesday 25 March 2026 05:10:15 +0000 (0:00:01.627) 0:02:32.909 *******
2026-03-25 05:10:21.082415 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-03-25 05:10:21.082430 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-25 05:10:21.082447 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-25 05:10:21.082456 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-03-25 05:10:21.082465 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-03-25 05:10:21.082491 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-03-25 05:10:21.082506 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-03-25 05:10:21.082521 | orchestrator |
2026-03-25 05:10:21.082535 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-03-25 05:10:21.082551 | orchestrator | Wednesday 25 March 2026 05:10:17 +0000 (0:00:02.042) 0:02:34.952 *******
2026-03-25 05:10:21.082565 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-03-25 05:10:21.082577 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-25 05:10:21.082588 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-25 05:10:21.082614 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-03-25 05:11:12.029993 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-03-25 05:11:12.030158 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-03-25 05:11:12.030175 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-03-25 05:11:12.030187 | orchestrator |
2026-03-25 05:11:12.030198 | orchestrator | TASK [ceph-infra : Update cache for Debian based OSs] **************************
2026-03-25 05:11:12.030209 | orchestrator | Wednesday 25 March 2026 05:10:21 +0000 (0:00:03.119) 0:02:38.071 *******
2026-03-25 05:11:12.030219 | orchestrator | changed: [testbed-node-3]
2026-03-25 05:11:12.030231 | orchestrator | changed: [testbed-node-4]
2026-03-25 05:11:12.030241 | orchestrator | changed: [testbed-node-5]
2026-03-25 05:11:12.030251 | orchestrator | changed: [testbed-manager]
2026-03-25 05:11:12.030260 | orchestrator | changed: [testbed-node-1]
2026-03-25 05:11:12.030270 | orchestrator | changed: [testbed-node-0]
2026-03-25 05:11:12.030303 | orchestrator | changed: [testbed-node-2]
2026-03-25 05:11:12.030313 | orchestrator |
2026-03-25 05:11:12.030323 | orchestrator | TASK [ceph-infra : Include_tasks configure_firewall.yml] ***********************
2026-03-25 05:11:12.030334 | orchestrator | Wednesday 25 March 2026 05:10:32 +0000 (0:00:11.094) 0:02:49.165 *******
2026-03-25 05:11:12.030343 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:11:12.030353 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:11:12.030363 | orchestrator | skipping: [testbed-node-2]
2026-03-25 05:11:12.030373 | orchestrator | skipping: [testbed-node-3]
2026-03-25 05:11:12.030383 | orchestrator | skipping: [testbed-node-4]
2026-03-25 05:11:12.030393 | orchestrator | skipping: [testbed-node-5]
2026-03-25 05:11:12.030403 | orchestrator | skipping: [testbed-manager]
2026-03-25 05:11:12.030413 | orchestrator |
2026-03-25 05:11:12.030423 | orchestrator | TASK [ceph-infra : Include_tasks setup_ntp.yml] ********************************
2026-03-25 05:11:12.030432 | orchestrator | Wednesday 25 March 2026 05:10:34 +0000 (0:00:02.255) 0:02:51.421 *******
2026-03-25 05:11:12.030442 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:11:12.030452 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:11:12.030461 | orchestrator | skipping: [testbed-node-2]
2026-03-25 05:11:12.030471 | orchestrator | skipping: [testbed-node-3]
2026-03-25 05:11:12.030484 | orchestrator | skipping: [testbed-node-4]
2026-03-25 05:11:12.030500 | orchestrator | skipping: [testbed-node-5]
2026-03-25 05:11:12.030534 | orchestrator | skipping: [testbed-manager]
2026-03-25 05:11:12.030552 | orchestrator |
2026-03-25 05:11:12.030569 | orchestrator | TASK [ceph-infra : Add logrotate configuration] ********************************
2026-03-25 05:11:12.030587 | orchestrator | Wednesday 25 March 2026 05:10:36 +0000 (0:00:01.993) 0:02:53.414 *******
2026-03-25 05:11:12.030603 | orchestrator | skipping: [testbed-manager]
2026-03-25 05:11:12.030620 | orchestrator | changed: [testbed-node-2]
2026-03-25 05:11:12.030638 | orchestrator | changed: [testbed-node-1]
2026-03-25 05:11:12.030655 | orchestrator | changed: [testbed-node-0]
2026-03-25 05:11:12.030674 | orchestrator | changed: [testbed-node-3]
2026-03-25 05:11:12.030689 | orchestrator | changed: [testbed-node-4]
2026-03-25 05:11:12.030733 | orchestrator | changed: [testbed-node-5]
2026-03-25 05:11:12.030751 | orchestrator |
2026-03-25 05:11:12.030761 | orchestrator | TASK [ceph-validate : Include check_system.yml] ********************************
2026-03-25 05:11:12.030771 | orchestrator | Wednesday 25 March 2026 05:10:39 +0000 (0:00:03.072) 0:02:56.487 *******
2026-03-25 05:11:12.030782 | orchestrator | included: /ansible/roles/ceph-validate/tasks/check_system.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager
2026-03-25 05:11:12.030793 | orchestrator |
2026-03-25 05:11:12.030802 | orchestrator | TASK [ceph-validate : Fail on unsupported ansible version (1.X)] ***************
2026-03-25 05:11:12.030811 | orchestrator | Wednesday 25 March 2026 05:10:42 +0000 (0:00:02.915) 0:02:59.403 *******
2026-03-25 05:11:12.030821 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:11:12.030830 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:11:12.030873 | orchestrator | skipping: [testbed-node-2]
2026-03-25 05:11:12.030883 | orchestrator | skipping: [testbed-node-3]
2026-03-25 05:11:12.030893 | orchestrator | skipping: [testbed-node-4]
2026-03-25 05:11:12.030902 | orchestrator | skipping: [testbed-node-5]
2026-03-25 05:11:12.030911 | orchestrator | skipping: [testbed-manager]
2026-03-25 05:11:12.030921 | orchestrator |
2026-03-25 05:11:12.030931 | orchestrator | TASK [ceph-validate : Fail on unsupported system] ******************************
2026-03-25 05:11:12.030940 | orchestrator | Wednesday 25 March 2026 05:10:44 +0000 (0:00:01.960) 0:03:01.364 *******
2026-03-25 05:11:12.030949 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:11:12.030959 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:11:12.030968 | orchestrator | skipping: [testbed-node-2]
2026-03-25 05:11:12.030978 | orchestrator | skipping: [testbed-node-3]
2026-03-25 05:11:12.030987 | orchestrator | skipping: [testbed-node-4]
2026-03-25 05:11:12.030996 | orchestrator | skipping: [testbed-node-5]
2026-03-25 05:11:12.031005 | orchestrator | skipping: [testbed-manager]
2026-03-25 05:11:12.031015 | orchestrator |
2026-03-25 05:11:12.031024 | orchestrator | TASK [ceph-validate : Fail on unsupported architecture] ************************
2026-03-25 05:11:12.031034 | orchestrator | Wednesday 25 March 2026 05:10:46 +0000 (0:00:02.533) 0:03:03.897 *******
2026-03-25 05:11:12.031044 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:11:12.031059 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:11:12.031076 | orchestrator | skipping: [testbed-node-2]
2026-03-25 05:11:12.031093 | orchestrator | skipping: [testbed-node-3]
2026-03-25 05:11:12.031110 | orchestrator | skipping: [testbed-node-4]
2026-03-25 05:11:12.031126 | orchestrator | skipping: [testbed-node-5]
2026-03-25 05:11:12.031143 | orchestrator | skipping: [testbed-manager]
2026-03-25 05:11:12.031159 | orchestrator |
2026-03-25 05:11:12.031176 | orchestrator | TASK [ceph-validate : Fail on unsupported distribution] ************************
2026-03-25 05:11:12.031193 | orchestrator | Wednesday 25 March 2026 05:10:49 +0000 (0:00:02.126) 0:03:06.024 *******
2026-03-25 05:11:12.031209 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:11:12.031225 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:11:12.031242 | orchestrator | skipping: [testbed-node-2]
2026-03-25 05:11:12.031258 | orchestrator | skipping: [testbed-node-3]
2026-03-25 05:11:12.031275 | orchestrator | skipping: [testbed-node-4]
2026-03-25 05:11:12.031285 | orchestrator | skipping: [testbed-node-5]
2026-03-25 05:11:12.031294 | orchestrator | skipping: [testbed-manager]
2026-03-25 05:11:12.031304 | orchestrator |
2026-03-25 05:11:12.031339 | orchestrator | TASK [ceph-validate : Fail on unsupported CentOS release] **********************
2026-03-25 05:11:12.031357 | orchestrator | Wednesday 25 March 2026 05:10:51 +0000 (0:00:02.427) 0:03:08.451 *******
2026-03-25 05:11:12.031374 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:11:12.031391 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:11:12.031408 | orchestrator | skipping: [testbed-node-2]
2026-03-25 05:11:12.031425 | orchestrator | skipping: [testbed-node-3]
2026-03-25 05:11:12.031442 | orchestrator | skipping: [testbed-node-4]
2026-03-25 05:11:12.031458 | orchestrator | skipping: [testbed-node-5]
2026-03-25 05:11:12.031475 | orchestrator | skipping: [testbed-manager]
2026-03-25 05:11:12.031505 | orchestrator |
2026-03-25 05:11:12.031523 | orchestrator | TASK [ceph-validate : Fail on unsupported distribution for ubuntu cloud archive] ***
2026-03-25 05:11:12.031540 | orchestrator | Wednesday 25 March 2026 05:10:53 +0000 (0:00:02.178) 0:03:10.629 *******
2026-03-25 05:11:12.031556 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:11:12.031573 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:11:12.031590 | orchestrator | skipping: [testbed-node-2]
2026-03-25 05:11:12.031606 | orchestrator | skipping: [testbed-node-3]
2026-03-25 05:11:12.031623 | orchestrator | skipping: [testbed-node-4]
2026-03-25 05:11:12.031640 | orchestrator | skipping: [testbed-node-5]
2026-03-25 05:11:12.031656 | orchestrator | skipping: [testbed-manager]
2026-03-25 05:11:12.031673 | orchestrator |
2026-03-25 05:11:12.031690 | orchestrator | TASK [ceph-validate : Fail on unsupported SUSE/openSUSE distribution (only 15.x supported)] ***
2026-03-25 05:11:12.031707 | orchestrator | Wednesday 25 March 2026 05:10:56 +0000 (0:00:02.544) 0:03:13.174 *******
2026-03-25 05:11:12.031723 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:11:12.031740 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:11:12.031756 | orchestrator | skipping: [testbed-node-2]
2026-03-25 05:11:12.031771 | orchestrator |
skipping: [testbed-node-3] 2026-03-25 05:11:12.031783 | orchestrator | skipping: [testbed-node-4] 2026-03-25 05:11:12.031793 | orchestrator | skipping: [testbed-node-5] 2026-03-25 05:11:12.031802 | orchestrator | skipping: [testbed-manager] 2026-03-25 05:11:12.031812 | orchestrator | 2026-03-25 05:11:12.031821 | orchestrator | TASK [ceph-validate : Fail if systemd is not present] ************************** 2026-03-25 05:11:12.031831 | orchestrator | Wednesday 25 March 2026 05:10:58 +0000 (0:00:02.120) 0:03:15.294 ******* 2026-03-25 05:11:12.031871 | orchestrator | skipping: [testbed-node-0] 2026-03-25 05:11:12.031880 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:11:12.031898 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:11:12.031907 | orchestrator | skipping: [testbed-node-3] 2026-03-25 05:11:12.031917 | orchestrator | skipping: [testbed-node-4] 2026-03-25 05:11:12.031926 | orchestrator | skipping: [testbed-node-5] 2026-03-25 05:11:12.031935 | orchestrator | skipping: [testbed-manager] 2026-03-25 05:11:12.031945 | orchestrator | 2026-03-25 05:11:12.031954 | orchestrator | TASK [ceph-validate : Validate repository variables in non-containerized scenario] *** 2026-03-25 05:11:12.031964 | orchestrator | Wednesday 25 March 2026 05:11:01 +0000 (0:00:02.788) 0:03:18.083 ******* 2026-03-25 05:11:12.031973 | orchestrator | skipping: [testbed-node-0] 2026-03-25 05:11:12.031983 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:11:12.031992 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:11:12.032001 | orchestrator | skipping: [testbed-node-3] 2026-03-25 05:11:12.032011 | orchestrator | skipping: [testbed-node-4] 2026-03-25 05:11:12.032020 | orchestrator | skipping: [testbed-node-5] 2026-03-25 05:11:12.032029 | orchestrator | skipping: [testbed-manager] 2026-03-25 05:11:12.032039 | orchestrator | 2026-03-25 05:11:12.032048 | orchestrator | TASK [ceph-validate : Validate osd_objectstore] ******************************** 2026-03-25 
05:11:12.032058 | orchestrator | Wednesday 25 March 2026 05:11:03 +0000 (0:00:02.255) 0:03:20.339 ******* 2026-03-25 05:11:12.032067 | orchestrator | skipping: [testbed-node-0] 2026-03-25 05:11:12.032076 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:11:12.032085 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:11:12.032095 | orchestrator | skipping: [testbed-node-3] 2026-03-25 05:11:12.032104 | orchestrator | skipping: [testbed-node-4] 2026-03-25 05:11:12.032113 | orchestrator | skipping: [testbed-node-5] 2026-03-25 05:11:12.032122 | orchestrator | skipping: [testbed-manager] 2026-03-25 05:11:12.032132 | orchestrator | 2026-03-25 05:11:12.032141 | orchestrator | TASK [ceph-validate : Validate radosgw network configuration] ****************** 2026-03-25 05:11:12.032151 | orchestrator | Wednesday 25 March 2026 05:11:06 +0000 (0:00:02.918) 0:03:23.257 ******* 2026-03-25 05:11:12.032163 | orchestrator | skipping: [testbed-node-0] 2026-03-25 05:11:12.032180 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:11:12.032196 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:11:12.032226 | orchestrator | skipping: [testbed-node-3] 2026-03-25 05:11:12.032243 | orchestrator | skipping: [testbed-node-4] 2026-03-25 05:11:12.032260 | orchestrator | skipping: [testbed-node-5] 2026-03-25 05:11:12.032276 | orchestrator | skipping: [testbed-manager] 2026-03-25 05:11:12.032292 | orchestrator | 2026-03-25 05:11:12.032307 | orchestrator | TASK [ceph-validate : Validate lvm osd scenario] ******************************* 2026-03-25 05:11:12.032317 | orchestrator | Wednesday 25 March 2026 05:11:08 +0000 (0:00:02.644) 0:03:25.901 ******* 2026-03-25 05:11:12.032326 | orchestrator | skipping: [testbed-node-0] 2026-03-25 05:11:12.032336 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:11:12.032345 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:11:12.032355 | orchestrator | skipping: [testbed-node-3] 2026-03-25 05:11:12.032364 | orchestrator 
| skipping: [testbed-node-4] 2026-03-25 05:11:12.032373 | orchestrator | skipping: [testbed-node-5] 2026-03-25 05:11:12.032383 | orchestrator | skipping: [testbed-manager] 2026-03-25 05:11:12.032392 | orchestrator | 2026-03-25 05:11:12.032401 | orchestrator | TASK [ceph-validate : Validate bluestore lvm osd scenario] ********************* 2026-03-25 05:11:12.032411 | orchestrator | Wednesday 25 March 2026 05:11:11 +0000 (0:00:02.195) 0:03:28.096 ******* 2026-03-25 05:11:12.032421 | orchestrator | skipping: [testbed-node-0] 2026-03-25 05:11:12.032430 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:11:12.032439 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:11:12.032450 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a7f517e2-016b-5c10-ac21-20c48339115f', 'data_vg': 'ceph-a7f517e2-016b-5c10-ac21-20c48339115f'})  2026-03-25 05:11:12.032462 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2eb637af-fcba-56ed-b416-856a8f376a6e', 'data_vg': 'ceph-2eb637af-fcba-56ed-b416-856a8f376a6e'})  2026-03-25 05:11:12.032471 | orchestrator | skipping: [testbed-node-3] 2026-03-25 05:11:12.032490 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-82366886-ea97-5dba-b5cd-187414e0593f', 'data_vg': 'ceph-82366886-ea97-5dba-b5cd-187414e0593f'})  2026-03-25 05:11:38.031176 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-fa1f2bca-96f4-5f59-9dac-c3efdd146138', 'data_vg': 'ceph-fa1f2bca-96f4-5f59-9dac-c3efdd146138'})  2026-03-25 05:11:38.031298 | orchestrator | skipping: [testbed-node-4] 2026-03-25 05:11:38.031316 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f303e98e-56ea-50bc-9e1c-3ccda4672060', 'data_vg': 'ceph-f303e98e-56ea-50bc-9e1c-3ccda4672060'})  2026-03-25 05:11:38.031331 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8ec576d5-4336-523a-896e-5358117b2269', 'data_vg': 'ceph-8ec576d5-4336-523a-896e-5358117b2269'})  
2026-03-25 05:11:38.031345 | orchestrator | skipping: [testbed-node-5] 2026-03-25 05:11:38.031359 | orchestrator | skipping: [testbed-manager] 2026-03-25 05:11:38.031373 | orchestrator | 2026-03-25 05:11:38.031388 | orchestrator | TASK [ceph-validate : Fail if local scenario is enabled on debian] ************* 2026-03-25 05:11:38.031403 | orchestrator | Wednesday 25 March 2026 05:11:13 +0000 (0:00:02.304) 0:03:30.401 ******* 2026-03-25 05:11:38.031417 | orchestrator | skipping: [testbed-node-0] 2026-03-25 05:11:38.031431 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:11:38.031445 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:11:38.031459 | orchestrator | skipping: [testbed-node-3] 2026-03-25 05:11:38.031472 | orchestrator | skipping: [testbed-node-4] 2026-03-25 05:11:38.031486 | orchestrator | skipping: [testbed-node-5] 2026-03-25 05:11:38.031500 | orchestrator | skipping: [testbed-manager] 2026-03-25 05:11:38.031514 | orchestrator | 2026-03-25 05:11:38.031527 | orchestrator | TASK [ceph-validate : Fail if rhcs repository is enabled on debian] ************ 2026-03-25 05:11:38.031541 | orchestrator | Wednesday 25 March 2026 05:11:15 +0000 (0:00:02.575) 0:03:32.977 ******* 2026-03-25 05:11:38.031555 | orchestrator | skipping: [testbed-node-0] 2026-03-25 05:11:38.031569 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:11:38.031583 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:11:38.031596 | orchestrator | skipping: [testbed-node-3] 2026-03-25 05:11:38.031655 | orchestrator | skipping: [testbed-node-4] 2026-03-25 05:11:38.031670 | orchestrator | skipping: [testbed-node-5] 2026-03-25 05:11:38.031683 | orchestrator | skipping: [testbed-manager] 2026-03-25 05:11:38.031697 | orchestrator | 2026-03-25 05:11:38.031712 | orchestrator | TASK [ceph-validate : Check ceph_origin definition on SUSE/openSUSE Leap] ****** 2026-03-25 05:11:38.031727 | orchestrator | Wednesday 25 March 2026 05:11:18 +0000 (0:00:02.162) 0:03:35.139 ******* 
2026-03-25 05:11:38.031742 | orchestrator | skipping: [testbed-node-0] 2026-03-25 05:11:38.031757 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:11:38.031772 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:11:38.031787 | orchestrator | skipping: [testbed-node-3] 2026-03-25 05:11:38.031802 | orchestrator | skipping: [testbed-node-4] 2026-03-25 05:11:38.031817 | orchestrator | skipping: [testbed-node-5] 2026-03-25 05:11:38.031855 | orchestrator | skipping: [testbed-manager] 2026-03-25 05:11:38.031870 | orchestrator | 2026-03-25 05:11:38.031885 | orchestrator | TASK [ceph-validate : Check ceph_repository definition on SUSE/openSUSE Leap] *** 2026-03-25 05:11:38.031900 | orchestrator | Wednesday 25 March 2026 05:11:20 +0000 (0:00:02.069) 0:03:37.209 ******* 2026-03-25 05:11:38.031915 | orchestrator | skipping: [testbed-node-0] 2026-03-25 05:11:38.031929 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:11:38.031944 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:11:38.031959 | orchestrator | skipping: [testbed-node-3] 2026-03-25 05:11:38.031975 | orchestrator | skipping: [testbed-node-4] 2026-03-25 05:11:38.031990 | orchestrator | skipping: [testbed-node-5] 2026-03-25 05:11:38.032004 | orchestrator | skipping: [testbed-manager] 2026-03-25 05:11:38.032019 | orchestrator | 2026-03-25 05:11:38.032035 | orchestrator | TASK [ceph-validate : Validate ntp daemon type] ******************************** 2026-03-25 05:11:38.032050 | orchestrator | Wednesday 25 March 2026 05:11:22 +0000 (0:00:02.239) 0:03:39.449 ******* 2026-03-25 05:11:38.032066 | orchestrator | skipping: [testbed-node-0] 2026-03-25 05:11:38.032079 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:11:38.032093 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:11:38.032106 | orchestrator | skipping: [testbed-node-3] 2026-03-25 05:11:38.032120 | orchestrator | skipping: [testbed-node-4] 2026-03-25 05:11:38.032133 | orchestrator | skipping: [testbed-node-5] 
2026-03-25 05:11:38.032146 | orchestrator | skipping: [testbed-manager] 2026-03-25 05:11:38.032160 | orchestrator | 2026-03-25 05:11:38.032174 | orchestrator | TASK [ceph-validate : Abort if ntp_daemon_type is ntpd on Atomic] ************** 2026-03-25 05:11:38.032187 | orchestrator | Wednesday 25 March 2026 05:11:24 +0000 (0:00:02.193) 0:03:41.642 ******* 2026-03-25 05:11:38.032201 | orchestrator | skipping: [testbed-node-0] 2026-03-25 05:11:38.032214 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:11:38.032227 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:11:38.032241 | orchestrator | skipping: [testbed-node-3] 2026-03-25 05:11:38.032255 | orchestrator | skipping: [testbed-node-4] 2026-03-25 05:11:38.032268 | orchestrator | skipping: [testbed-node-5] 2026-03-25 05:11:38.032282 | orchestrator | skipping: [testbed-manager] 2026-03-25 05:11:38.032295 | orchestrator | 2026-03-25 05:11:38.032309 | orchestrator | TASK [ceph-validate : Include check_devices.yml] ******************************* 2026-03-25 05:11:38.032337 | orchestrator | Wednesday 25 March 2026 05:11:26 +0000 (0:00:02.031) 0:03:43.674 ******* 2026-03-25 05:11:38.032351 | orchestrator | skipping: [testbed-node-0] 2026-03-25 05:11:38.032365 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:11:38.032378 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:11:38.032402 | orchestrator | skipping: [testbed-manager] 2026-03-25 05:11:38.032417 | orchestrator | included: /ansible/roles/ceph-validate/tasks/check_devices.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-25 05:11:38.032431 | orchestrator | 2026-03-25 05:11:38.032445 | orchestrator | TASK [ceph-validate : Set_fact root_device] ************************************ 2026-03-25 05:11:38.032459 | orchestrator | Wednesday 25 March 2026 05:11:29 +0000 (0:00:02.529) 0:03:46.204 ******* 2026-03-25 05:11:38.032472 | orchestrator | ok: [testbed-node-3] 2026-03-25 05:11:38.032500 | orchestrator | ok: 
[testbed-node-4] 2026-03-25 05:11:38.032513 | orchestrator | ok: [testbed-node-5] 2026-03-25 05:11:38.032527 | orchestrator | 2026-03-25 05:11:38.032541 | orchestrator | TASK [ceph-validate : Resolve devices in lvm_volumes] ************************** 2026-03-25 05:11:38.032555 | orchestrator | Wednesday 25 March 2026 05:11:30 +0000 (0:00:01.480) 0:03:47.685 ******* 2026-03-25 05:11:38.032585 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a7f517e2-016b-5c10-ac21-20c48339115f', 'data_vg': 'ceph-a7f517e2-016b-5c10-ac21-20c48339115f'})  2026-03-25 05:11:38.032600 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2eb637af-fcba-56ed-b416-856a8f376a6e', 'data_vg': 'ceph-2eb637af-fcba-56ed-b416-856a8f376a6e'})  2026-03-25 05:11:38.032613 | orchestrator | skipping: [testbed-node-3] 2026-03-25 05:11:38.032627 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-82366886-ea97-5dba-b5cd-187414e0593f', 'data_vg': 'ceph-82366886-ea97-5dba-b5cd-187414e0593f'})  2026-03-25 05:11:38.032641 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-fa1f2bca-96f4-5f59-9dac-c3efdd146138', 'data_vg': 'ceph-fa1f2bca-96f4-5f59-9dac-c3efdd146138'})  2026-03-25 05:11:38.032654 | orchestrator | skipping: [testbed-node-4] 2026-03-25 05:11:38.032668 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f303e98e-56ea-50bc-9e1c-3ccda4672060', 'data_vg': 'ceph-f303e98e-56ea-50bc-9e1c-3ccda4672060'})  2026-03-25 05:11:38.032681 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8ec576d5-4336-523a-896e-5358117b2269', 'data_vg': 'ceph-8ec576d5-4336-523a-896e-5358117b2269'})  2026-03-25 05:11:38.032695 | orchestrator | skipping: [testbed-node-5] 2026-03-25 05:11:38.032709 | orchestrator | 2026-03-25 05:11:38.032722 | orchestrator | TASK [ceph-validate : Set_fact lvm_volumes_data_devices] *********************** 2026-03-25 05:11:38.032736 | orchestrator | Wednesday 25 March 2026 
05:11:32 +0000 (0:00:01.407) 0:03:49.093 ******* 2026-03-25 05:11:38.032758 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-a7f517e2-016b-5c10-ac21-20c48339115f', 'data_vg': 'ceph-a7f517e2-016b-5c10-ac21-20c48339115f'}, 'ansible_loop_var': 'item'})  2026-03-25 05:11:38.032774 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-2eb637af-fcba-56ed-b416-856a8f376a6e', 'data_vg': 'ceph-2eb637af-fcba-56ed-b416-856a8f376a6e'}, 'ansible_loop_var': 'item'})  2026-03-25 05:11:38.032788 | orchestrator | skipping: [testbed-node-3] 2026-03-25 05:11:38.032802 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-82366886-ea97-5dba-b5cd-187414e0593f', 'data_vg': 'ceph-82366886-ea97-5dba-b5cd-187414e0593f'}, 'ansible_loop_var': 'item'})  2026-03-25 05:11:38.032817 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-fa1f2bca-96f4-5f59-9dac-c3efdd146138', 'data_vg': 'ceph-fa1f2bca-96f4-5f59-9dac-c3efdd146138'}, 'ansible_loop_var': 'item'})  2026-03-25 05:11:38.032873 | orchestrator | skipping: [testbed-node-4] 2026-03-25 05:11:38.032887 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-f303e98e-56ea-50bc-9e1c-3ccda4672060', 'data_vg': 'ceph-f303e98e-56ea-50bc-9e1c-3ccda4672060'}, 
'ansible_loop_var': 'item'})  2026-03-25 05:11:38.032901 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-8ec576d5-4336-523a-896e-5358117b2269', 'data_vg': 'ceph-8ec576d5-4336-523a-896e-5358117b2269'}, 'ansible_loop_var': 'item'})  2026-03-25 05:11:38.032925 | orchestrator | skipping: [testbed-node-5] 2026-03-25 05:11:38.032939 | orchestrator | 2026-03-25 05:11:38.032952 | orchestrator | TASK [ceph-validate : Fail if root_device is passed in lvm_volumes or devices] *** 2026-03-25 05:11:38.032966 | orchestrator | Wednesday 25 March 2026 05:11:33 +0000 (0:00:01.725) 0:03:50.818 ******* 2026-03-25 05:11:38.032980 | orchestrator | skipping: [testbed-node-3] 2026-03-25 05:11:38.032994 | orchestrator | skipping: [testbed-node-4] 2026-03-25 05:11:38.033008 | orchestrator | skipping: [testbed-node-5] 2026-03-25 05:11:38.033022 | orchestrator | 2026-03-25 05:11:38.033035 | orchestrator | TASK [ceph-validate : Get devices information] ********************************* 2026-03-25 05:11:38.033049 | orchestrator | Wednesday 25 March 2026 05:11:35 +0000 (0:00:01.431) 0:03:52.250 ******* 2026-03-25 05:11:38.033063 | orchestrator | skipping: [testbed-node-3] 2026-03-25 05:11:38.033076 | orchestrator | skipping: [testbed-node-4] 2026-03-25 05:11:38.033090 | orchestrator | skipping: [testbed-node-5] 2026-03-25 05:11:38.033103 | orchestrator | 2026-03-25 05:11:38.033117 | orchestrator | TASK [ceph-validate : Fail if one of the devices is not a device] ************** 2026-03-25 05:11:38.033131 | orchestrator | Wednesday 25 March 2026 05:11:36 +0000 (0:00:01.365) 0:03:53.615 ******* 2026-03-25 05:11:38.033144 | orchestrator | skipping: [testbed-node-3] 2026-03-25 05:11:38.033164 | orchestrator | skipping: [testbed-node-4] 2026-03-25 05:11:42.942328 | orchestrator | skipping: [testbed-node-5] 2026-03-25 05:11:42.942433 | 
orchestrator | 2026-03-25 05:11:42.942449 | orchestrator | TASK [ceph-validate : Fail when gpt header found on osd devices] *************** 2026-03-25 05:11:42.942461 | orchestrator | Wednesday 25 March 2026 05:11:38 +0000 (0:00:01.410) 0:03:55.026 ******* 2026-03-25 05:11:42.942472 | orchestrator | skipping: [testbed-node-3] 2026-03-25 05:11:42.942483 | orchestrator | skipping: [testbed-node-4] 2026-03-25 05:11:42.942495 | orchestrator | skipping: [testbed-node-5] 2026-03-25 05:11:42.942506 | orchestrator | 2026-03-25 05:11:42.942517 | orchestrator | TASK [ceph-validate : Check data logical volume] ******************************* 2026-03-25 05:11:42.942528 | orchestrator | Wednesday 25 March 2026 05:11:39 +0000 (0:00:01.365) 0:03:56.391 ******* 2026-03-25 05:11:42.942539 | orchestrator | ok: [testbed-node-3] => (item={'data': 'osd-block-a7f517e2-016b-5c10-ac21-20c48339115f', 'data_vg': 'ceph-a7f517e2-016b-5c10-ac21-20c48339115f'}) 2026-03-25 05:11:42.942551 | orchestrator | ok: [testbed-node-4] => (item={'data': 'osd-block-82366886-ea97-5dba-b5cd-187414e0593f', 'data_vg': 'ceph-82366886-ea97-5dba-b5cd-187414e0593f'}) 2026-03-25 05:11:42.942561 | orchestrator | ok: [testbed-node-5] => (item={'data': 'osd-block-f303e98e-56ea-50bc-9e1c-3ccda4672060', 'data_vg': 'ceph-f303e98e-56ea-50bc-9e1c-3ccda4672060'}) 2026-03-25 05:11:42.942572 | orchestrator | ok: [testbed-node-3] => (item={'data': 'osd-block-2eb637af-fcba-56ed-b416-856a8f376a6e', 'data_vg': 'ceph-2eb637af-fcba-56ed-b416-856a8f376a6e'}) 2026-03-25 05:11:42.942583 | orchestrator | ok: [testbed-node-4] => (item={'data': 'osd-block-fa1f2bca-96f4-5f59-9dac-c3efdd146138', 'data_vg': 'ceph-fa1f2bca-96f4-5f59-9dac-c3efdd146138'}) 2026-03-25 05:11:42.942612 | orchestrator | ok: [testbed-node-5] => (item={'data': 'osd-block-8ec576d5-4336-523a-896e-5358117b2269', 'data_vg': 'ceph-8ec576d5-4336-523a-896e-5358117b2269'}) 2026-03-25 05:11:42.942623 | orchestrator | 2026-03-25 05:11:42.942634 | orchestrator | TASK 
[ceph-validate : Fail if one of the data logical volume is not a device or doesn't exist] *** 2026-03-25 05:11:42.942645 | orchestrator | Wednesday 25 March 2026 05:11:41 +0000 (0:00:02.101) 0:03:58.493 ******* 2026-03-25 05:11:42.942662 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-a7f517e2-016b-5c10-ac21-20c48339115f/osd-block-a7f517e2-016b-5c10-ac21-20c48339115f', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 958, 'dev': 6, 'nlink': 1, 'atime': 1774407710.4636638, 'mtime': 1774407710.4556637, 'ctime': 1774407710.4556637, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64512, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-a7f517e2-016b-5c10-ac21-20c48339115f/osd-block-a7f517e2-016b-5c10-ac21-20c48339115f', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-a7f517e2-016b-5c10-ac21-20c48339115f', 'data_vg': 'ceph-a7f517e2-016b-5c10-ac21-20c48339115f'}, 'ansible_loop_var': 'item'})  2026-03-25 05:11:42.942719 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-2eb637af-fcba-56ed-b416-856a8f376a6e/osd-block-2eb637af-fcba-56ed-b416-856a8f376a6e', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 968, 'dev': 6, 'nlink': 1, 'atime': 
1774407730.6949663, 'mtime': 1774407730.6889663, 'ctime': 1774407730.6889663, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64513, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-2eb637af-fcba-56ed-b416-856a8f376a6e/osd-block-2eb637af-fcba-56ed-b416-856a8f376a6e', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-2eb637af-fcba-56ed-b416-856a8f376a6e', 'data_vg': 'ceph-2eb637af-fcba-56ed-b416-856a8f376a6e'}, 'ansible_loop_var': 'item'})  2026-03-25 05:11:42.942734 | orchestrator | skipping: [testbed-node-3] 2026-03-25 05:11:42.942752 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-82366886-ea97-5dba-b5cd-187414e0593f/osd-block-82366886-ea97-5dba-b5cd-187414e0593f', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 956, 'dev': 6, 'nlink': 1, 'atime': 1774407707.9469202, 'mtime': 1774407707.9439201, 'ctime': 1774407707.9439201, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64512, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': 
'/dev/ceph-82366886-ea97-5dba-b5cd-187414e0593f/osd-block-82366886-ea97-5dba-b5cd-187414e0593f', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-82366886-ea97-5dba-b5cd-187414e0593f', 'data_vg': 'ceph-82366886-ea97-5dba-b5cd-187414e0593f'}, 'ansible_loop_var': 'item'})  2026-03-25 05:11:42.942765 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-fa1f2bca-96f4-5f59-9dac-c3efdd146138/osd-block-fa1f2bca-96f4-5f59-9dac-c3efdd146138', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 966, 'dev': 6, 'nlink': 1, 'atime': 1774407725.9151893, 'mtime': 1774407725.9101892, 'ctime': 1774407725.9101892, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64513, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-fa1f2bca-96f4-5f59-9dac-c3efdd146138/osd-block-fa1f2bca-96f4-5f59-9dac-c3efdd146138', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-fa1f2bca-96f4-5f59-9dac-c3efdd146138', 'data_vg': 'ceph-fa1f2bca-96f4-5f59-9dac-c3efdd146138'}, 'ansible_loop_var': 'item'})  2026-03-25 05:11:42.942784 | orchestrator | skipping: [testbed-node-4] 2026-03-25 05:11:42.942803 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'stat': {'exists': True, 'path': 
'/dev/ceph-f303e98e-56ea-50bc-9e1c-3ccda4672060/osd-block-f303e98e-56ea-50bc-9e1c-3ccda4672060', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 955, 'dev': 6, 'nlink': 1, 'atime': 1774407707.7164724, 'mtime': 1774407707.710472, 'ctime': 1774407707.710472, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64512, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-f303e98e-56ea-50bc-9e1c-3ccda4672060/osd-block-f303e98e-56ea-50bc-9e1c-3ccda4672060', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-f303e98e-56ea-50bc-9e1c-3ccda4672060', 'data_vg': 'ceph-f303e98e-56ea-50bc-9e1c-3ccda4672060'}, 'ansible_loop_var': 'item'})  2026-03-25 05:11:49.026402 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-8ec576d5-4336-523a-896e-5358117b2269/osd-block-8ec576d5-4336-523a-896e-5358117b2269', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 965, 'dev': 6, 'nlink': 1, 'atime': 1774407725.7158706, 'mtime': 1774407725.7128706, 'ctime': 1774407725.7128706, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64513, 'readable': True, 'writeable': True, 'executable': 
False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-8ec576d5-4336-523a-896e-5358117b2269/osd-block-8ec576d5-4336-523a-896e-5358117b2269', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-8ec576d5-4336-523a-896e-5358117b2269', 'data_vg': 'ceph-8ec576d5-4336-523a-896e-5358117b2269'}, 'ansible_loop_var': 'item'})  2026-03-25 05:11:49.026538 | orchestrator | skipping: [testbed-node-5] 2026-03-25 05:11:49.026566 | orchestrator | 2026-03-25 05:11:49.026587 | orchestrator | TASK [ceph-validate : Check bluestore db logical volume] *********************** 2026-03-25 05:11:49.026609 | orchestrator | Wednesday 25 March 2026 05:11:42 +0000 (0:00:01.455) 0:03:59.948 ******* 2026-03-25 05:11:49.026629 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a7f517e2-016b-5c10-ac21-20c48339115f', 'data_vg': 'ceph-a7f517e2-016b-5c10-ac21-20c48339115f'})  2026-03-25 05:11:49.026664 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2eb637af-fcba-56ed-b416-856a8f376a6e', 'data_vg': 'ceph-2eb637af-fcba-56ed-b416-856a8f376a6e'})  2026-03-25 05:11:49.026675 | orchestrator | skipping: [testbed-node-3] 2026-03-25 05:11:49.026686 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-82366886-ea97-5dba-b5cd-187414e0593f', 'data_vg': 'ceph-82366886-ea97-5dba-b5cd-187414e0593f'})  2026-03-25 05:11:49.026697 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-fa1f2bca-96f4-5f59-9dac-c3efdd146138', 'data_vg': 'ceph-fa1f2bca-96f4-5f59-9dac-c3efdd146138'})  2026-03-25 05:11:49.026708 | orchestrator | skipping: [testbed-node-4] 2026-03-25 05:11:49.026719 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f303e98e-56ea-50bc-9e1c-3ccda4672060', 
'data_vg': 'ceph-f303e98e-56ea-50bc-9e1c-3ccda4672060'})  2026-03-25 05:11:49.026730 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8ec576d5-4336-523a-896e-5358117b2269', 'data_vg': 'ceph-8ec576d5-4336-523a-896e-5358117b2269'})  2026-03-25 05:11:49.026740 | orchestrator | skipping: [testbed-node-5] 2026-03-25 05:11:49.026751 | orchestrator | 2026-03-25 05:11:49.026762 | orchestrator | TASK [ceph-validate : Fail if one of the bluestore db logical volume is not a device or doesn't exist] *** 2026-03-25 05:11:49.026774 | orchestrator | Wednesday 25 March 2026 05:11:44 +0000 (0:00:01.538) 0:04:01.486 ******* 2026-03-25 05:11:49.026786 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-a7f517e2-016b-5c10-ac21-20c48339115f', 'data_vg': 'ceph-a7f517e2-016b-5c10-ac21-20c48339115f'}, 'ansible_loop_var': 'item'})  2026-03-25 05:11:49.026800 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-2eb637af-fcba-56ed-b416-856a8f376a6e', 'data_vg': 'ceph-2eb637af-fcba-56ed-b416-856a8f376a6e'}, 'ansible_loop_var': 'item'})  2026-03-25 05:11:49.026811 | orchestrator | skipping: [testbed-node-3] 2026-03-25 05:11:49.026853 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-82366886-ea97-5dba-b5cd-187414e0593f', 'data_vg': 'ceph-82366886-ea97-5dba-b5cd-187414e0593f'}, 'ansible_loop_var': 'item'})  2026-03-25 05:11:49.026888 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': 
{'data': 'osd-block-fa1f2bca-96f4-5f59-9dac-c3efdd146138', 'data_vg': 'ceph-fa1f2bca-96f4-5f59-9dac-c3efdd146138'}, 'ansible_loop_var': 'item'})  2026-03-25 05:11:49.026900 | orchestrator | skipping: [testbed-node-4] 2026-03-25 05:11:49.026920 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-f303e98e-56ea-50bc-9e1c-3ccda4672060', 'data_vg': 'ceph-f303e98e-56ea-50bc-9e1c-3ccda4672060'}, 'ansible_loop_var': 'item'})  2026-03-25 05:11:49.026941 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-8ec576d5-4336-523a-896e-5358117b2269', 'data_vg': 'ceph-8ec576d5-4336-523a-896e-5358117b2269'}, 'ansible_loop_var': 'item'})  2026-03-25 05:11:49.026962 | orchestrator | skipping: [testbed-node-5] 2026-03-25 05:11:49.026980 | orchestrator | 2026-03-25 05:11:49.027000 | orchestrator | TASK [ceph-validate : Check bluestore wal logical volume] ********************** 2026-03-25 05:11:49.027019 | orchestrator | Wednesday 25 March 2026 05:11:45 +0000 (0:00:01.436) 0:04:02.923 ******* 2026-03-25 05:11:49.027054 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a7f517e2-016b-5c10-ac21-20c48339115f', 'data_vg': 'ceph-a7f517e2-016b-5c10-ac21-20c48339115f'})  2026-03-25 05:11:49.027078 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2eb637af-fcba-56ed-b416-856a8f376a6e', 'data_vg': 'ceph-2eb637af-fcba-56ed-b416-856a8f376a6e'})  2026-03-25 05:11:49.027099 | orchestrator | skipping: [testbed-node-3] 2026-03-25 05:11:49.027126 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-82366886-ea97-5dba-b5cd-187414e0593f', 'data_vg': 'ceph-82366886-ea97-5dba-b5cd-187414e0593f'})  2026-03-25 05:11:49.027140 | orchestrator | 
skipping: [testbed-node-4] => (item={'data': 'osd-block-fa1f2bca-96f4-5f59-9dac-c3efdd146138', 'data_vg': 'ceph-fa1f2bca-96f4-5f59-9dac-c3efdd146138'})  2026-03-25 05:11:49.027152 | orchestrator | skipping: [testbed-node-4] 2026-03-25 05:11:49.027170 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f303e98e-56ea-50bc-9e1c-3ccda4672060', 'data_vg': 'ceph-f303e98e-56ea-50bc-9e1c-3ccda4672060'})  2026-03-25 05:11:49.027188 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8ec576d5-4336-523a-896e-5358117b2269', 'data_vg': 'ceph-8ec576d5-4336-523a-896e-5358117b2269'})  2026-03-25 05:11:49.027207 | orchestrator | skipping: [testbed-node-5] 2026-03-25 05:11:49.027225 | orchestrator | 2026-03-25 05:11:49.027243 | orchestrator | TASK [ceph-validate : Fail if one of the bluestore wal logical volume is not a device or doesn't exist] *** 2026-03-25 05:11:49.027262 | orchestrator | Wednesday 25 March 2026 05:11:47 +0000 (0:00:01.642) 0:04:04.565 ******* 2026-03-25 05:11:49.027283 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-a7f517e2-016b-5c10-ac21-20c48339115f', 'data_vg': 'ceph-a7f517e2-016b-5c10-ac21-20c48339115f'}, 'ansible_loop_var': 'item'})  2026-03-25 05:11:49.027303 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-2eb637af-fcba-56ed-b416-856a8f376a6e', 'data_vg': 'ceph-2eb637af-fcba-56ed-b416-856a8f376a6e'}, 'ansible_loop_var': 'item'})  2026-03-25 05:11:49.027321 | orchestrator | skipping: [testbed-node-3] 2026-03-25 05:11:49.027335 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 
'item': {'data': 'osd-block-82366886-ea97-5dba-b5cd-187414e0593f', 'data_vg': 'ceph-82366886-ea97-5dba-b5cd-187414e0593f'}, 'ansible_loop_var': 'item'})  2026-03-25 05:11:49.027346 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-fa1f2bca-96f4-5f59-9dac-c3efdd146138', 'data_vg': 'ceph-fa1f2bca-96f4-5f59-9dac-c3efdd146138'}, 'ansible_loop_var': 'item'})  2026-03-25 05:11:49.027357 | orchestrator | skipping: [testbed-node-4] 2026-03-25 05:11:49.027368 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-f303e98e-56ea-50bc-9e1c-3ccda4672060', 'data_vg': 'ceph-f303e98e-56ea-50bc-9e1c-3ccda4672060'}, 'ansible_loop_var': 'item'})  2026-03-25 05:11:49.027390 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-8ec576d5-4336-523a-896e-5358117b2269', 'data_vg': 'ceph-8ec576d5-4336-523a-896e-5358117b2269'}, 'ansible_loop_var': 'item'})  2026-03-25 05:11:58.716353 | orchestrator | skipping: [testbed-node-5] 2026-03-25 05:11:58.716463 | orchestrator | 2026-03-25 05:11:58.716479 | orchestrator | TASK [ceph-validate : Include check_eth_rgw.yml] ******************************* 2026-03-25 05:11:58.716517 | orchestrator | Wednesday 25 March 2026 05:11:48 +0000 (0:00:01.442) 0:04:06.008 ******* 2026-03-25 05:11:58.716529 | orchestrator | skipping: [testbed-node-0] 2026-03-25 05:11:58.716540 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:11:58.716551 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:11:58.716561 | orchestrator | skipping: [testbed-node-3] 2026-03-25 05:11:58.716572 | orchestrator | skipping: 
[testbed-node-4] 2026-03-25 05:11:58.716582 | orchestrator | skipping: [testbed-node-5] 2026-03-25 05:11:58.716593 | orchestrator | skipping: [testbed-manager] 2026-03-25 05:11:58.716604 | orchestrator | 2026-03-25 05:11:58.716614 | orchestrator | TASK [ceph-validate : Include check_rgw_pools.yml] ***************************** 2026-03-25 05:11:58.716626 | orchestrator | Wednesday 25 March 2026 05:11:50 +0000 (0:00:01.954) 0:04:07.962 ******* 2026-03-25 05:11:58.716636 | orchestrator | skipping: [testbed-node-0] 2026-03-25 05:11:58.716648 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:11:58.716659 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:11:58.716669 | orchestrator | skipping: [testbed-manager] 2026-03-25 05:11:58.716681 | orchestrator | included: /ansible/roles/ceph-validate/tasks/check_rgw_pools.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-25 05:11:58.716692 | orchestrator | 2026-03-25 05:11:58.716702 | orchestrator | TASK [ceph-validate : Fail if ec_profile is not set for ec pools] ************** 2026-03-25 05:11:58.716713 | orchestrator | Wednesday 25 March 2026 05:11:53 +0000 (0:00:02.587) 0:04:10.549 ******* 2026-03-25 05:11:58.716740 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-25 05:11:58.716753 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-25 05:11:58.716764 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-25 05:11:58.716775 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-25 05:11:58.716786 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 
'size': 3, 'type': 'replicated'}})  2026-03-25 05:11:58.716797 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-25 05:11:58.716807 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-25 05:11:58.716882 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-25 05:11:58.716897 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-25 05:11:58.716910 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-25 05:11:58.716923 | orchestrator | skipping: [testbed-node-3] 2026-03-25 05:11:58.716936 | orchestrator | skipping: [testbed-node-4] 2026-03-25 05:11:58.716948 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-25 05:11:58.716960 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-25 05:11:58.716973 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-25 05:11:58.716985 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-25 05:11:58.716998 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-25 05:11:58.717019 | orchestrator | skipping: [testbed-node-5] 2026-03-25 05:11:58.717031 | orchestrator 
| 2026-03-25 05:11:58.717043 | orchestrator | TASK [ceph-validate : Fail if ec_k is not set for ec pools] ******************** 2026-03-25 05:11:58.717056 | orchestrator | Wednesday 25 March 2026 05:11:55 +0000 (0:00:01.491) 0:04:12.041 ******* 2026-03-25 05:11:58.717068 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-25 05:11:58.717080 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-25 05:11:58.717093 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-25 05:11:58.717104 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-25 05:11:58.717134 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-25 05:11:58.717148 | orchestrator | skipping: [testbed-node-3] 2026-03-25 05:11:58.717160 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-25 05:11:58.717172 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-25 05:11:58.717185 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-25 05:11:58.717197 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-25 05:11:58.717210 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 
'type': 'replicated'}})  2026-03-25 05:11:58.717222 | orchestrator | skipping: [testbed-node-4] 2026-03-25 05:11:58.717234 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-25 05:11:58.717247 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-25 05:11:58.717263 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-25 05:11:58.717275 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-25 05:11:58.717286 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-25 05:11:58.717296 | orchestrator | skipping: [testbed-node-5] 2026-03-25 05:11:58.717307 | orchestrator | 2026-03-25 05:11:58.717318 | orchestrator | TASK [ceph-validate : Fail if ec_m is not set for ec pools] ******************** 2026-03-25 05:11:58.717328 | orchestrator | Wednesday 25 March 2026 05:11:56 +0000 (0:00:01.820) 0:04:13.862 ******* 2026-03-25 05:11:58.717339 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-25 05:11:58.717350 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-25 05:11:58.717360 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-25 05:11:58.717371 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  
2026-03-25 05:11:58.717382 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-25 05:11:58.717399 | orchestrator | skipping: [testbed-node-3] 2026-03-25 05:11:58.717410 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-25 05:11:58.717421 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-25 05:11:58.717431 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-25 05:11:58.717442 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-25 05:11:58.717453 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-25 05:11:58.717463 | orchestrator | skipping: [testbed-node-4] 2026-03-25 05:11:58.717474 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-25 05:11:58.717485 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-25 05:11:58.717495 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-25 05:11:58.717506 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-25 05:11:58.717517 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 
'replicated'}})  2026-03-25 05:11:58.717527 | orchestrator | skipping: [testbed-node-5] 2026-03-25 05:11:58.717538 | orchestrator | 2026-03-25 05:11:58.717549 | orchestrator | TASK [ceph-validate : Include check_nfs.yml] *********************************** 2026-03-25 05:11:58.717560 | orchestrator | Wednesday 25 March 2026 05:11:58 +0000 (0:00:01.453) 0:04:15.315 ******* 2026-03-25 05:11:58.717570 | orchestrator | skipping: [testbed-node-0] 2026-03-25 05:11:58.717581 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:11:58.717598 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:12:14.517779 | orchestrator | skipping: [testbed-node-3] 2026-03-25 05:12:14.517911 | orchestrator | skipping: [testbed-node-4] 2026-03-25 05:12:14.517923 | orchestrator | skipping: [testbed-node-5] 2026-03-25 05:12:14.517932 | orchestrator | skipping: [testbed-manager] 2026-03-25 05:12:14.517941 | orchestrator | 2026-03-25 05:12:14.517950 | orchestrator | TASK [ceph-validate : Include check_rbdmirror.yml] ***************************** 2026-03-25 05:12:14.517960 | orchestrator | Wednesday 25 March 2026 05:12:00 +0000 (0:00:01.905) 0:04:17.221 ******* 2026-03-25 05:12:14.517967 | orchestrator | skipping: [testbed-node-0] 2026-03-25 05:12:14.517975 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:12:14.517983 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:12:14.517991 | orchestrator | skipping: [testbed-node-3] 2026-03-25 05:12:14.517999 | orchestrator | skipping: [testbed-node-4] 2026-03-25 05:12:14.518007 | orchestrator | skipping: [testbed-node-5] 2026-03-25 05:12:14.518062 | orchestrator | skipping: [testbed-manager] 2026-03-25 05:12:14.518071 | orchestrator | 2026-03-25 05:12:14.518079 | orchestrator | TASK [ceph-validate : Fail if monitoring group doesn't exist] ****************** 2026-03-25 05:12:14.518088 | orchestrator | Wednesday 25 March 2026 05:12:02 +0000 (0:00:02.260) 0:04:19.481 ******* 2026-03-25 05:12:14.518096 | orchestrator | skipping: 
[testbed-node-0] 2026-03-25 05:12:14.518103 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:12:14.518111 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:12:14.518119 | orchestrator | skipping: [testbed-node-3] 2026-03-25 05:12:14.518127 | orchestrator | skipping: [testbed-node-4] 2026-03-25 05:12:14.518135 | orchestrator | skipping: [testbed-node-5] 2026-03-25 05:12:14.518143 | orchestrator | skipping: [testbed-manager] 2026-03-25 05:12:14.518168 | orchestrator | 2026-03-25 05:12:14.518177 | orchestrator | TASK [ceph-validate : Fail when monitoring doesn't contain at least one node.] *** 2026-03-25 05:12:14.518198 | orchestrator | Wednesday 25 March 2026 05:12:04 +0000 (0:00:02.176) 0:04:21.658 ******* 2026-03-25 05:12:14.518206 | orchestrator | skipping: [testbed-node-0] 2026-03-25 05:12:14.518214 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:12:14.518221 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:12:14.518229 | orchestrator | skipping: [testbed-node-3] 2026-03-25 05:12:14.518237 | orchestrator | skipping: [testbed-node-4] 2026-03-25 05:12:14.518245 | orchestrator | skipping: [testbed-node-5] 2026-03-25 05:12:14.518252 | orchestrator | skipping: [testbed-manager] 2026-03-25 05:12:14.518260 | orchestrator | 2026-03-25 05:12:14.518268 | orchestrator | TASK [ceph-validate : Fail when dashboard_admin_password and/or grafana_admin_password are not set] *** 2026-03-25 05:12:14.518277 | orchestrator | Wednesday 25 March 2026 05:12:06 +0000 (0:00:01.976) 0:04:23.634 ******* 2026-03-25 05:12:14.518285 | orchestrator | skipping: [testbed-node-0] 2026-03-25 05:12:14.518293 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:12:14.518300 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:12:14.518308 | orchestrator | skipping: [testbed-node-3] 2026-03-25 05:12:14.518316 | orchestrator | skipping: [testbed-node-4] 2026-03-25 05:12:14.518324 | orchestrator | skipping: [testbed-node-5] 2026-03-25 05:12:14.518337 | 
orchestrator | skipping: [testbed-manager] 2026-03-25 05:12:14.518351 | orchestrator | 2026-03-25 05:12:14.518364 | orchestrator | TASK [ceph-validate : Validate container registry credentials] ***************** 2026-03-25 05:12:14.518385 | orchestrator | Wednesday 25 March 2026 05:12:08 +0000 (0:00:02.317) 0:04:25.951 ******* 2026-03-25 05:12:14.518400 | orchestrator | skipping: [testbed-node-0] 2026-03-25 05:12:14.518414 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:12:14.518428 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:12:14.518442 | orchestrator | skipping: [testbed-node-3] 2026-03-25 05:12:14.518455 | orchestrator | skipping: [testbed-node-4] 2026-03-25 05:12:14.518469 | orchestrator | skipping: [testbed-node-5] 2026-03-25 05:12:14.518482 | orchestrator | skipping: [testbed-manager] 2026-03-25 05:12:14.518496 | orchestrator | 2026-03-25 05:12:14.518510 | orchestrator | TASK [ceph-validate : Validate container service and container package] ******** 2026-03-25 05:12:14.518524 | orchestrator | Wednesday 25 March 2026 05:12:11 +0000 (0:00:02.206) 0:04:28.158 ******* 2026-03-25 05:12:14.518539 | orchestrator | skipping: [testbed-node-0] 2026-03-25 05:12:14.518552 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:12:14.518562 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:12:14.518570 | orchestrator | skipping: [testbed-node-3] 2026-03-25 05:12:14.518579 | orchestrator | skipping: [testbed-node-4] 2026-03-25 05:12:14.518588 | orchestrator | skipping: [testbed-node-5] 2026-03-25 05:12:14.518597 | orchestrator | skipping: [testbed-manager] 2026-03-25 05:12:14.518606 | orchestrator | 2026-03-25 05:12:14.518615 | orchestrator | TASK [ceph-validate : Validate openstack_keys key format] ********************** 2026-03-25 05:12:14.518624 | orchestrator | Wednesday 25 March 2026 05:12:13 +0000 (0:00:02.315) 0:04:30.473 ******* 2026-03-25 05:12:14.518634 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 
'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-03-25 05:12:14.518645 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-03-25 05:12:14.518655 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-03-25 05:12:14.518665 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-03-25 05:12:14.518675 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-03-25 05:12:14.518695 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-03-25 05:12:14.518703 | orchestrator | skipping: [testbed-node-0] 2026-03-25 05:12:14.518725 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-03-25 05:12:14.518734 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-03-25 05:12:14.518742 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-03-25 05:12:14.518750 | orchestrator | skipping: 
[testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-03-25 05:12:14.518758 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-03-25 05:12:14.518765 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-03-25 05:12:14.518780 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:12:14.518788 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-03-25 05:12:14.518795 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-03-25 05:12:14.518803 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-03-25 05:12:14.518811 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-03-25 05:12:14.518847 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-03-25 05:12:14.518856 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  
2026-03-25 05:12:14.518863 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:12:14.518872 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-03-25 05:12:14.518880 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-03-25 05:12:14.518887 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-03-25 05:12:14.518896 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-03-25 05:12:14.518904 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-03-25 05:12:14.518917 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-03-25 05:12:14.518925 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-03-25 05:12:14.518933 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-03-25 05:12:14.518941 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': 
'0600', 'name': 'client.nova'})  2026-03-25 05:12:14.518955 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-03-25 05:12:19.294355 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-03-25 05:12:19.294452 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-03-25 05:12:19.294468 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-03-25 05:12:19.294480 | orchestrator | skipping: [testbed-node-3] 2026-03-25 05:12:19.294491 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-03-25 05:12:19.294502 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-03-25 05:12:19.294530 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-03-25 05:12:19.294541 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-03-25 05:12:19.294550 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, 
profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-03-25 05:12:19.294560 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-03-25 05:12:19.294569 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-03-25 05:12:19.294579 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-03-25 05:12:19.294589 | orchestrator | skipping: [testbed-manager]
2026-03-25 05:12:19.294598 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-03-25 05:12:19.294608 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-03-25 05:12:19.294638 | orchestrator | skipping: [testbed-node-4]
2026-03-25 05:12:19.294648 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-03-25 05:12:19.294658 | orchestrator | skipping: [testbed-node-5]
2026-03-25 05:12:19.294667 | orchestrator |
2026-03-25 05:12:19.294678 | orchestrator | TASK [ceph-validate : Validate clients keys key format] ************************
2026-03-25 05:12:19.294689 | orchestrator | Wednesday 25 March 2026 05:12:15 +0000 (0:00:02.476) 0:04:32.950 *******
2026-03-25 05:12:19.294698 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:12:19.294707 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:12:19.294717 | orchestrator | skipping: [testbed-node-2]
2026-03-25 05:12:19.294726 | orchestrator | skipping: [testbed-node-3]
2026-03-25 05:12:19.294735 | orchestrator | skipping: [testbed-node-4]
2026-03-25 05:12:19.294745 | orchestrator | skipping: [testbed-node-5]
2026-03-25 05:12:19.294754 | orchestrator | skipping: [testbed-manager]
2026-03-25 05:12:19.294764 | orchestrator |
2026-03-25 05:12:19.294773 | orchestrator | TASK [ceph-validate : Validate openstack_keys caps] ****************************
2026-03-25 05:12:19.294783 | orchestrator | Wednesday 25 March 2026 05:12:18 +0000 (0:00:02.290) 0:04:35.240 *******
2026-03-25 05:12:19.294793 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-03-25 05:12:19.294803 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-03-25 05:12:19.294836 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-03-25 05:12:19.294847 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-03-25 05:12:19.294872 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-03-25 05:12:19.294885 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-03-25 05:12:19.294896 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:12:19.294907 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-03-25 05:12:19.294918 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-03-25 05:12:19.294929 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-03-25 05:12:19.294946 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-03-25 05:12:19.294957 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-03-25 05:12:19.294983 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-03-25 05:12:19.294994 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:12:19.295015 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-03-25 05:12:19.295034 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-03-25 05:12:19.295045 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-03-25 05:12:19.295056 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-03-25 05:12:19.295067 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-03-25 05:12:19.295079 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-03-25 05:12:19.295090 | orchestrator | skipping: [testbed-node-2]
2026-03-25 05:12:19.295101 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-03-25 05:12:19.295112 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-03-25 05:12:19.295123 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-03-25 05:12:19.295134 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-03-25 05:12:19.295145 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-03-25 05:12:19.295157 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-03-25 05:12:19.295168 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-03-25 05:12:19.295179 | orchestrator | skipping: [testbed-node-3]
2026-03-25 05:12:19.295197 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-03-25 05:12:49.760217 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-03-25 05:12:49.760369 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-03-25 05:12:49.760387 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-03-25 05:12:49.760400 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-03-25 05:12:49.760433 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-03-25 05:12:49.760475 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-03-25 05:12:49.760487 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-03-25 05:12:49.760498 | orchestrator | skipping: [testbed-node-4]
2026-03-25 05:12:49.760511 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-03-25 05:12:49.760522 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-03-25 05:12:49.760533 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-03-25 05:12:49.760543 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-03-25 05:12:49.760554 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-03-25 05:12:49.760565 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-03-25 05:12:49.760576 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-03-25 05:12:49.760586 | orchestrator | skipping: [testbed-manager]
2026-03-25 05:12:49.760597 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-03-25 05:12:49.760608 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-03-25 05:12:49.760618 | orchestrator | skipping: [testbed-node-5]
2026-03-25 05:12:49.760629 | orchestrator |
2026-03-25 05:12:49.760642 | orchestrator | TASK [ceph-validate : Validate clients keys caps] ******************************
2026-03-25 05:12:49.760655 | orchestrator | Wednesday 25 March 2026 05:12:20 +0000 (0:00:02.321) 0:04:37.561 *******
2026-03-25 05:12:49.760666 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:12:49.760677 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:12:49.760687 | orchestrator | skipping: [testbed-node-2]
2026-03-25 05:12:49.760698 | orchestrator | skipping: [testbed-node-3]
2026-03-25 05:12:49.760709 | orchestrator | skipping: [testbed-node-4]
2026-03-25 05:12:49.760721 | orchestrator | skipping: [testbed-node-5]
2026-03-25 05:12:49.760733 | orchestrator | skipping: [testbed-manager]
2026-03-25 05:12:49.760745 | orchestrator |
2026-03-25 05:12:49.760757 | orchestrator | TASK [ceph-validate : Check virtual_ips is defined] ****************************
2026-03-25 05:12:49.760769 | orchestrator | Wednesday 25 March 2026 05:12:22 +0000 (0:00:02.215) 0:04:39.776 *******
2026-03-25 05:12:49.760781 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:12:49.760793 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:12:49.760805 | orchestrator | skipping: [testbed-node-2]
2026-03-25 05:12:49.760885 | orchestrator | skipping: [testbed-node-3]
2026-03-25 05:12:49.760898 | orchestrator | skipping: [testbed-node-4]
2026-03-25 05:12:49.760909 | orchestrator | skipping: [testbed-node-5]
2026-03-25 05:12:49.760919 | orchestrator | skipping: [testbed-manager]
2026-03-25 05:12:49.760939 | orchestrator |
2026-03-25 05:12:49.760950 | orchestrator | TASK [ceph-validate : Validate virtual_ips length] *****************************
2026-03-25 05:12:49.760982 | orchestrator | Wednesday 25 March 2026 05:12:25 +0000 (0:00:02.296) 0:04:42.073 *******
2026-03-25 05:12:49.760993 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:12:49.761004 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:12:49.761015 | orchestrator | skipping: [testbed-node-2]
2026-03-25 05:12:49.761025 | orchestrator | skipping: [testbed-node-3]
2026-03-25 05:12:49.761035 | orchestrator | skipping: [testbed-node-4]
2026-03-25 05:12:49.761046 | orchestrator | skipping: [testbed-node-5]
2026-03-25 05:12:49.761056 | orchestrator | skipping: [testbed-manager]
2026-03-25 05:12:49.761067 | orchestrator |
2026-03-25 05:12:49.761078 | orchestrator | TASK [ceph-container-engine : Include pre_requisites/prerequisites.yml] ********
2026-03-25 05:12:49.761088 | orchestrator | Wednesday 25 March 2026 05:12:27 +0000 (0:00:02.629) 0:04:44.702 *******
2026-03-25 05:12:49.761099 | orchestrator | included: /ansible/roles/ceph-container-engine/tasks/pre_requisites/prerequisites.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager
2026-03-25 05:12:49.761113 | orchestrator |
2026-03-25 05:12:49.761124 | orchestrator | TASK [ceph-container-engine : Include specific variables] **********************
2026-03-25 05:12:49.761134 | orchestrator | Wednesday 25 March 2026 05:12:30 +0000 (0:00:02.995) 0:04:47.698 *******
2026-03-25 05:12:49.761145 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml)
2026-03-25 05:12:49.761163 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml)
2026-03-25 05:12:49.761174 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml)
2026-03-25 05:12:49.761184 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml)
2026-03-25 05:12:49.761195 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml)
2026-03-25 05:12:49.761205 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml)
2026-03-25 05:12:49.761216 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml)
2026-03-25 05:12:49.761227 | orchestrator |
2026-03-25 05:12:49.761237 | orchestrator | TASK [ceph-container-engine : Create the systemd docker override directory] ****
2026-03-25 05:12:49.761248 | orchestrator | Wednesday 25 March 2026 05:12:32 +0000 (0:00:02.180) 0:04:49.878 *******
2026-03-25 05:12:49.761258 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:12:49.761269 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:12:49.761280 | orchestrator | skipping: [testbed-node-2]
2026-03-25 05:12:49.761290 | orchestrator | skipping: [testbed-node-3]
2026-03-25 05:12:49.761301 | orchestrator | skipping: [testbed-node-4]
2026-03-25 05:12:49.761312 | orchestrator | skipping: [testbed-node-5]
2026-03-25 05:12:49.761323 | orchestrator | skipping: [testbed-manager]
2026-03-25 05:12:49.761334 | orchestrator |
2026-03-25 05:12:49.761345 | orchestrator | TASK [ceph-container-engine : Create the systemd docker override file] *********
2026-03-25 05:12:49.761356 | orchestrator | Wednesday 25 March 2026 05:12:35 +0000 (0:00:02.317) 0:04:52.196 *******
2026-03-25 05:12:49.761366 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:12:49.761377 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:12:49.761387 | orchestrator | skipping: [testbed-node-2]
2026-03-25 05:12:49.761398 | orchestrator | skipping: [testbed-node-3]
2026-03-25 05:12:49.761409 | orchestrator | skipping: [testbed-node-4]
2026-03-25 05:12:49.761419 | orchestrator | skipping: [testbed-node-5]
2026-03-25 05:12:49.761430 | orchestrator | skipping: [testbed-manager]
2026-03-25 05:12:49.761440 | orchestrator |
2026-03-25 05:12:49.761451 | orchestrator | TASK [ceph-container-engine : Remove docker proxy configuration] ***************
2026-03-25 05:12:49.761462 | orchestrator | Wednesday 25 March 2026 05:12:37 +0000 (0:00:02.163) 0:04:54.359 *******
2026-03-25 05:12:49.761472 | orchestrator | ok: [testbed-node-2]
2026-03-25 05:12:49.761491 | orchestrator | ok: [testbed-node-0]
2026-03-25 05:12:49.761501 | orchestrator | ok: [testbed-node-1]
2026-03-25 05:12:49.761512 | orchestrator | ok: [testbed-node-3]
2026-03-25 05:12:49.761523 | orchestrator | ok: [testbed-node-4]
2026-03-25 05:12:49.761533 | orchestrator | ok: [testbed-node-5]
2026-03-25 05:12:49.761544 | orchestrator | ok: [testbed-manager]
2026-03-25 05:12:49.761554 | orchestrator |
2026-03-25 05:12:49.761565 | orchestrator | TASK [ceph-container-engine : Restart docker] **********************************
2026-03-25 05:12:49.761576 | orchestrator | Wednesday 25 March 2026 05:12:39 +0000 (0:00:02.605) 0:04:56.965 *******
2026-03-25 05:12:49.761586 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:12:49.761597 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:12:49.761607 | orchestrator | skipping: [testbed-node-2]
2026-03-25 05:12:49.761618 | orchestrator | skipping: [testbed-node-3]
2026-03-25 05:12:49.761628 | orchestrator | skipping: [testbed-node-4]
2026-03-25 05:12:49.761639 | orchestrator | skipping: [testbed-node-5]
2026-03-25 05:12:49.761649 | orchestrator | skipping: [testbed-manager]
2026-03-25 05:12:49.761660 | orchestrator |
2026-03-25 05:12:49.761671 | orchestrator | TASK [ceph-container-common : Container registry authentication] ***************
2026-03-25 05:12:49.761681 | orchestrator | Wednesday 25 March 2026 05:12:42 +0000 (0:00:02.380) 0:04:59.346 *******
2026-03-25 05:12:49.761692 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:12:49.761703 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:12:49.761713 | orchestrator | skipping: [testbed-node-2]
2026-03-25 05:12:49.761724 | orchestrator | skipping: [testbed-node-3]
2026-03-25 05:12:49.761734 | orchestrator | skipping: [testbed-node-4]
2026-03-25 05:12:49.761745 | orchestrator | skipping: [testbed-node-5]
2026-03-25 05:12:49.761755 | orchestrator | skipping: [testbed-manager]
2026-03-25 05:12:49.761765 | orchestrator |
2026-03-25 05:12:49.761776 | orchestrator | TASK [Get the ceph release being deployed] *************************************
2026-03-25 05:12:49.761787 | orchestrator | Wednesday 25 March 2026 05:12:44 +0000 (0:00:02.454) 0:05:01.800 *******
2026-03-25 05:12:49.761797 | orchestrator | ok: [testbed-node-0]
2026-03-25 05:12:49.761826 | orchestrator |
2026-03-25 05:12:49.761837 | orchestrator | TASK [Check ceph release being deployed] ***************************************
2026-03-25 05:12:49.761848 | orchestrator | Wednesday 25 March 2026 05:12:47 +0000 (0:00:02.719) 0:05:04.519 *******
2026-03-25 05:12:49.761859 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:12:49.761869 | orchestrator |
2026-03-25 05:12:49.761880 | orchestrator | PLAY [Ensure cluster config is applied] ****************************************
2026-03-25 05:12:49.761891 | orchestrator |
2026-03-25 05:12:49.761909 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-03-25 05:13:30.055512 | orchestrator | Wednesday 25 March 2026 05:12:49 +0000 (0:00:02.240) 0:05:06.760 *******
2026-03-25 05:13:30.055626 | orchestrator | ok: [testbed-node-0]
2026-03-25 05:13:30.055640 | orchestrator |
2026-03-25 05:13:30.055648 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-03-25 05:13:30.055655 | orchestrator | Wednesday 25 March 2026 05:12:51 +0000 (0:00:01.507) 0:05:08.267 *******
2026-03-25 05:13:30.055662 | orchestrator | ok: [testbed-node-0]
2026-03-25 05:13:30.055668 | orchestrator |
2026-03-25 05:13:30.055675 | orchestrator | TASK [Set cluster configs] *****************************************************
2026-03-25 05:13:30.055682 | orchestrator | Wednesday 25 March 2026 05:12:52 +0000 (0:00:01.166) 0:05:09.434 *******
2026-03-25 05:13:30.055693 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__fe6f3167ab81d5784c37329f8a3bb9b2d91cf741'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}])
2026-03-25 05:13:30.055718 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__fe6f3167ab81d5784c37329f8a3bb9b2d91cf741'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}])
2026-03-25 05:13:30.055747 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__fe6f3167ab81d5784c37329f8a3bb9b2d91cf741'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}])
2026-03-25 05:13:30.055755 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__fe6f3167ab81d5784c37329f8a3bb9b2d91cf741'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}])
2026-03-25 05:13:30.055763 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__fe6f3167ab81d5784c37329f8a3bb9b2d91cf741'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}])
2026-03-25 05:13:30.055770 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__fe6f3167ab81d5784c37329f8a3bb9b2d91cf741'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__fe6f3167ab81d5784c37329f8a3bb9b2d91cf741'}])
2026-03-25 05:13:30.055778 | orchestrator |
2026-03-25 05:13:30.055785 | orchestrator | PLAY [Upgrade ceph mon cluster] ************************************************
2026-03-25 05:13:30.055791 | orchestrator |
2026-03-25 05:13:30.055797 | orchestrator | TASK [Remove ceph aliases] *****************************************************
2026-03-25 05:13:30.055861 | orchestrator | Wednesday 25 March 2026 05:13:02 +0000 (0:00:10.551) 0:05:19.986 *******
2026-03-25 05:13:30.055868 | orchestrator | ok: [testbed-node-0]
2026-03-25 05:13:30.055874 | orchestrator |
2026-03-25 05:13:30.055880 | orchestrator | TASK [Set mon_host_count] ******************************************************
2026-03-25 05:13:30.055887 | orchestrator | Wednesday 25 March 2026 05:13:04 +0000 (0:00:01.562) 0:05:21.548 *******
2026-03-25 05:13:30.055893 | orchestrator | ok: [testbed-node-0]
2026-03-25 05:13:30.055899 | orchestrator |
2026-03-25 05:13:30.055905 | orchestrator | TASK [Fail when less than three monitors] **************************************
2026-03-25 05:13:30.055912 | orchestrator | Wednesday 25 March 2026 05:13:05 +0000 (0:00:01.301) 0:05:22.849 *******
2026-03-25 05:13:30.055919 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:13:30.055927 | orchestrator |
2026-03-25 05:13:30.055933 | orchestrator | TASK [Select a running monitor] ************************************************
2026-03-25 05:13:30.055937 | orchestrator | Wednesday 25 March 2026 05:13:07 +0000 (0:00:01.179) 0:05:24.029 *******
2026-03-25 05:13:30.055940 | orchestrator | ok: [testbed-node-0]
2026-03-25 05:13:30.055944 | orchestrator |
2026-03-25 05:13:30.055948 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-03-25 05:13:30.055951 | orchestrator | Wednesday 25 March 2026 05:13:08 +0000 (0:00:01.212) 0:05:25.241 *******
2026-03-25 05:13:30.055955 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0
2026-03-25 05:13:30.055959 | orchestrator |
2026-03-25 05:13:30.055962 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-03-25 05:13:30.055966 | orchestrator | Wednesday 25 March 2026 05:13:09 +0000 (0:00:01.187) 0:05:26.428 *******
2026-03-25 05:13:30.055984 | orchestrator | ok: [testbed-node-0]
2026-03-25 05:13:30.055988 | orchestrator |
2026-03-25 05:13:30.055992 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-03-25 05:13:30.055995 | orchestrator | Wednesday 25 March 2026 05:13:10 +0000 (0:00:01.522) 0:05:27.951 *******
2026-03-25 05:13:30.056005 | orchestrator | ok: [testbed-node-0]
2026-03-25 05:13:30.056008 | orchestrator |
2026-03-25 05:13:30.056012 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-03-25 05:13:30.056016 | orchestrator | Wednesday 25 March 2026 05:13:12 +0000 (0:00:01.147) 0:05:29.099 *******
2026-03-25 05:13:30.056020 | orchestrator | ok: [testbed-node-0]
2026-03-25 05:13:30.056023 | orchestrator |
2026-03-25 05:13:30.056027 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-03-25 05:13:30.056031 | orchestrator | Wednesday 25 March 2026 05:13:13 +0000 (0:00:01.480) 0:05:30.579 *******
2026-03-25 05:13:30.056035 | orchestrator | ok: [testbed-node-0]
2026-03-25 05:13:30.056038 | orchestrator |
2026-03-25 05:13:30.056042 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-03-25 05:13:30.056046 | orchestrator | Wednesday 25 March 2026 05:13:14 +0000 (0:00:01.183) 0:05:31.763 *******
2026-03-25 05:13:30.056050 | orchestrator | ok: [testbed-node-0]
2026-03-25 05:13:30.056054 | orchestrator |
2026-03-25 05:13:30.056063 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-03-25 05:13:30.056068 | orchestrator | Wednesday 25 March 2026 05:13:15 +0000 (0:00:01.182) 0:05:32.945 *******
2026-03-25 05:13:30.056072 | orchestrator | ok: [testbed-node-0]
2026-03-25 05:13:30.056076 | orchestrator |
2026-03-25 05:13:30.056081 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-03-25 05:13:30.056086 | orchestrator | Wednesday 25 March 2026 05:13:17 +0000 (0:00:01.170) 0:05:34.116 *******
2026-03-25 05:13:30.056091 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:13:30.056095 | orchestrator |
2026-03-25 05:13:30.056100 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-03-25 05:13:30.056104 | orchestrator | Wednesday 25 March 2026 05:13:18 +0000 (0:00:01.151) 0:05:35.267 *******
2026-03-25 05:13:30.056108 | orchestrator | ok: [testbed-node-0]
2026-03-25 05:13:30.056113 | orchestrator |
2026-03-25 05:13:30.056117 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-03-25 05:13:30.056121 | orchestrator | Wednesday 25 March 2026 05:13:19 +0000 (0:00:01.122) 0:05:36.390 *******
2026-03-25 05:13:30.056126 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-03-25 05:13:30.056131 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-25 05:13:30.056138 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-25 05:13:30.056144 | orchestrator |
2026-03-25 05:13:30.056150 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-03-25 05:13:30.056156 | orchestrator | Wednesday 25 March 2026 05:13:21 +0000 (0:00:01.644) 0:05:38.035 *******
2026-03-25 05:13:30.056162 | orchestrator | ok: [testbed-node-0]
2026-03-25 05:13:30.056168 | orchestrator |
2026-03-25 05:13:30.056174 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-03-25 05:13:30.056180 | orchestrator | Wednesday 25 March 2026 05:13:22 +0000 (0:00:01.259) 0:05:39.295 *******
2026-03-25 05:13:30.056186 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-03-25 05:13:30.056191 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-25 05:13:30.056198 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-25 05:13:30.056204 | orchestrator |
2026-03-25 05:13:30.056210 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-03-25 05:13:30.056217 | orchestrator | Wednesday 25 March 2026 05:13:25 +0000 (0:00:03.221) 0:05:42.516 *******
2026-03-25 05:13:30.056224 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-03-25 05:13:30.056233 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-03-25 05:13:30.056240 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-03-25 05:13:30.056246 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:13:30.056250 | orchestrator |
2026-03-25 05:13:30.056255 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-03-25 05:13:30.056263 | orchestrator | Wednesday 25 March 2026 05:13:26 +0000 (0:00:01.443) 0:05:43.959 *******
2026-03-25 05:13:30.056270 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-03-25 05:13:30.056277 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-03-25 05:13:30.056282 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-03-25 05:13:30.056286 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:13:30.056290 | orchestrator |
2026-03-25 05:13:30.056295 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-03-25 05:13:30.056299 | orchestrator | Wednesday 25 March 2026 05:13:28 +0000 (0:00:01.944) 0:05:45.904 *******
2026-03-25 05:13:30.056310 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-03-25 05:13:50.186581 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-03-25 05:13:50.186711 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-03-25 05:13:50.186727 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:13:50.186739 | orchestrator |
2026-03-25 05:13:50.186750 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-03-25 05:13:50.186761 | orchestrator | Wednesday 25 March 2026 05:13:30 +0000 (0:00:01.155) 0:05:47.060 *******
2026-03-25 05:13:50.186773 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '928ffe0e6efa', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-25 05:13:22.865032', 'end': '2026-03-25 05:13:22.921087', 'delta': '0:00:00.056055', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['928ffe0e6efa'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-03-25 05:13:50.186786 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': 'cb4e3d9a68a8', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-25 05:13:23.443014', 'end': '2026-03-25 05:13:23.488934', 'delta': '0:00:00.045920', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['cb4e3d9a68a8'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-03-25 05:13:50.186907 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '90e526f29e10', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-25 05:13:24.277190', 'end': '2026-03-25 05:13:24.323102', 'delta': '0:00:00.045912', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['90e526f29e10'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-03-25 05:13:50.186920 | orchestrator |
2026-03-25 05:13:50.186931 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-03-25 05:13:50.186940 | orchestrator | Wednesday 25 March 2026 05:13:31 +0000 (0:00:01.177) 0:05:48.238 *******
2026-03-25 05:13:50.186951 | orchestrator | ok: [testbed-node-0]
2026-03-25 05:13:50.186961 | orchestrator |
2026-03-25 05:13:50.186971 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-03-25 05:13:50.186981 | orchestrator | Wednesday 25 March 2026 05:13:32 +0000 (0:00:01.609) 0:05:49.847 *******
2026-03-25 05:13:50.186990 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:13:50.186999 | orchestrator |
2026-03-25 05:13:50.187009 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-03-25 05:13:50.187019 | orchestrator | Wednesday 25 March 2026 05:13:34 +0000 (0:00:01.262) 0:05:51.110 *******
2026-03-25 05:13:50.187028 | orchestrator | ok: [testbed-node-0]
2026-03-25 05:13:50.187038 | orchestrator |
2026-03-25 05:13:50.187047 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-03-25 05:13:50.187057 | orchestrator | Wednesday 25 March 2026 05:13:35 +0000 (0:00:01.145) 0:05:52.255 *******
2026-03-25 05:13:50.187084 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)]
2026-03-25 05:13:50.187095 | orchestrator |
2026-03-25 05:13:50.187104 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-03-25 05:13:50.187114 | orchestrator | Wednesday 25 March 2026 05:13:37 +0000 (0:00:02.023) 0:05:54.279 *******
2026-03-25 05:13:50.187125 | orchestrator | ok: [testbed-node-0]
2026-03-25 05:13:50.187136 | orchestrator |
2026-03-25 05:13:50.187146 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-03-25 05:13:50.187157 | orchestrator | Wednesday 25 March 2026 05:13:38 +0000 (0:00:01.225) 0:05:55.504 *******
2026-03-25 05:13:50.187169 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:13:50.187179 | orchestrator |
2026-03-25 05:13:50.187189 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-03-25 05:13:50.187199 | orchestrator | Wednesday 25 March 2026 05:13:39 +0000 (0:00:01.166) 0:05:56.671 *******
2026-03-25 05:13:50.187208 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:13:50.187217 | orchestrator |
2026-03-25
05:13:50.187233 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-25 05:13:50.187243 | orchestrator | Wednesday 25 March 2026 05:13:40 +0000 (0:00:01.270) 0:05:57.941 ******* 2026-03-25 05:13:50.187252 | orchestrator | skipping: [testbed-node-0] 2026-03-25 05:13:50.187262 | orchestrator | 2026-03-25 05:13:50.187271 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-03-25 05:13:50.187281 | orchestrator | Wednesday 25 March 2026 05:13:42 +0000 (0:00:01.143) 0:05:59.084 ******* 2026-03-25 05:13:50.187291 | orchestrator | skipping: [testbed-node-0] 2026-03-25 05:13:50.187308 | orchestrator | 2026-03-25 05:13:50.187317 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-03-25 05:13:50.187327 | orchestrator | Wednesday 25 March 2026 05:13:43 +0000 (0:00:01.129) 0:06:00.214 ******* 2026-03-25 05:13:50.187336 | orchestrator | skipping: [testbed-node-0] 2026-03-25 05:13:50.187346 | orchestrator | 2026-03-25 05:13:50.187355 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-03-25 05:13:50.187365 | orchestrator | Wednesday 25 March 2026 05:13:44 +0000 (0:00:01.145) 0:06:01.359 ******* 2026-03-25 05:13:50.187374 | orchestrator | skipping: [testbed-node-0] 2026-03-25 05:13:50.187384 | orchestrator | 2026-03-25 05:13:50.187393 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-03-25 05:13:50.187402 | orchestrator | Wednesday 25 March 2026 05:13:45 +0000 (0:00:01.111) 0:06:02.471 ******* 2026-03-25 05:13:50.187409 | orchestrator | skipping: [testbed-node-0] 2026-03-25 05:13:50.187417 | orchestrator | 2026-03-25 05:13:50.187425 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-03-25 05:13:50.187433 | orchestrator | Wednesday 25 March 2026 05:13:46 +0000 (0:00:01.125) 
0:06:03.597 ******* 2026-03-25 05:13:50.187440 | orchestrator | skipping: [testbed-node-0] 2026-03-25 05:13:50.187448 | orchestrator | 2026-03-25 05:13:50.187456 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-03-25 05:13:50.187464 | orchestrator | Wednesday 25 March 2026 05:13:47 +0000 (0:00:01.143) 0:06:04.740 ******* 2026-03-25 05:13:50.187472 | orchestrator | skipping: [testbed-node-0] 2026-03-25 05:13:50.187480 | orchestrator | 2026-03-25 05:13:50.187487 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-03-25 05:13:50.187495 | orchestrator | Wednesday 25 March 2026 05:13:48 +0000 (0:00:01.158) 0:06:05.899 ******* 2026-03-25 05:13:50.187503 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-25 05:13:50.187512 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-25 05:13:50.187520 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 
'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-25 05:13:50.187529 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-25-01-43-00-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-03-25 05:13:50.187544 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-25 05:13:51.459475 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-25 05:13:51.459592 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 
'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-25 05:13:51.459614 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_225bc811-b117-4ab1-9890-e393d3b780be', 'scsi-SQEMU_QEMU_HARDDISK_225bc811-b117-4ab1-9890-e393d3b780be'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '225bc811', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_225bc811-b117-4ab1-9890-e393d3b780be-part16', 'scsi-SQEMU_QEMU_HARDDISK_225bc811-b117-4ab1-9890-e393d3b780be-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_225bc811-b117-4ab1-9890-e393d3b780be-part14', 'scsi-SQEMU_QEMU_HARDDISK_225bc811-b117-4ab1-9890-e393d3b780be-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_225bc811-b117-4ab1-9890-e393d3b780be-part15', 'scsi-SQEMU_QEMU_HARDDISK_225bc811-b117-4ab1-9890-e393d3b780be-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_225bc811-b117-4ab1-9890-e393d3b780be-part1', 'scsi-SQEMU_QEMU_HARDDISK_225bc811-b117-4ab1-9890-e393d3b780be-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': 
'79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-03-25 05:13:51.459632 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-25 05:13:51.459644 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-25 05:13:51.459679 | orchestrator | skipping: [testbed-node-0] 2026-03-25 05:13:51.459692 | orchestrator | 2026-03-25 05:13:51.459704 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-03-25 05:13:51.459716 | orchestrator | Wednesday 25 March 2026 05:13:50 +0000 (0:00:01.286) 0:06:07.185 ******* 2026-03-25 05:13:51.459756 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 
'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 05:13:51.459771 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 05:13:51.459782 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 05:13:51.459795 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-25-01-43-00-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 
'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 05:13:51.459858 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 05:13:51.459870 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 05:13:51.459910 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 
None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 05:14:15.659972 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_225bc811-b117-4ab1-9890-e393d3b780be', 'scsi-SQEMU_QEMU_HARDDISK_225bc811-b117-4ab1-9890-e393d3b780be'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '225bc811', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_225bc811-b117-4ab1-9890-e393d3b780be-part16', 'scsi-SQEMU_QEMU_HARDDISK_225bc811-b117-4ab1-9890-e393d3b780be-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_225bc811-b117-4ab1-9890-e393d3b780be-part14', 'scsi-SQEMU_QEMU_HARDDISK_225bc811-b117-4ab1-9890-e393d3b780be-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_225bc811-b117-4ab1-9890-e393d3b780be-part15', 'scsi-SQEMU_QEMU_HARDDISK_225bc811-b117-4ab1-9890-e393d3b780be-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': 
'5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_225bc811-b117-4ab1-9890-e393d3b780be-part1', 'scsi-SQEMU_QEMU_HARDDISK_225bc811-b117-4ab1-9890-e393d3b780be-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 05:14:15.660123 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 05:14:15.660152 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 05:14:15.660190 | 
orchestrator | skipping: [testbed-node-0] 2026-03-25 05:14:15.660203 | orchestrator | 2026-03-25 05:14:15.660214 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-03-25 05:14:15.660226 | orchestrator | Wednesday 25 March 2026 05:13:51 +0000 (0:00:01.277) 0:06:08.463 ******* 2026-03-25 05:14:15.660236 | orchestrator | ok: [testbed-node-0] 2026-03-25 05:14:15.660247 | orchestrator | 2026-03-25 05:14:15.660257 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-03-25 05:14:15.660266 | orchestrator | Wednesday 25 March 2026 05:13:52 +0000 (0:00:01.509) 0:06:09.973 ******* 2026-03-25 05:14:15.660276 | orchestrator | ok: [testbed-node-0] 2026-03-25 05:14:15.660285 | orchestrator | 2026-03-25 05:14:15.660310 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-25 05:14:15.660337 | orchestrator | Wednesday 25 March 2026 05:13:54 +0000 (0:00:01.176) 0:06:11.149 ******* 2026-03-25 05:14:15.660348 | orchestrator | ok: [testbed-node-0] 2026-03-25 05:14:15.660357 | orchestrator | 2026-03-25 05:14:15.660367 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-25 05:14:15.660377 | orchestrator | Wednesday 25 March 2026 05:13:55 +0000 (0:00:01.509) 0:06:12.658 ******* 2026-03-25 05:14:15.660387 | orchestrator | skipping: [testbed-node-0] 2026-03-25 05:14:15.660396 | orchestrator | 2026-03-25 05:14:15.660406 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-25 05:14:15.660415 | orchestrator | Wednesday 25 March 2026 05:13:56 +0000 (0:00:01.133) 0:06:13.792 ******* 2026-03-25 05:14:15.660425 | orchestrator | skipping: [testbed-node-0] 2026-03-25 05:14:15.660434 | orchestrator | 2026-03-25 05:14:15.660445 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-25 
05:14:15.660456 | orchestrator | Wednesday 25 March 2026 05:13:58 +0000 (0:00:01.280) 0:06:15.073 ******* 2026-03-25 05:14:15.660467 | orchestrator | skipping: [testbed-node-0] 2026-03-25 05:14:15.660478 | orchestrator | 2026-03-25 05:14:15.660489 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-03-25 05:14:15.660500 | orchestrator | Wednesday 25 March 2026 05:13:59 +0000 (0:00:01.233) 0:06:16.307 ******* 2026-03-25 05:14:15.660511 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-25 05:14:15.660522 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-03-25 05:14:15.660533 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-03-25 05:14:15.660543 | orchestrator | 2026-03-25 05:14:15.660554 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-03-25 05:14:15.660565 | orchestrator | Wednesday 25 March 2026 05:14:01 +0000 (0:00:01.976) 0:06:18.283 ******* 2026-03-25 05:14:15.660576 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-25 05:14:15.660587 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-25 05:14:15.660598 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-25 05:14:15.660609 | orchestrator | skipping: [testbed-node-0] 2026-03-25 05:14:15.660619 | orchestrator | 2026-03-25 05:14:15.660631 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-03-25 05:14:15.660642 | orchestrator | Wednesday 25 March 2026 05:14:02 +0000 (0:00:01.163) 0:06:19.447 ******* 2026-03-25 05:14:15.660653 | orchestrator | skipping: [testbed-node-0] 2026-03-25 05:14:15.660663 | orchestrator | 2026-03-25 05:14:15.660674 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-03-25 05:14:15.660685 | orchestrator | Wednesday 25 March 2026 05:14:03 +0000 
(0:00:01.116) 0:06:20.563 ******* 2026-03-25 05:14:15.660704 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-25 05:14:15.660715 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-25 05:14:15.660727 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-25 05:14:15.660738 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-03-25 05:14:15.660749 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-25 05:14:15.660760 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-25 05:14:15.660771 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-25 05:14:15.660782 | orchestrator | 2026-03-25 05:14:15.660793 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-03-25 05:14:15.660829 | orchestrator | Wednesday 25 March 2026 05:14:05 +0000 (0:00:02.123) 0:06:22.687 ******* 2026-03-25 05:14:15.660840 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-25 05:14:15.660850 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-25 05:14:15.660860 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-25 05:14:15.660869 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-03-25 05:14:15.660879 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-25 05:14:15.660888 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-25 05:14:15.660898 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-25 
05:14:15.660907 | orchestrator | 2026-03-25 05:14:15.660917 | orchestrator | TASK [Get ceph cluster status] ************************************************* 2026-03-25 05:14:15.660926 | orchestrator | Wednesday 25 March 2026 05:14:08 +0000 (0:00:03.010) 0:06:25.698 ******* 2026-03-25 05:14:15.660936 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] 2026-03-25 05:14:15.660945 | orchestrator | 2026-03-25 05:14:15.660955 | orchestrator | TASK [Display ceph health detail] ********************************************** 2026-03-25 05:14:15.660965 | orchestrator | Wednesday 25 March 2026 05:14:10 +0000 (0:00:02.196) 0:06:27.894 ******* 2026-03-25 05:14:15.660974 | orchestrator | skipping: [testbed-node-0] 2026-03-25 05:14:15.660984 | orchestrator | 2026-03-25 05:14:15.660993 | orchestrator | TASK [Fail if cluster isn't in an acceptable state] **************************** 2026-03-25 05:14:15.661003 | orchestrator | Wednesday 25 March 2026 05:14:12 +0000 (0:00:01.238) 0:06:29.133 ******* 2026-03-25 05:14:15.661012 | orchestrator | skipping: [testbed-node-0] 2026-03-25 05:14:15.661022 | orchestrator | 2026-03-25 05:14:15.661031 | orchestrator | TASK [Get the ceph quorum status] ********************************************** 2026-03-25 05:14:15.661041 | orchestrator | Wednesday 25 March 2026 05:14:13 +0000 (0:00:01.150) 0:06:30.284 ******* 2026-03-25 05:14:15.661051 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] 2026-03-25 05:14:15.661060 | orchestrator | 2026-03-25 05:14:15.661076 | orchestrator | TASK [Fail if the cluster quorum isn't in an acceptable state] ***************** 2026-03-25 05:14:15.661092 | orchestrator | Wednesday 25 March 2026 05:14:15 +0000 (0:00:02.378) 0:06:32.663 ******* 2026-03-25 05:15:17.682548 | orchestrator | skipping: [testbed-node-0] 2026-03-25 05:15:17.682700 | orchestrator | 2026-03-25 05:15:17.682729 | orchestrator | TASK [Ensure /var/lib/ceph/bootstrap-rbd-mirror is present] ******************** 
2026-03-25 05:15:17.682752 | orchestrator | Wednesday 25 March 2026 05:14:16 +0000 (0:00:01.124) 0:06:33.787 ******* 2026-03-25 05:15:17.682772 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-25 05:15:17.682791 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-25 05:15:17.682804 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-25 05:15:17.682936 | orchestrator | 2026-03-25 05:15:17.682958 | orchestrator | TASK [Create potentially missing keys (rbd and rbd-mirror)] ******************** 2026-03-25 05:15:17.682976 | orchestrator | Wednesday 25 March 2026 05:14:19 +0000 (0:00:02.468) 0:06:36.256 ******* 2026-03-25 05:15:17.682988 | orchestrator | ok: [testbed-node-0] => (item=['bootstrap-rbd', 'testbed-node-0']) 2026-03-25 05:15:17.682999 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=['bootstrap-rbd', 'testbed-node-1']) 2026-03-25 05:15:17.683011 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=['bootstrap-rbd', 'testbed-node-2']) 2026-03-25 05:15:17.683022 | orchestrator | ok: [testbed-node-0] => (item=['bootstrap-rbd-mirror', 'testbed-node-0']) 2026-03-25 05:15:17.683033 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=['bootstrap-rbd-mirror', 'testbed-node-1']) 2026-03-25 05:15:17.683044 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=['bootstrap-rbd-mirror', 'testbed-node-2']) 2026-03-25 05:15:17.683055 | orchestrator | 2026-03-25 05:15:17.683068 | orchestrator | TASK [Stop ceph mon] *********************************************************** 2026-03-25 05:15:17.683080 | orchestrator | Wednesday 25 March 2026 05:14:32 +0000 (0:00:13.398) 0:06:49.654 ******* 2026-03-25 05:15:17.683093 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0) 2026-03-25 05:15:17.683105 | orchestrator | ok: 
[testbed-node-0] => (item=testbed-node-0)
2026-03-25 05:15:17.683118 | orchestrator |
2026-03-25 05:15:17.683131 | orchestrator | TASK [Mask the mgr service] ****************************************************
2026-03-25 05:15:17.683142 | orchestrator | Wednesday 25 March 2026 05:14:36 +0000 (0:00:04.117) 0:06:53.772 *******
2026-03-25 05:15:17.683153 | orchestrator | changed: [testbed-node-0]
2026-03-25 05:15:17.683164 | orchestrator |
2026-03-25 05:15:17.683174 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-03-25 05:15:17.683185 | orchestrator | Wednesday 25 March 2026 05:14:39 +0000 (0:00:02.529) 0:06:56.302 *******
2026-03-25 05:15:17.683196 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0
2026-03-25 05:15:17.683206 | orchestrator |
2026-03-25 05:15:17.683217 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-03-25 05:15:17.683228 | orchestrator | Wednesday 25 March 2026 05:14:40 +0000 (0:00:01.485) 0:06:57.788 *******
2026-03-25 05:15:17.683238 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0
2026-03-25 05:15:17.683249 | orchestrator |
2026-03-25 05:15:17.683259 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-03-25 05:15:17.683270 | orchestrator | Wednesday 25 March 2026 05:14:42 +0000 (0:00:01.616) 0:06:59.405 *******
2026-03-25 05:15:17.683281 | orchestrator | ok: [testbed-node-0]
2026-03-25 05:15:17.683292 | orchestrator |
2026-03-25 05:15:17.683302 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-03-25 05:15:17.683314 | orchestrator | Wednesday 25 March 2026 05:14:43 +0000 (0:00:01.577) 0:07:00.982 *******
2026-03-25 05:15:17.683325 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:15:17.683335 | orchestrator |
2026-03-25 05:15:17.683346 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-03-25 05:15:17.683357 | orchestrator | Wednesday 25 March 2026 05:14:45 +0000 (0:00:01.130) 0:07:02.113 *******
2026-03-25 05:15:17.683367 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:15:17.683378 | orchestrator |
2026-03-25 05:15:17.683388 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-03-25 05:15:17.683399 | orchestrator | Wednesday 25 March 2026 05:14:46 +0000 (0:00:01.145) 0:07:03.258 *******
2026-03-25 05:15:17.683410 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:15:17.683420 | orchestrator |
2026-03-25 05:15:17.683431 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-03-25 05:15:17.683441 | orchestrator | Wednesday 25 March 2026 05:14:47 +0000 (0:00:01.184) 0:07:04.443 *******
2026-03-25 05:15:17.683461 | orchestrator | ok: [testbed-node-0]
2026-03-25 05:15:17.683472 | orchestrator |
2026-03-25 05:15:17.683483 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-03-25 05:15:17.683494 | orchestrator | Wednesday 25 March 2026 05:14:49 +0000 (0:00:01.577) 0:07:06.020 *******
2026-03-25 05:15:17.683504 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:15:17.683515 | orchestrator |
2026-03-25 05:15:17.683526 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-03-25 05:15:17.683537 | orchestrator | Wednesday 25 March 2026 05:14:50 +0000 (0:00:01.115) 0:07:07.135 *******
2026-03-25 05:15:17.683547 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:15:17.683558 | orchestrator |
2026-03-25 05:15:17.683569 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-03-25 05:15:17.683579 | orchestrator | Wednesday 25 March 2026 05:14:51 +0000 (0:00:01.203) 0:07:08.339 *******
2026-03-25 05:15:17.683590 | orchestrator | ok: [testbed-node-0]
2026-03-25 05:15:17.683601 | orchestrator |
2026-03-25 05:15:17.683611 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-03-25 05:15:17.683638 | orchestrator | Wednesday 25 March 2026 05:14:52 +0000 (0:00:01.615) 0:07:09.955 *******
2026-03-25 05:15:17.683649 | orchestrator | ok: [testbed-node-0]
2026-03-25 05:15:17.683660 | orchestrator |
2026-03-25 05:15:17.683693 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-03-25 05:15:17.683705 | orchestrator | Wednesday 25 March 2026 05:14:54 +0000 (0:00:01.549) 0:07:11.504 *******
2026-03-25 05:15:17.683716 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:15:17.683727 | orchestrator |
2026-03-25 05:15:17.683737 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-03-25 05:15:17.683748 | orchestrator | Wednesday 25 March 2026 05:14:55 +0000 (0:00:01.170) 0:07:12.675 *******
2026-03-25 05:15:17.683760 | orchestrator | ok: [testbed-node-0]
2026-03-25 05:15:17.683779 | orchestrator |
2026-03-25 05:15:17.683797 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-03-25 05:15:17.683841 | orchestrator | Wednesday 25 March 2026 05:14:56 +0000 (0:00:01.189) 0:07:13.864 *******
2026-03-25 05:15:17.683860 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:15:17.683878 | orchestrator |
2026-03-25 05:15:17.683896 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-03-25 05:15:17.683914 | orchestrator | Wednesday 25 March 2026 05:14:58 +0000 (0:00:01.215) 0:07:15.080 *******
2026-03-25 05:15:17.683933 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:15:17.683950 | orchestrator |
2026-03-25 05:15:17.683968 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-03-25 05:15:17.683986 | orchestrator | Wednesday 25 March 2026 05:14:59 +0000 (0:00:01.157) 0:07:16.238 *******
2026-03-25 05:15:17.684004 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:15:17.684022 | orchestrator |
2026-03-25 05:15:17.684039 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-03-25 05:15:17.684057 | orchestrator | Wednesday 25 March 2026 05:15:00 +0000 (0:00:01.133) 0:07:17.371 *******
2026-03-25 05:15:17.684076 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:15:17.684095 | orchestrator |
2026-03-25 05:15:17.684113 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-03-25 05:15:17.684131 | orchestrator | Wednesday 25 March 2026 05:15:01 +0000 (0:00:01.181) 0:07:18.553 *******
2026-03-25 05:15:17.684151 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:15:17.684169 | orchestrator |
2026-03-25 05:15:17.684188 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-03-25 05:15:17.684203 | orchestrator | Wednesday 25 March 2026 05:15:02 +0000 (0:00:01.120) 0:07:19.674 *******
2026-03-25 05:15:17.684214 | orchestrator | ok: [testbed-node-0]
2026-03-25 05:15:17.684225 | orchestrator |
2026-03-25 05:15:17.684235 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-03-25 05:15:17.684246 | orchestrator | Wednesday 25 March 2026 05:15:03 +0000 (0:00:01.227) 0:07:20.902 *******
2026-03-25 05:15:17.684267 | orchestrator | ok: [testbed-node-0]
2026-03-25 05:15:17.684278 | orchestrator |
2026-03-25 05:15:17.684289 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-03-25 05:15:17.684299 | orchestrator | Wednesday 25 March 2026 05:15:05 +0000 (0:00:01.154) 0:07:22.057 *******
2026-03-25 05:15:17.684310 | orchestrator | ok: [testbed-node-0]
2026-03-25 05:15:17.684321 | orchestrator |
2026-03-25 05:15:17.684331 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-03-25 05:15:17.684342 | orchestrator | Wednesday 25 March 2026 05:15:06 +0000 (0:00:01.183) 0:07:23.240 *******
2026-03-25 05:15:17.684352 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:15:17.684363 | orchestrator |
2026-03-25 05:15:17.684374 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-03-25 05:15:17.684384 | orchestrator | Wednesday 25 March 2026 05:15:07 +0000 (0:00:01.089) 0:07:24.330 *******
2026-03-25 05:15:17.684395 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:15:17.684406 | orchestrator |
2026-03-25 05:15:17.684416 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-03-25 05:15:17.684427 | orchestrator | Wednesday 25 March 2026 05:15:08 +0000 (0:00:01.140) 0:07:25.471 *******
2026-03-25 05:15:17.684438 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:15:17.684448 | orchestrator |
2026-03-25 05:15:17.684459 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-03-25 05:15:17.684470 | orchestrator | Wednesday 25 March 2026 05:15:09 +0000 (0:00:01.179) 0:07:26.651 *******
2026-03-25 05:15:17.684481 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:15:17.684491 | orchestrator |
2026-03-25 05:15:17.684502 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-03-25 05:15:17.684513 | orchestrator | Wednesday 25 March 2026 05:15:10 +0000 (0:00:01.192) 0:07:27.843 *******
2026-03-25 05:15:17.684523 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:15:17.684543 | orchestrator |
2026-03-25 05:15:17.684562 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-03-25 05:15:17.684599 | orchestrator | Wednesday 25 March 2026 05:15:11 +0000 (0:00:01.131) 0:07:28.975 *******
2026-03-25 05:15:17.684634 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:15:17.684653 | orchestrator |
2026-03-25 05:15:17.684669 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-03-25 05:15:17.684680 | orchestrator | Wednesday 25 March 2026 05:15:13 +0000 (0:00:01.138) 0:07:30.114 *******
2026-03-25 05:15:17.684690 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:15:17.684701 | orchestrator |
2026-03-25 05:15:17.684712 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-03-25 05:15:17.684723 | orchestrator | Wednesday 25 March 2026 05:15:14 +0000 (0:00:01.130) 0:07:31.244 *******
2026-03-25 05:15:17.684733 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:15:17.684744 | orchestrator |
2026-03-25 05:15:17.684755 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-03-25 05:15:17.684773 | orchestrator | Wednesday 25 March 2026 05:15:15 +0000 (0:00:01.204) 0:07:32.449 *******
2026-03-25 05:15:17.684790 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:15:17.684807 | orchestrator |
2026-03-25 05:15:17.684852 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-03-25 05:15:17.684871 | orchestrator | Wednesday 25 March 2026 05:15:16 +0000 (0:00:01.109) 0:07:33.558 *******
2026-03-25 05:15:17.684889 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:15:17.684900 | orchestrator |
2026-03-25 05:15:17.684920 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-03-25 05:15:17.684931 | orchestrator | Wednesday 25 March 2026 05:15:17 +0000 (0:00:01.120) 0:07:34.679 *******
2026-03-25 05:16:09.886965 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:16:09.887111 | orchestrator |
2026-03-25 05:16:09.887141 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-03-25 05:16:09.887162 | orchestrator | Wednesday 25 March 2026 05:15:18 +0000 (0:00:01.146) 0:07:35.826 *******
2026-03-25 05:16:09.887214 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:16:09.887234 | orchestrator |
2026-03-25 05:16:09.887252 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-03-25 05:16:09.887269 | orchestrator | Wednesday 25 March 2026 05:15:19 +0000 (0:00:01.184) 0:07:37.010 *******
2026-03-25 05:16:09.887286 | orchestrator | ok: [testbed-node-0]
2026-03-25 05:16:09.887307 | orchestrator |
2026-03-25 05:16:09.887326 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-03-25 05:16:09.887345 | orchestrator | Wednesday 25 March 2026 05:15:21 +0000 (0:00:01.974) 0:07:38.985 *******
2026-03-25 05:16:09.887364 | orchestrator | ok: [testbed-node-0]
2026-03-25 05:16:09.887382 | orchestrator |
2026-03-25 05:16:09.887399 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-03-25 05:16:09.887412 | orchestrator | Wednesday 25 March 2026 05:15:24 +0000 (0:00:02.511) 0:07:41.496 *******
2026-03-25 05:16:09.887424 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-0
2026-03-25 05:16:09.887438 | orchestrator |
2026-03-25 05:16:09.887450 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-03-25 05:16:09.887462 | orchestrator | Wednesday 25 March 2026 05:15:25 +0000 (0:00:01.500) 0:07:42.996 *******
2026-03-25 05:16:09.887475 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:16:09.887487 | orchestrator |
2026-03-25 05:16:09.887499 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-03-25 05:16:09.887511 | orchestrator | Wednesday 25 March 2026 05:15:27 +0000 (0:00:01.129) 0:07:44.126 *******
2026-03-25 05:16:09.887523 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:16:09.887536 | orchestrator |
2026-03-25 05:16:09.887548 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-03-25 05:16:09.887560 | orchestrator | Wednesday 25 March 2026 05:15:28 +0000 (0:00:01.133) 0:07:45.259 *******
2026-03-25 05:16:09.887573 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-25 05:16:09.887585 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-25 05:16:09.887598 | orchestrator |
2026-03-25 05:16:09.887610 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-03-25 05:16:09.887623 | orchestrator | Wednesday 25 March 2026 05:15:30 +0000 (0:00:01.860) 0:07:47.120 *******
2026-03-25 05:16:09.887635 | orchestrator | ok: [testbed-node-0]
2026-03-25 05:16:09.887647 | orchestrator |
2026-03-25 05:16:09.887660 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-03-25 05:16:09.887672 | orchestrator | Wednesday 25 March 2026 05:15:31 +0000 (0:00:01.666) 0:07:48.787 *******
2026-03-25 05:16:09.887684 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:16:09.887696 | orchestrator |
2026-03-25 05:16:09.887709 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-03-25 05:16:09.887722 | orchestrator | Wednesday 25 March 2026 05:15:32 +0000 (0:00:01.207) 0:07:49.995 *******
2026-03-25 05:16:09.887734 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:16:09.887747 | orchestrator |
2026-03-25 05:16:09.887758 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-03-25 05:16:09.887769 | orchestrator | Wednesday 25 March 2026 05:15:34 +0000 (0:00:01.221) 0:07:51.216 *******
2026-03-25 05:16:09.887779 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:16:09.887790 | orchestrator |
2026-03-25 05:16:09.887801 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-03-25 05:16:09.887812 | orchestrator | Wednesday 25 March 2026 05:15:35 +0000 (0:00:01.197) 0:07:52.414 *******
2026-03-25 05:16:09.887848 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-0
2026-03-25 05:16:09.887860 | orchestrator |
2026-03-25 05:16:09.887870 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-03-25 05:16:09.887881 | orchestrator | Wednesday 25 March 2026 05:15:36 +0000 (0:00:01.490) 0:07:53.904 *******
2026-03-25 05:16:09.887902 | orchestrator | ok: [testbed-node-0]
2026-03-25 05:16:09.887913 | orchestrator |
2026-03-25 05:16:09.887924 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-03-25 05:16:09.887935 | orchestrator | Wednesday 25 March 2026 05:15:38 +0000 (0:00:01.760) 0:07:55.665 *******
2026-03-25 05:16:09.887946 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-03-25 05:16:09.887956 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)
2026-03-25 05:16:09.887967 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)
2026-03-25 05:16:09.887978 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:16:09.887988 | orchestrator |
2026-03-25 05:16:09.887999 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-03-25 05:16:09.888009 | orchestrator | Wednesday 25 March 2026 05:15:39 +0000 (0:00:01.168) 0:07:56.833 *******
2026-03-25 05:16:09.888020 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:16:09.888031 | orchestrator |
2026-03-25 05:16:09.888041 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-03-25 05:16:09.888052 | orchestrator | Wednesday 25 March 2026 05:15:40 +0000 (0:00:01.146) 0:07:57.980 *******
2026-03-25 05:16:09.888063 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:16:09.888073 | orchestrator |
2026-03-25 05:16:09.888084 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-03-25 05:16:09.888095 | orchestrator | Wednesday 25 March 2026 05:15:42 +0000 (0:00:01.174) 0:07:59.154 *******
2026-03-25 05:16:09.888106 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:16:09.888116 | orchestrator |
2026-03-25 05:16:09.888144 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-03-25 05:16:09.888177 | orchestrator | Wednesday 25 March 2026 05:15:43 +0000 (0:00:01.120) 0:08:00.275 *******
2026-03-25 05:16:09.888188 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:16:09.888199 | orchestrator |
2026-03-25 05:16:09.888210 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-03-25 05:16:09.888221 | orchestrator | Wednesday 25 March 2026 05:15:44 +0000 (0:00:01.178) 0:08:01.454 *******
2026-03-25 05:16:09.888232 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:16:09.888242 | orchestrator |
2026-03-25 05:16:09.888253 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-03-25 05:16:09.888264 | orchestrator | Wednesday 25 March 2026 05:15:45 +0000 (0:00:01.124) 0:08:02.578 *******
2026-03-25 05:16:09.888275 | orchestrator | ok: [testbed-node-0]
2026-03-25 05:16:09.888285 | orchestrator |
2026-03-25 05:16:09.888296 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-03-25 05:16:09.888307 | orchestrator | Wednesday 25 March 2026 05:15:48 +0000 (0:00:02.558) 0:08:05.137 *******
2026-03-25 05:16:09.888318 | orchestrator | ok: [testbed-node-0]
2026-03-25 05:16:09.888328 | orchestrator |
2026-03-25 05:16:09.888339 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-03-25 05:16:09.888350 | orchestrator | Wednesday 25 March 2026 05:15:49 +0000 (0:00:01.170) 0:08:06.307 *******
2026-03-25 05:16:09.888361 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-0
2026-03-25 05:16:09.888371 | orchestrator |
2026-03-25 05:16:09.888382 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-03-25 05:16:09.888393 | orchestrator | Wednesday 25 March 2026 05:15:50 +0000 (0:00:01.541) 0:08:07.849 *******
2026-03-25 05:16:09.888404 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:16:09.888414 | orchestrator |
2026-03-25 05:16:09.888425 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-03-25 05:16:09.888436 | orchestrator | Wednesday 25 March 2026 05:15:52 +0000 (0:00:01.177) 0:08:09.026 *******
2026-03-25 05:16:09.888447 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:16:09.888458 | orchestrator |
2026-03-25 05:16:09.888468 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-03-25 05:16:09.888486 | orchestrator | Wednesday 25 March 2026 05:15:53 +0000 (0:00:01.175) 0:08:10.202 *******
2026-03-25 05:16:09.888497 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:16:09.888507 | orchestrator |
2026-03-25 05:16:09.888518 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-03-25 05:16:09.888529 | orchestrator | Wednesday 25 March 2026 05:15:54 +0000 (0:00:01.206) 0:08:11.409 *******
2026-03-25 05:16:09.888539 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:16:09.888550 | orchestrator |
2026-03-25 05:16:09.888561 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-03-25 05:16:09.888572 | orchestrator | Wednesday 25 March 2026 05:15:55 +0000 (0:00:01.186) 0:08:12.595 *******
2026-03-25 05:16:09.888583 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:16:09.888593 | orchestrator |
2026-03-25 05:16:09.888604 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-03-25 05:16:09.888615 | orchestrator | Wednesday 25 March 2026 05:15:56 +0000 (0:00:01.175) 0:08:13.770 *******
2026-03-25 05:16:09.888625 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:16:09.888636 | orchestrator |
2026-03-25 05:16:09.888647 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-03-25 05:16:09.888657 | orchestrator | Wednesday 25 March 2026 05:15:57 +0000 (0:00:01.142) 0:08:14.913 *******
2026-03-25 05:16:09.888668 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:16:09.888679 | orchestrator |
2026-03-25 05:16:09.888690 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-03-25 05:16:09.888700 | orchestrator | Wednesday 25 March 2026 05:15:59 +0000 (0:00:01.184) 0:08:16.097 *******
2026-03-25 05:16:09.888711 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:16:09.888722 | orchestrator |
2026-03-25 05:16:09.888732 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-03-25 05:16:09.888743 | orchestrator | Wednesday 25 March 2026 05:16:00 +0000 (0:00:01.226) 0:08:17.323 *******
2026-03-25 05:16:09.888753 | orchestrator | ok: [testbed-node-0]
2026-03-25 05:16:09.888764 | orchestrator |
2026-03-25 05:16:09.888775 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-03-25 05:16:09.888786 | orchestrator | Wednesday 25 March 2026 05:16:01 +0000 (0:00:01.163) 0:08:18.486 *******
2026-03-25 05:16:09.888796 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-0
2026-03-25 05:16:09.888807 | orchestrator |
2026-03-25 05:16:09.888835 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-03-25 05:16:09.888847 | orchestrator | Wednesday 25 March 2026 05:16:02 +0000 (0:00:01.466) 0:08:19.953 *******
2026-03-25 05:16:09.888857 | orchestrator | ok: [testbed-node-0] => (item=/etc/ceph)
2026-03-25 05:16:09.888869 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/)
2026-03-25 05:16:09.888879 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/mon)
2026-03-25 05:16:09.888890 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/osd)
2026-03-25 05:16:09.888901 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/mds)
2026-03-25 05:16:09.888912 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/tmp)
2026-03-25 05:16:09.888923 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/crash)
2026-03-25 05:16:09.888934 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/radosgw)
2026-03-25 05:16:09.888945 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-25 05:16:09.888956 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-25 05:16:09.888966 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-25 05:16:09.888977 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-25 05:16:09.888988 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-25 05:16:09.889004 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-25 05:16:09.889021 | orchestrator | ok: [testbed-node-0] => (item=/var/run/ceph)
2026-03-25 05:16:58.796722 | orchestrator | ok: [testbed-node-0] => (item=/var/log/ceph)
2026-03-25 05:16:58.796898 | orchestrator |
2026-03-25 05:16:58.796917 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-03-25 05:16:58.796931 | orchestrator | Wednesday 25 March 2026 05:16:09 +0000 (0:00:06.924) 0:08:26.877 *******
2026-03-25 05:16:58.796942 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:16:58.796954 | orchestrator |
2026-03-25 05:16:58.796965 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-03-25 05:16:58.796976 | orchestrator | Wednesday 25 March 2026 05:16:11 +0000 (0:00:01.233) 0:08:28.111 *******
2026-03-25 05:16:58.796987 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:16:58.796998 | orchestrator |
2026-03-25 05:16:58.797008 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-03-25 05:16:58.797019 | orchestrator | Wednesday 25 March 2026 05:16:12 +0000 (0:00:01.184) 0:08:29.295 *******
2026-03-25 05:16:58.797030 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:16:58.797040 | orchestrator |
2026-03-25 05:16:58.797051 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-03-25 05:16:58.797062 | orchestrator | Wednesday 25 March 2026 05:16:13 +0000 (0:00:01.138) 0:08:30.434 *******
2026-03-25 05:16:58.797072 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:16:58.797083 | orchestrator |
2026-03-25 05:16:58.797093 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-03-25 05:16:58.797104 | orchestrator | Wednesday 25 March 2026 05:16:14 +0000 (0:00:01.200) 0:08:31.634 *******
2026-03-25 05:16:58.797115 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:16:58.797125 | orchestrator |
2026-03-25 05:16:58.797136 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-03-25 05:16:58.797146 | orchestrator | Wednesday 25 March 2026 05:16:15 +0000 (0:00:01.157) 0:08:32.792 *******
2026-03-25 05:16:58.797157 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:16:58.797168 | orchestrator |
2026-03-25 05:16:58.797179 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-03-25 05:16:58.797191 | orchestrator | Wednesday 25 March 2026 05:16:16 +0000 (0:00:01.147) 0:08:33.940 *******
2026-03-25 05:16:58.797201 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:16:58.797212 | orchestrator |
2026-03-25 05:16:58.797223 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-03-25 05:16:58.797233 | orchestrator | Wednesday 25 March 2026 05:16:18 +0000 (0:00:01.213) 0:08:35.153 *******
2026-03-25 05:16:58.797246 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:16:58.797258 | orchestrator |
2026-03-25 05:16:58.797270 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-03-25 05:16:58.797283 | orchestrator | Wednesday 25 March 2026 05:16:19 +0000 (0:00:01.177) 0:08:36.331 *******
2026-03-25 05:16:58.797295 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:16:58.797306 | orchestrator |
2026-03-25 05:16:58.797318 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-03-25 05:16:58.797330 | orchestrator | Wednesday 25 March 2026 05:16:20 +0000 (0:00:01.217) 0:08:37.549 *******
2026-03-25 05:16:58.797343 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:16:58.797355 | orchestrator |
2026-03-25 05:16:58.797367 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-03-25 05:16:58.797379 | orchestrator | Wednesday 25 March 2026 05:16:21 +0000 (0:00:01.148) 0:08:38.697 *******
2026-03-25 05:16:58.797392 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:16:58.797404 | orchestrator |
2026-03-25 05:16:58.797416 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-03-25 05:16:58.797428 | orchestrator | Wednesday 25 March 2026 05:16:22 +0000 (0:00:01.157) 0:08:39.854 *******
2026-03-25 05:16:58.797441 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:16:58.797453 | orchestrator |
2026-03-25 05:16:58.797465 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-03-25 05:16:58.797505 | orchestrator | Wednesday 25 March 2026 05:16:24 +0000 (0:00:01.183) 0:08:41.038 *******
2026-03-25 05:16:58.797524 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:16:58.797549 | orchestrator |
2026-03-25 05:16:58.797574 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-03-25 05:16:58.797592 | orchestrator | Wednesday 25 March 2026 05:16:25 +0000 (0:00:01.277) 0:08:42.315 *******
2026-03-25 05:16:58.797610 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:16:58.797628 | orchestrator |
2026-03-25 05:16:58.797643 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-03-25 05:16:58.797658 | orchestrator | Wednesday 25 March 2026 05:16:26 +0000 (0:00:01.152) 0:08:43.468 *******
2026-03-25 05:16:58.797673 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:16:58.797691 | orchestrator |
2026-03-25 05:16:58.797708 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-03-25 05:16:58.797725 | orchestrator | Wednesday 25 March 2026 05:16:27 +0000 (0:00:01.228) 0:08:44.697 *******
2026-03-25 05:16:58.797742 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:16:58.797760 | orchestrator |
2026-03-25 05:16:58.797778 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-03-25 05:16:58.797797 | orchestrator | Wednesday 25 March 2026 05:16:28 +0000 (0:00:01.134) 0:08:45.831 *******
2026-03-25 05:16:58.797816 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:16:58.797875 | orchestrator |
2026-03-25 05:16:58.797890 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-03-25 05:16:58.797902 | orchestrator | Wednesday 25 March 2026 05:16:29 +0000 (0:00:01.135) 0:08:46.966 *******
2026-03-25 05:16:58.797914 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:16:58.797925 | orchestrator |
2026-03-25 05:16:58.797935 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-03-25 05:16:58.797963 | orchestrator | Wednesday 25 March 2026 05:16:31 +0000 (0:00:01.222) 0:08:48.189 *******
2026-03-25 05:16:58.797975 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:16:58.797985 | orchestrator |
2026-03-25 05:16:58.798081 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-03-25 05:16:58.798109 | orchestrator | Wednesday 25 March 2026 05:16:32 +0000 (0:00:01.131) 0:08:49.320 *******
2026-03-25 05:16:58.798136 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:16:58.798153 | orchestrator |
2026-03-25 05:16:58.798182 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-03-25 05:16:58.798199 | orchestrator | Wednesday 25 March 2026 05:16:33 +0000 (0:00:01.157) 0:08:50.478 *******
2026-03-25 05:16:58.798216 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:16:58.798233 | orchestrator |
2026-03-25 05:16:58.798250 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-03-25 05:16:58.798267 | orchestrator | Wednesday 25 March 2026 05:16:34 +0000 (0:00:01.193) 0:08:51.672 *******
2026-03-25 05:16:58.798285 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-03-25 05:16:58.798303 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-03-25 05:16:58.798320 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-03-25 05:16:58.798339 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:16:58.798357 | orchestrator |
2026-03-25 05:16:58.798376 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-03-25 05:16:58.798394 | orchestrator | Wednesday 25 March 2026 05:16:36 +0000 (0:00:01.825) 0:08:53.497 *******
2026-03-25 05:16:58.798411 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-03-25 05:16:58.798423 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-03-25 05:16:58.798433 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-03-25 05:16:58.798444 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:16:58.798454 | orchestrator |
2026-03-25 05:16:58.798465 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-03-25 05:16:58.798490 | orchestrator | Wednesday 25 March 2026 05:16:37 +0000 (0:00:01.448) 0:08:54.946 *******
2026-03-25 05:16:58.798501 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-03-25 05:16:58.798511 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-03-25 05:16:58.798522 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-03-25 05:16:58.798533 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:16:58.798543 | orchestrator |
2026-03-25 05:16:58.798553 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-03-25 05:16:58.798564 | orchestrator | Wednesday 25 March 2026 05:16:39 +0000 (0:00:01.472) 0:08:56.419 *******
2026-03-25 05:16:58.798575 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:16:58.798585 | orchestrator |
2026-03-25 05:16:58.798596 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-03-25 05:16:58.798607 | orchestrator | Wednesday 25 March 2026 05:16:40 +0000 (0:00:01.158) 0:08:57.578 *******
2026-03-25 05:16:58.798617 | orchestrator | skipping: [testbed-node-0] => (item=0)
2026-03-25 05:16:58.798628 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:16:58.798638 | orchestrator |
2026-03-25 05:16:58.798649 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-03-25 05:16:58.798660 | orchestrator | Wednesday 25 March 2026 05:16:41 +0000 (0:00:01.381) 0:08:58.959 *******
2026-03-25 05:16:58.798670 | orchestrator | ok: [testbed-node-0]
2026-03-25 05:16:58.798681 | orchestrator |
2026-03-25 05:16:58.798692 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] **********************************
2026-03-25 05:16:58.798707 | orchestrator | Wednesday 25 March 2026 05:16:43 +0000 (0:00:01.766) 0:09:00.726 *******
2026-03-25 05:16:58.798725 | orchestrator | ok: [testbed-node-0]
2026-03-25 05:16:58.798752 | orchestrator |
2026-03-25 05:16:58.798772 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] **********************************
2026-03-25 05:16:58.798789 | orchestrator | Wednesday 25 March 2026 05:16:44 +0000 (0:00:01.164) 0:09:01.890 *******
2026-03-25 05:16:58.798807 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0
2026-03-25 05:16:58.798825 | orchestrator |
2026-03-25 05:16:58.798872 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] **************
2026-03-25 05:16:58.798891 | orchestrator | Wednesday 25 March 2026 05:16:46 +0000 (0:00:01.539) 0:09:03.430 *******
2026-03-25 05:16:58.798910 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)]
2026-03-25 05:16:58.798927 | orchestrator |
2026-03-25 05:16:58.798944 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] *****************************
2026-03-25 05:16:58.798962 | orchestrator | Wednesday 25 March 2026 05:16:49 +0000 (0:00:03.511) 0:09:06.942 *******
2026-03-25 05:16:58.798981 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:16:58.798999 | orchestrator |
2026-03-25 05:16:58.799017 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] ****************************
2026-03-25 05:16:58.799034 | orchestrator | Wednesday 25 March 2026 05:16:51 +0000 (0:00:01.197) 0:09:08.139 *******
2026-03-25 05:16:58.799052 | orchestrator | ok: [testbed-node-0]
2026-03-25 05:16:58.799071 | orchestrator |
2026-03-25 05:16:58.799089 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] *******************
2026-03-25 05:16:58.799107 | orchestrator | Wednesday 25 March 2026 05:16:52 +0000 (0:00:01.146) 0:09:09.286 *******
2026-03-25 05:16:58.799125 | orchestrator | ok: [testbed-node-0]
2026-03-25 05:16:58.799143 | orchestrator |
2026-03-25 05:16:58.799160 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] *******************************
2026-03-25 05:16:58.799177 | orchestrator | Wednesday 25 March 2026 05:16:53 +0000 (0:00:01.151) 0:09:10.438 *******
2026-03-25 05:16:58.799195 | orchestrator | changed: [testbed-node-0]
2026-03-25 05:16:58.799214 | orchestrator |
2026-03-25 05:16:58.799232 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] ***********
2026-03-25 05:16:58.799250 | orchestrator | Wednesday 25 March 2026 05:16:55 +0000 (0:00:02.221) 0:09:12.659 *******
2026-03-25 05:16:58.799268 | orchestrator | ok: [testbed-node-0]
2026-03-25 05:16:58.799299 | orchestrator |
2026-03-25 05:16:58.799318 | orchestrator | TASK [ceph-mon : Create monitor directory] *************************************
2026-03-25 05:16:58.799347 | orchestrator | Wednesday 25 March 2026 05:16:57 +0000 (0:00:01.624) 0:09:14.283 *******
2026-03-25 05:16:58.799365 | orchestrator | ok: [testbed-node-0]
2026-03-25 05:16:58.799383 | orchestrator |
2026-03-25 05:16:58.799417 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] ***************
2026-03-25 05:17:56.760433 | orchestrator | Wednesday 25 March 2026 05:16:58 +0000 (0:00:01.513) 0:09:15.797 *******
2026-03-25 05:17:56.760559 | orchestrator | ok: [testbed-node-0]
2026-03-25 05:17:56.760606 | orchestrator |
2026-03-25 05:17:56.760629 | orchestrator | TASK [ceph-mon : Create admin keyring] *****************************************
2026-03-25 05:17:56.760661 | orchestrator | Wednesday 25 March 2026 05:17:00 +0000 (0:00:01.539) 0:09:17.337 *******
2026-03-25 05:17:56.760680 | orchestrator | ok: [testbed-node-0]
2026-03-25 05:17:56.760698 | orchestrator |
2026-03-25 05:17:56.760716 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ******************************************
2026-03-25 05:17:56.760735 | orchestrator | Wednesday 25 March 2026 05:17:02 +0000 (0:00:01.769) 0:09:19.107 *******
2026-03-25 05:17:56.760753 | orchestrator | ok: [testbed-node-0]
2026-03-25 05:17:56.760771 | orchestrator |
2026-03-25 05:17:56.760790 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ******************************
2026-03-25 05:17:56.760808 | orchestrator | Wednesday 25 March 2026 05:17:03 +0000 (0:00:01.713) 0:09:20.820 *******
2026-03-25 05:17:56.760826 | orchestrator | ok: [testbed-node-0] => (item=None)
2026-03-25 05:17:56.760846 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-03-25 05:17:56.760896 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-03-25 05:17:56.760916 | orchestrator | ok: [testbed-node-0 -> {{ item }}]
2026-03-25 05:17:56.760934 | orchestrator |
2026-03-25 05:17:56.760952 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************
2026-03-25 05:17:56.760971 | orchestrator | Wednesday 25 March 2026
05:17:07 +0000 (0:00:03.902) 0:09:24.723 ******* 2026-03-25 05:17:56.760989 | orchestrator | changed: [testbed-node-0] 2026-03-25 05:17:56.761008 | orchestrator | 2026-03-25 05:17:56.761026 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] ************************** 2026-03-25 05:17:56.761045 | orchestrator | Wednesday 25 March 2026 05:17:09 +0000 (0:00:02.095) 0:09:26.819 ******* 2026-03-25 05:17:56.761062 | orchestrator | ok: [testbed-node-0] 2026-03-25 05:17:56.761081 | orchestrator | 2026-03-25 05:17:56.761101 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2026-03-25 05:17:56.761121 | orchestrator | Wednesday 25 March 2026 05:17:10 +0000 (0:00:01.145) 0:09:27.964 ******* 2026-03-25 05:17:56.761141 | orchestrator | ok: [testbed-node-0] 2026-03-25 05:17:56.761161 | orchestrator | 2026-03-25 05:17:56.761180 | orchestrator | TASK [ceph-mon : Generate initial monmap] ************************************** 2026-03-25 05:17:56.761201 | orchestrator | Wednesday 25 March 2026 05:17:12 +0000 (0:00:01.162) 0:09:29.126 ******* 2026-03-25 05:17:56.761221 | orchestrator | ok: [testbed-node-0] 2026-03-25 05:17:56.761241 | orchestrator | 2026-03-25 05:17:56.761298 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2026-03-25 05:17:56.761319 | orchestrator | Wednesday 25 March 2026 05:17:14 +0000 (0:00:02.109) 0:09:31.236 ******* 2026-03-25 05:17:56.761337 | orchestrator | ok: [testbed-node-0] 2026-03-25 05:17:56.761355 | orchestrator | 2026-03-25 05:17:56.761374 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2026-03-25 05:17:56.761392 | orchestrator | Wednesday 25 March 2026 05:17:15 +0000 (0:00:01.520) 0:09:32.757 ******* 2026-03-25 05:17:56.761410 | orchestrator | skipping: [testbed-node-0] 2026-03-25 05:17:56.761428 | orchestrator | 2026-03-25 05:17:56.761445 | orchestrator | TASK [ceph-mon : Include 
start_monitor.yml] ************************************ 2026-03-25 05:17:56.761464 | orchestrator | Wednesday 25 March 2026 05:17:16 +0000 (0:00:01.137) 0:09:33.894 ******* 2026-03-25 05:17:56.761483 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0 2026-03-25 05:17:56.761535 | orchestrator | 2026-03-25 05:17:56.761601 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2026-03-25 05:17:56.761623 | orchestrator | Wednesday 25 March 2026 05:17:18 +0000 (0:00:01.473) 0:09:35.368 ******* 2026-03-25 05:17:56.761641 | orchestrator | skipping: [testbed-node-0] 2026-03-25 05:17:56.761659 | orchestrator | 2026-03-25 05:17:56.761678 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2026-03-25 05:17:56.761698 | orchestrator | Wednesday 25 March 2026 05:17:19 +0000 (0:00:01.113) 0:09:36.481 ******* 2026-03-25 05:17:56.761717 | orchestrator | skipping: [testbed-node-0] 2026-03-25 05:17:56.761731 | orchestrator | 2026-03-25 05:17:56.761742 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2026-03-25 05:17:56.761752 | orchestrator | Wednesday 25 March 2026 05:17:20 +0000 (0:00:01.096) 0:09:37.578 ******* 2026-03-25 05:17:56.761763 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0 2026-03-25 05:17:56.761774 | orchestrator | 2026-03-25 05:17:56.761785 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] ***************** 2026-03-25 05:17:56.761795 | orchestrator | Wednesday 25 March 2026 05:17:22 +0000 (0:00:01.484) 0:09:39.062 ******* 2026-03-25 05:17:56.761806 | orchestrator | ok: [testbed-node-0] 2026-03-25 05:17:56.761817 | orchestrator | 2026-03-25 05:17:56.761828 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 2026-03-25 05:17:56.761838 | orchestrator | Wednesday 25 March 2026 
05:17:24 +0000 (0:00:02.284) 0:09:41.347 ******* 2026-03-25 05:17:56.761924 | orchestrator | ok: [testbed-node-0] 2026-03-25 05:17:56.761943 | orchestrator | 2026-03-25 05:17:56.761954 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] *************************************** 2026-03-25 05:17:56.761965 | orchestrator | Wednesday 25 March 2026 05:17:26 +0000 (0:00:02.034) 0:09:43.382 ******* 2026-03-25 05:17:56.761975 | orchestrator | ok: [testbed-node-0] 2026-03-25 05:17:56.761986 | orchestrator | 2026-03-25 05:17:56.761997 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2026-03-25 05:17:56.762008 | orchestrator | Wednesday 25 March 2026 05:17:28 +0000 (0:00:02.418) 0:09:45.801 ******* 2026-03-25 05:17:56.762083 | orchestrator | changed: [testbed-node-0] 2026-03-25 05:17:56.762098 | orchestrator | 2026-03-25 05:17:56.762109 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] ********************************** 2026-03-25 05:17:56.762120 | orchestrator | Wednesday 25 March 2026 05:17:31 +0000 (0:00:03.209) 0:09:49.010 ******* 2026-03-25 05:17:56.762184 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0 2026-03-25 05:17:56.762197 | orchestrator | 2026-03-25 05:17:56.762234 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] 
************* 2026-03-25 05:17:56.762246 | orchestrator | Wednesday 25 March 2026 05:17:33 +0000 (0:00:01.642) 0:09:50.653 ******* 2026-03-25 05:17:56.762257 | orchestrator | ok: [testbed-node-0] 2026-03-25 05:17:56.762267 | orchestrator | 2026-03-25 05:17:56.762278 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2026-03-25 05:17:56.762290 | orchestrator | Wednesday 25 March 2026 05:17:35 +0000 (0:00:02.300) 0:09:52.954 ******* 2026-03-25 05:17:56.762301 | orchestrator | ok: [testbed-node-0] 2026-03-25 05:17:56.762311 | orchestrator | 2026-03-25 05:17:56.762322 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2026-03-25 05:17:56.762333 | orchestrator | Wednesday 25 March 2026 05:17:38 +0000 (0:00:03.062) 0:09:56.016 ******* 2026-03-25 05:17:56.762344 | orchestrator | skipping: [testbed-node-0] 2026-03-25 05:17:56.762354 | orchestrator | 2026-03-25 05:17:56.762365 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2026-03-25 05:17:56.762376 | orchestrator | Wednesday 25 March 2026 05:17:40 +0000 (0:00:01.201) 0:09:57.218 ******* 2026-03-25 05:17:56.762389 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__fe6f3167ab81d5784c37329f8a3bb9b2d91cf741'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2026-03-25 05:17:56.762417 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__fe6f3167ab81d5784c37329f8a3bb9b2d91cf741'}}, {'key': 'cluster_network', 'value': 
'192.168.16.0/20'}]) 2026-03-25 05:17:56.762429 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__fe6f3167ab81d5784c37329f8a3bb9b2d91cf741'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2026-03-25 05:17:56.762440 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__fe6f3167ab81d5784c37329f8a3bb9b2d91cf741'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2026-03-25 05:17:56.762452 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__fe6f3167ab81d5784c37329f8a3bb9b2d91cf741'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2026-03-25 05:17:56.762464 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__fe6f3167ab81d5784c37329f8a3bb9b2d91cf741'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__fe6f3167ab81d5784c37329f8a3bb9b2d91cf741'}])  2026-03-25 05:17:56.762477 | orchestrator | 2026-03-25 05:17:56.762488 | orchestrator | TASK [Start ceph mgr] ********************************************************** 2026-03-25 05:17:56.762499 | orchestrator | Wednesday 25 March 2026 05:17:50 +0000 (0:00:10.478) 0:10:07.696 ******* 
2026-03-25 05:17:56.762510 | orchestrator | changed: [testbed-node-0] 2026-03-25 05:17:56.762521 | orchestrator | 2026-03-25 05:17:56.762531 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-03-25 05:17:56.762542 | orchestrator | Wednesday 25 March 2026 05:17:53 +0000 (0:00:02.481) 0:10:10.178 ******* 2026-03-25 05:17:56.762553 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-25 05:17:56.762564 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-03-25 05:17:56.762575 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-03-25 05:17:56.762585 | orchestrator | 2026-03-25 05:17:56.762596 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-03-25 05:17:56.762607 | orchestrator | Wednesday 25 March 2026 05:17:55 +0000 (0:00:02.172) 0:10:12.350 ******* 2026-03-25 05:17:56.762618 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-25 05:17:56.762629 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-25 05:17:56.762639 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-25 05:17:56.762650 | orchestrator | skipping: [testbed-node-0] 2026-03-25 05:17:56.762661 | orchestrator | 2026-03-25 05:17:56.762677 | orchestrator | TASK [Non container | waiting for the monitor to join the quorum...] *********** 2026-03-25 05:17:56.762694 | orchestrator | Wednesday 25 March 2026 05:17:56 +0000 (0:00:01.405) 0:10:13.756 ******* 2026-03-25 05:18:26.417703 | orchestrator | skipping: [testbed-node-0] 2026-03-25 05:18:26.417821 | orchestrator | 2026-03-25 05:18:26.417838 | orchestrator | TASK [Container | waiting for the containerized monitor to join the quorum...] 
*** 2026-03-25 05:18:26.417915 | orchestrator | Wednesday 25 March 2026 05:17:57 +0000 (0:00:01.176) 0:10:14.932 ******* 2026-03-25 05:18:26.417930 | orchestrator | ok: [testbed-node-0] 2026-03-25 05:18:26.417942 | orchestrator | 2026-03-25 05:18:26.417953 | orchestrator | PLAY [Upgrade ceph mon cluster] ************************************************ 2026-03-25 05:18:26.417964 | orchestrator | 2026-03-25 05:18:26.417975 | orchestrator | TASK [Remove ceph aliases] ***************************************************** 2026-03-25 05:18:26.417986 | orchestrator | Wednesday 25 March 2026 05:18:00 +0000 (0:00:02.353) 0:10:17.285 ******* 2026-03-25 05:18:26.417996 | orchestrator | ok: [testbed-node-1] 2026-03-25 05:18:26.418007 | orchestrator | 2026-03-25 05:18:26.418091 | orchestrator | TASK [Set mon_host_count] ****************************************************** 2026-03-25 05:18:26.418106 | orchestrator | Wednesday 25 March 2026 05:18:01 +0000 (0:00:01.144) 0:10:18.430 ******* 2026-03-25 05:18:26.418117 | orchestrator | ok: [testbed-node-1] 2026-03-25 05:18:26.418127 | orchestrator | 2026-03-25 05:18:26.418138 | orchestrator | TASK [Fail when less than three monitors] ************************************** 2026-03-25 05:18:26.418150 | orchestrator | Wednesday 25 March 2026 05:18:02 +0000 (0:00:00.799) 0:10:19.229 ******* 2026-03-25 05:18:26.418161 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:18:26.418172 | orchestrator | 2026-03-25 05:18:26.418183 | orchestrator | TASK [Select a running monitor] ************************************************ 2026-03-25 05:18:26.418194 | orchestrator | Wednesday 25 March 2026 05:18:03 +0000 (0:00:00.799) 0:10:20.028 ******* 2026-03-25 05:18:26.418204 | orchestrator | ok: [testbed-node-1] 2026-03-25 05:18:26.418215 | orchestrator | 2026-03-25 05:18:26.418226 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-03-25 05:18:26.418239 | orchestrator | Wednesday 25 March 
2026 05:18:03 +0000 (0:00:00.807) 0:10:20.836 ******* 2026-03-25 05:18:26.418252 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-1 2026-03-25 05:18:26.418264 | orchestrator | 2026-03-25 05:18:26.418276 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-03-25 05:18:26.418288 | orchestrator | Wednesday 25 March 2026 05:18:04 +0000 (0:00:01.119) 0:10:21.956 ******* 2026-03-25 05:18:26.418301 | orchestrator | ok: [testbed-node-1] 2026-03-25 05:18:26.418313 | orchestrator | 2026-03-25 05:18:26.418325 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-03-25 05:18:26.418337 | orchestrator | Wednesday 25 March 2026 05:18:06 +0000 (0:00:01.520) 0:10:23.477 ******* 2026-03-25 05:18:26.418349 | orchestrator | ok: [testbed-node-1] 2026-03-25 05:18:26.418361 | orchestrator | 2026-03-25 05:18:26.418373 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-03-25 05:18:26.418385 | orchestrator | Wednesday 25 March 2026 05:18:07 +0000 (0:00:01.127) 0:10:24.604 ******* 2026-03-25 05:18:26.418397 | orchestrator | ok: [testbed-node-1] 2026-03-25 05:18:26.418409 | orchestrator | 2026-03-25 05:18:26.418422 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-03-25 05:18:26.418434 | orchestrator | Wednesday 25 March 2026 05:18:09 +0000 (0:00:01.575) 0:10:26.180 ******* 2026-03-25 05:18:26.418446 | orchestrator | ok: [testbed-node-1] 2026-03-25 05:18:26.418459 | orchestrator | 2026-03-25 05:18:26.418471 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-03-25 05:18:26.418483 | orchestrator | Wednesday 25 March 2026 05:18:10 +0000 (0:00:01.126) 0:10:27.306 ******* 2026-03-25 05:18:26.418495 | orchestrator | ok: [testbed-node-1] 2026-03-25 05:18:26.418507 | orchestrator | 2026-03-25 05:18:26.418520 | 
orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-03-25 05:18:26.418532 | orchestrator | Wednesday 25 March 2026 05:18:11 +0000 (0:00:01.127) 0:10:28.433 ******* 2026-03-25 05:18:26.418545 | orchestrator | ok: [testbed-node-1] 2026-03-25 05:18:26.418557 | orchestrator | 2026-03-25 05:18:26.418569 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-03-25 05:18:26.418582 | orchestrator | Wednesday 25 March 2026 05:18:12 +0000 (0:00:01.193) 0:10:29.627 ******* 2026-03-25 05:18:26.418592 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:18:26.418613 | orchestrator | 2026-03-25 05:18:26.418624 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-03-25 05:18:26.418635 | orchestrator | Wednesday 25 March 2026 05:18:13 +0000 (0:00:01.164) 0:10:30.792 ******* 2026-03-25 05:18:26.418646 | orchestrator | ok: [testbed-node-1] 2026-03-25 05:18:26.418656 | orchestrator | 2026-03-25 05:18:26.418667 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-03-25 05:18:26.418678 | orchestrator | Wednesday 25 March 2026 05:18:14 +0000 (0:00:01.219) 0:10:32.012 ******* 2026-03-25 05:18:26.418688 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-25 05:18:26.418699 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-03-25 05:18:26.418710 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-25 05:18:26.418721 | orchestrator | 2026-03-25 05:18:26.418732 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-03-25 05:18:26.418742 | orchestrator | Wednesday 25 March 2026 05:18:16 +0000 (0:00:01.710) 0:10:33.722 ******* 2026-03-25 05:18:26.418753 | orchestrator | ok: [testbed-node-1] 2026-03-25 05:18:26.418764 | 
orchestrator | 2026-03-25 05:18:26.418774 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-03-25 05:18:26.418785 | orchestrator | Wednesday 25 March 2026 05:18:17 +0000 (0:00:01.248) 0:10:34.971 ******* 2026-03-25 05:18:26.418796 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-25 05:18:26.418806 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-03-25 05:18:26.418817 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-25 05:18:26.418828 | orchestrator | 2026-03-25 05:18:26.418854 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-03-25 05:18:26.418965 | orchestrator | Wednesday 25 March 2026 05:18:20 +0000 (0:00:02.880) 0:10:37.851 ******* 2026-03-25 05:18:26.418998 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-03-25 05:18:26.419009 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-03-25 05:18:26.419020 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-03-25 05:18:26.419031 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:18:26.419041 | orchestrator | 2026-03-25 05:18:26.419052 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-03-25 05:18:26.419062 | orchestrator | Wednesday 25 March 2026 05:18:22 +0000 (0:00:01.502) 0:10:39.354 ******* 2026-03-25 05:18:26.419075 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-03-25 05:18:26.419089 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not 
containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-03-25 05:18:26.419100 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-03-25 05:18:26.419111 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:18:26.419122 | orchestrator | 2026-03-25 05:18:26.419132 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-03-25 05:18:26.419143 | orchestrator | Wednesday 25 March 2026 05:18:24 +0000 (0:00:01.664) 0:10:41.019 ******* 2026-03-25 05:18:26.419156 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-25 05:18:26.419179 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-25 05:18:26.419190 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | 
bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-25 05:18:26.419201 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:18:26.419212 | orchestrator | 2026-03-25 05:18:26.419223 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-03-25 05:18:26.419233 | orchestrator | Wednesday 25 March 2026 05:18:25 +0000 (0:00:01.195) 0:10:42.215 ******* 2026-03-25 05:18:26.419247 | orchestrator | ok: [testbed-node-1] => (item={'changed': False, 'stdout': 'f2f4f0f2e000', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-25 05:18:18.510820', 'end': '2026-03-25 05:18:18.563247', 'delta': '0:00:00.052427', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['f2f4f0f2e000'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-03-25 05:18:26.419275 | orchestrator | ok: [testbed-node-1] => (item={'changed': False, 'stdout': 'cb4e3d9a68a8', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-25 05:18:19.063238', 'end': '2026-03-25 05:18:19.114059', 'delta': '0:00:00.050821', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['cb4e3d9a68a8'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 
'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-03-25 05:18:45.362386 | orchestrator | ok: [testbed-node-1] => (item={'changed': False, 'stdout': '90e526f29e10', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-25 05:18:19.613678', 'end': '2026-03-25 05:18:19.656099', 'delta': '0:00:00.042421', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['90e526f29e10'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-03-25 05:18:45.362491 | orchestrator | 2026-03-25 05:18:45.362504 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-03-25 05:18:45.362513 | orchestrator | Wednesday 25 March 2026 05:18:26 +0000 (0:00:01.207) 0:10:43.422 ******* 2026-03-25 05:18:45.362520 | orchestrator | ok: [testbed-node-1] 2026-03-25 05:18:45.362528 | orchestrator | 2026-03-25 05:18:45.362535 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-03-25 05:18:45.362562 | orchestrator | Wednesday 25 March 2026 05:18:27 +0000 (0:00:01.386) 0:10:44.809 ******* 2026-03-25 05:18:45.362571 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:18:45.362579 | orchestrator | 2026-03-25 05:18:45.362585 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-03-25 05:18:45.362591 | orchestrator | Wednesday 25 March 2026 05:18:29 +0000 (0:00:01.342) 0:10:46.152 ******* 2026-03-25 05:18:45.362598 | orchestrator | ok: [testbed-node-1] 2026-03-25 05:18:45.362605 | orchestrator | 2026-03-25 05:18:45.362612 | orchestrator | TASK 
[ceph-facts : Get current fsid] *******************************************
2026-03-25 05:18:45.362618 | orchestrator | Wednesday 25 March 2026 05:18:30 +0000 (0:00:01.155) 0:10:47.308 *******
2026-03-25 05:18:45.362625 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)]
2026-03-25 05:18:45.362632 | orchestrator |
2026-03-25 05:18:45.362639 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-03-25 05:18:45.362646 | orchestrator | Wednesday 25 March 2026 05:18:32 +0000 (0:00:02.033) 0:10:49.341 *******
2026-03-25 05:18:45.362652 | orchestrator | ok: [testbed-node-1]
2026-03-25 05:18:45.362659 | orchestrator |
2026-03-25 05:18:45.362666 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-03-25 05:18:45.362674 | orchestrator | Wednesday 25 March 2026 05:18:33 +0000 (0:00:01.157) 0:10:50.499 *******
2026-03-25 05:18:45.362681 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:18:45.362689 | orchestrator |
2026-03-25 05:18:45.362696 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-03-25 05:18:45.362703 | orchestrator | Wednesday 25 March 2026 05:18:34 +0000 (0:00:01.192) 0:10:51.691 *******
2026-03-25 05:18:45.362710 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:18:45.362717 | orchestrator |
2026-03-25 05:18:45.362724 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-03-25 05:18:45.362731 | orchestrator | Wednesday 25 March 2026 05:18:36 +0000 (0:00:01.353) 0:10:53.045 *******
2026-03-25 05:18:45.362738 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:18:45.362745 | orchestrator |
2026-03-25 05:18:45.362752 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-03-25 05:18:45.362759 | orchestrator | Wednesday 25 March 2026 05:18:37 +0000 (0:00:01.137) 0:10:54.182 *******
2026-03-25 05:18:45.362766 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:18:45.362773 | orchestrator |
2026-03-25 05:18:45.362779 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-03-25 05:18:45.362786 | orchestrator | Wednesday 25 March 2026 05:18:38 +0000 (0:00:01.184) 0:10:55.367 *******
2026-03-25 05:18:45.362793 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:18:45.362800 | orchestrator |
2026-03-25 05:18:45.362807 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-03-25 05:18:45.362813 | orchestrator | Wednesday 25 March 2026 05:18:39 +0000 (0:00:01.136) 0:10:56.504 *******
2026-03-25 05:18:45.362820 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:18:45.362826 | orchestrator |
2026-03-25 05:18:45.362833 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-03-25 05:18:45.362838 | orchestrator | Wednesday 25 March 2026 05:18:40 +0000 (0:00:01.118) 0:10:57.622 *******
2026-03-25 05:18:45.362844 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:18:45.362851 | orchestrator |
2026-03-25 05:18:45.362858 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-03-25 05:18:45.362864 | orchestrator | Wednesday 25 March 2026 05:18:41 +0000 (0:00:01.134) 0:10:58.756 *******
2026-03-25 05:18:45.362951 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:18:45.362960 | orchestrator |
2026-03-25 05:18:45.362967 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-03-25 05:18:45.362974 | orchestrator | Wednesday 25 March 2026 05:18:42 +0000 (0:00:01.168) 0:10:59.925 *******
2026-03-25 05:18:45.362981 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:18:45.362990 | orchestrator |
2026-03-25 05:18:45.363005 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-03-25 05:18:45.363013 | orchestrator | Wednesday 25 March 2026 05:18:44 +0000 (0:00:01.229) 0:11:01.154 *******
2026-03-25 05:18:45.363055 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-25 05:18:45.363066 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-25 05:18:45.363075 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-25 05:18:45.363084 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-25-01-43-05-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})
2026-03-25 05:18:45.363093 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-25 05:18:45.363101 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-25 05:18:45.363108 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-25 05:18:45.363133 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2a85f599-c628-4cff-bf05-087f83983aef', 'scsi-SQEMU_QEMU_HARDDISK_2a85f599-c628-4cff-bf05-087f83983aef'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '2a85f599', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2a85f599-c628-4cff-bf05-087f83983aef-part16', 'scsi-SQEMU_QEMU_HARDDISK_2a85f599-c628-4cff-bf05-087f83983aef-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2a85f599-c628-4cff-bf05-087f83983aef-part14', 'scsi-SQEMU_QEMU_HARDDISK_2a85f599-c628-4cff-bf05-087f83983aef-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2a85f599-c628-4cff-bf05-087f83983aef-part15', 'scsi-SQEMU_QEMU_HARDDISK_2a85f599-c628-4cff-bf05-087f83983aef-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2a85f599-c628-4cff-bf05-087f83983aef-part1', 'scsi-SQEMU_QEMU_HARDDISK_2a85f599-c628-4cff-bf05-087f83983aef-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})
2026-03-25 05:18:46.596190 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-25 05:18:46.596317 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-25 05:18:46.596342 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:18:46.596364 | orchestrator |
2026-03-25 05:18:46.596405 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] ***
2026-03-25 05:18:46.596425 | orchestrator | Wednesday 25 March 2026 05:18:45 +0000 (0:00:01.204) 0:11:02.359 *******
2026-03-25 05:18:46.596446 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-25 05:18:46.596469 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-25 05:18:46.596519 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-25 05:18:46.596540 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-25-01-43-05-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-25 05:18:46.596587 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-25 05:18:46.596609 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-25 05:18:46.596627 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-25 05:18:46.596711 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2a85f599-c628-4cff-bf05-087f83983aef', 'scsi-SQEMU_QEMU_HARDDISK_2a85f599-c628-4cff-bf05-087f83983aef'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '2a85f599', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2a85f599-c628-4cff-bf05-087f83983aef-part16', 'scsi-SQEMU_QEMU_HARDDISK_2a85f599-c628-4cff-bf05-087f83983aef-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2a85f599-c628-4cff-bf05-087f83983aef-part14', 'scsi-SQEMU_QEMU_HARDDISK_2a85f599-c628-4cff-bf05-087f83983aef-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2a85f599-c628-4cff-bf05-087f83983aef-part15', 'scsi-SQEMU_QEMU_HARDDISK_2a85f599-c628-4cff-bf05-087f83983aef-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2a85f599-c628-4cff-bf05-087f83983aef-part1', 'scsi-SQEMU_QEMU_HARDDISK_2a85f599-c628-4cff-bf05-087f83983aef-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-25 05:18:46.596768 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-25 05:19:21.799827 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-25 05:19:21.800000 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:19:21.800020 | orchestrator |
2026-03-25 05:19:21.800034 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-03-25 05:19:21.800046 | orchestrator | Wednesday 25 March 2026 05:18:46 +0000 (0:00:01.499) 0:11:03.600 *******
2026-03-25 05:19:21.800057 | orchestrator | ok: [testbed-node-1]
2026-03-25 05:19:21.800069 | orchestrator |
2026-03-25 05:19:21.800080 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-03-25 05:19:21.800091 | orchestrator | Wednesday 25 March 2026 05:18:48 +0000 (0:00:01.131) 0:11:05.100 *******
2026-03-25 05:19:21.800102 | orchestrator | ok: [testbed-node-1]
2026-03-25 05:19:21.800113 | orchestrator |
2026-03-25 05:19:21.800124 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-03-25 05:19:21.800135 | orchestrator | Wednesday 25 March 2026 05:18:49 +0000 (0:00:01.453) 0:11:06.232 *******
2026-03-25 05:19:21.800146 | orchestrator | ok: [testbed-node-1]
2026-03-25 05:19:21.800157 | orchestrator |
2026-03-25 05:19:21.800168 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-03-25 05:19:21.800203 | orchestrator | Wednesday 25 March 2026 05:18:50 +0000 (0:00:01.117) 0:11:07.686 *******
2026-03-25 05:19:21.800215 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:19:21.800226 | orchestrator |
2026-03-25 05:19:21.800237 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-03-25 05:19:21.800247 | orchestrator | Wednesday 25 March 2026 05:18:51 +0000 (0:00:01.117) 0:11:08.804 *******
2026-03-25 05:19:21.800258 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:19:21.800269 | orchestrator |
2026-03-25 05:19:21.800280 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-03-25 05:19:21.800291 | orchestrator | Wednesday 25 March 2026 05:18:53 +0000 (0:00:01.279) 0:11:10.083 *******
2026-03-25 05:19:21.800301 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:19:21.800312 | orchestrator |
2026-03-25 05:19:21.800323 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-03-25 05:19:21.800334 | orchestrator | Wednesday 25 March 2026 05:18:54 +0000 (0:00:01.199) 0:11:11.282 *******
2026-03-25 05:19:21.800345 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0)
2026-03-25 05:19:21.800356 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-03-25 05:19:21.800368 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2)
2026-03-25 05:19:21.800381 | orchestrator |
2026-03-25 05:19:21.800393 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-03-25 05:19:21.800405 | orchestrator | Wednesday 25 March 2026 05:18:55 +0000 (0:00:01.700) 0:11:12.984 *******
2026-03-25 05:19:21.800418 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-03-25 05:19:21.800430 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-03-25 05:19:21.800443 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-03-25 05:19:21.800455 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:19:21.800467 | orchestrator |
2026-03-25 05:19:21.800479 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-03-25 05:19:21.800506 | orchestrator | Wednesday 25 March 2026 05:18:57 +0000 (0:00:01.152) 0:11:14.136 *******
2026-03-25 05:19:21.800518 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:19:21.800530 | orchestrator |
2026-03-25 05:19:21.800543 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-03-25 05:19:21.800555 | orchestrator | Wednesday 25 March 2026 05:18:58 +0000 (0:00:01.163) 0:11:15.300 *******
2026-03-25 05:19:21.800567 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-25 05:19:21.800580 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-03-25 05:19:21.800591 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-25 05:19:21.800603 | orchestrator | ok: [testbed-node-1 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-03-25 05:19:21.800616 | orchestrator | ok: [testbed-node-1 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-03-25 05:19:21.800629 | orchestrator | ok: [testbed-node-1 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-03-25 05:19:21.800641 | orchestrator | ok: [testbed-node-1 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-03-25 05:19:21.800653 | orchestrator |
2026-03-25 05:19:21.800664 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-03-25 05:19:21.800675 | orchestrator | Wednesday 25 March 2026 05:19:00 +0000 (0:00:02.104) 0:11:17.405 *******
2026-03-25 05:19:21.800685 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-25 05:19:21.800696 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-03-25 05:19:21.800706 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-25 05:19:21.800717 | orchestrator | ok: [testbed-node-1 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-03-25 05:19:21.800744 | orchestrator | ok: [testbed-node-1 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-03-25 05:19:21.800764 | orchestrator | ok: [testbed-node-1 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-03-25 05:19:21.800775 | orchestrator | ok: [testbed-node-1 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-03-25 05:19:21.800786 | orchestrator |
2026-03-25 05:19:21.800796 | orchestrator | TASK [Get ceph cluster status] *************************************************
2026-03-25 05:19:21.800807 | orchestrator | Wednesday 25 March 2026 05:19:02 +0000 (0:00:02.263) 0:11:19.668 *******
2026-03-25 05:19:21.800818 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:19:21.800828 | orchestrator |
2026-03-25 05:19:21.800839 | orchestrator | TASK [Display ceph health detail] **********************************************
2026-03-25 05:19:21.800850 | orchestrator | Wednesday 25 March 2026 05:19:03 +0000 (0:00:00.871) 0:11:20.540 *******
2026-03-25 05:19:21.800860 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:19:21.800871 | orchestrator |
2026-03-25 05:19:21.800922 | orchestrator | TASK [Fail if cluster isn't in an acceptable state] ****************************
2026-03-25 05:19:21.800935 | orchestrator | Wednesday 25 March 2026 05:19:04 +0000 (0:00:00.868) 0:11:21.408 *******
2026-03-25 05:19:21.800946 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:19:21.800957 | orchestrator |
2026-03-25 05:19:21.800968 | orchestrator | TASK [Get the ceph quorum status] **********************************************
2026-03-25 05:19:21.800978 | orchestrator | Wednesday 25 March 2026 05:19:05 +0000 (0:00:00.799) 0:11:22.208 *******
2026-03-25 05:19:21.800989 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:19:21.800999 | orchestrator |
2026-03-25 05:19:21.801010 | orchestrator | TASK [Fail if the cluster quorum isn't in an acceptable state] *****************
2026-03-25 05:19:21.801020 | orchestrator | Wednesday 25 March 2026 05:19:06 +0000 (0:00:01.306) 0:11:23.515 *******
2026-03-25 05:19:21.801031 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:19:21.801042 | orchestrator |
2026-03-25 05:19:21.801052 | orchestrator | TASK [Ensure /var/lib/ceph/bootstrap-rbd-mirror is present] ********************
2026-03-25 05:19:21.801063 | orchestrator | Wednesday 25 March 2026 05:19:07 +0000 (0:00:00.793) 0:11:24.308 *******
2026-03-25 05:19:21.801073 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-03-25 05:19:21.801084 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-03-25 05:19:21.801095 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-03-25 05:19:21.801105 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:19:21.801116 | orchestrator |
2026-03-25 05:19:21.801127 | orchestrator | TASK [Create potentially missing keys (rbd and rbd-mirror)] ********************
2026-03-25 05:19:21.801137 | orchestrator | Wednesday 25 March 2026 05:19:08 +0000 (0:00:01.107) 0:11:25.416 *******
2026-03-25 05:19:21.801148 | orchestrator | skipping: [testbed-node-1] => (item=['bootstrap-rbd', 'testbed-node-0'])
2026-03-25 05:19:21.801158 | orchestrator | skipping: [testbed-node-1] => (item=['bootstrap-rbd', 'testbed-node-1'])
2026-03-25 05:19:21.801169 | orchestrator | skipping: [testbed-node-1] => (item=['bootstrap-rbd', 'testbed-node-2'])
2026-03-25 05:19:21.801179 | orchestrator | skipping: [testbed-node-1] => (item=['bootstrap-rbd-mirror', 'testbed-node-0'])
2026-03-25 05:19:21.801190 | orchestrator | skipping: [testbed-node-1] => (item=['bootstrap-rbd-mirror', 'testbed-node-1'])
2026-03-25 05:19:21.801201 | orchestrator | skipping: [testbed-node-1] => (item=['bootstrap-rbd-mirror', 'testbed-node-2'])
2026-03-25 05:19:21.801211 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:19:21.801222 | orchestrator |
2026-03-25 05:19:21.801232 | orchestrator | TASK [Stop ceph mon] ***********************************************************
2026-03-25 05:19:21.801243 | orchestrator | Wednesday 25 March 2026 05:19:09 +0000 (0:00:01.490) 0:11:26.906 *******
2026-03-25 05:19:21.801253 | orchestrator | changed: [testbed-node-1] => (item=testbed-node-1)
2026-03-25 05:19:21.801264 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-03-25 05:19:21.801275 | orchestrator |
2026-03-25 05:19:21.801291 | orchestrator | TASK [Mask the mgr service] ****************************************************
2026-03-25 05:19:21.801302 | orchestrator | Wednesday 25 March 2026 05:19:13 +0000 (0:00:03.298) 0:11:30.205 *******
2026-03-25 05:19:21.801320 | orchestrator | changed: [testbed-node-1]
2026-03-25 05:19:21.801330 | orchestrator |
2026-03-25 05:19:21.801341 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-03-25 05:19:21.801352 | orchestrator | Wednesday 25 March 2026 05:19:15 +0000 (0:00:02.263) 0:11:32.469 *******
2026-03-25 05:19:21.801363 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-1
2026-03-25 05:19:21.801374 | orchestrator |
2026-03-25 05:19:21.801385 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-03-25 05:19:21.801396 | orchestrator | Wednesday 25 March 2026 05:19:16 +0000 (0:00:01.160) 0:11:33.629 *******
2026-03-25 05:19:21.801407 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-1
2026-03-25 05:19:21.801417 | orchestrator |
2026-03-25 05:19:21.801428 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-03-25 05:19:21.801439 | orchestrator | Wednesday 25 March 2026 05:19:17 +0000 (0:00:01.245) 0:11:34.875 *******
2026-03-25 05:19:21.801450 | orchestrator | ok: [testbed-node-1]
2026-03-25 05:19:21.801461 | orchestrator |
2026-03-25 05:19:21.801472 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-03-25 05:19:21.801483 | orchestrator | Wednesday 25 March 2026 05:19:19 +0000 (0:00:01.616) 0:11:36.491 *******
2026-03-25 05:19:21.801493 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:19:21.801504 | orchestrator |
2026-03-25 05:19:21.801515 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-03-25 05:19:21.801526 | orchestrator | Wednesday 25 March 2026 05:19:20 +0000 (0:00:01.164) 0:11:37.655 *******
2026-03-25 05:19:21.801536 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:19:21.801547 | orchestrator |
2026-03-25 05:19:21.801558 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-03-25 05:19:21.801575 | orchestrator | Wednesday 25 March 2026 05:19:21 +0000 (0:00:01.141) 0:11:38.798 *******
2026-03-25 05:20:04.207452 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:20:04.207570 | orchestrator |
2026-03-25 05:20:04.207588 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-03-25 05:20:04.207600 | orchestrator | Wednesday 25 March 2026 05:19:22 +0000 (0:00:01.151) 0:11:39.949 *******
2026-03-25 05:20:04.207612 | orchestrator | ok: [testbed-node-1]
2026-03-25 05:20:04.207623 | orchestrator |
2026-03-25 05:20:04.207634 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-03-25 05:20:04.207645 | orchestrator | Wednesday 25 March 2026 05:19:24 +0000 (0:00:01.692) 0:11:41.642 *******
2026-03-25 05:20:04.207657 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:20:04.207668 | orchestrator |
2026-03-25 05:20:04.207679 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-03-25 05:20:04.207689 | orchestrator | Wednesday 25 March 2026 05:19:25 +0000 (0:00:01.132) 0:11:42.774 *******
2026-03-25 05:20:04.207700 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:20:04.207711 | orchestrator |
2026-03-25 05:20:04.207722 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-03-25 05:20:04.207732 | orchestrator | Wednesday 25 March 2026 05:19:26 +0000 (0:00:01.132) 0:11:43.907 *******
2026-03-25 05:20:04.207743 | orchestrator | ok: [testbed-node-1]
2026-03-25 05:20:04.207754 | orchestrator |
2026-03-25 05:20:04.207765 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-03-25 05:20:04.207775 | orchestrator | Wednesday 25 March 2026 05:19:28 +0000 (0:00:01.619) 0:11:45.526 *******
2026-03-25 05:20:04.207786 | orchestrator | ok: [testbed-node-1]
2026-03-25 05:20:04.207797 | orchestrator |
2026-03-25 05:20:04.207808 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-03-25 05:20:04.207818 | orchestrator | Wednesday 25 March 2026 05:19:30 +0000 (0:00:01.582) 0:11:47.108 *******
2026-03-25 05:20:04.207829 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:20:04.207840 | orchestrator |
2026-03-25 05:20:04.207850 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-03-25 05:20:04.207884 | orchestrator | Wednesday 25 March 2026 05:19:30 +0000 (0:00:00.790) 0:11:47.899 *******
2026-03-25 05:20:04.207895 | orchestrator | ok: [testbed-node-1]
2026-03-25 05:20:04.207954 | orchestrator |
2026-03-25 05:20:04.207966 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-03-25 05:20:04.207977 | orchestrator | Wednesday 25 March 2026 05:19:31 +0000 (0:00:00.804) 0:11:48.704 *******
2026-03-25 05:20:04.207990 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:20:04.208003 | orchestrator |
2026-03-25 05:20:04.208015 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-03-25 05:20:04.208028 | orchestrator | Wednesday 25 March 2026 05:19:32 +0000 (0:00:00.792) 0:11:49.497 *******
2026-03-25 05:20:04.208040 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:20:04.208054 | orchestrator |
2026-03-25 05:20:04.208067 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-03-25 05:20:04.208078 | orchestrator | Wednesday 25 March 2026 05:19:33 +0000 (0:00:00.795) 0:11:50.293 *******
2026-03-25 05:20:04.208090 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:20:04.208103 | orchestrator |
2026-03-25 05:20:04.208115 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-03-25 05:20:04.208127 | orchestrator | Wednesday 25 March 2026 05:19:34 +0000 (0:00:00.828) 0:11:51.121 *******
2026-03-25 05:20:04.208140 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:20:04.208153 | orchestrator |
2026-03-25 05:20:04.208165 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-03-25 05:20:04.208178 | orchestrator | Wednesday 25 March 2026 05:19:34 +0000 (0:00:00.797) 0:11:51.919 *******
2026-03-25 05:20:04.208190 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:20:04.208202 | orchestrator |
2026-03-25 05:20:04.208214 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-03-25 05:20:04.208227 | orchestrator | Wednesday 25 March 2026 05:19:35 +0000 (0:00:00.822) 0:11:52.742 *******
2026-03-25 05:20:04.208238 | orchestrator | ok: [testbed-node-1]
2026-03-25 05:20:04.208251 | orchestrator |
2026-03-25 05:20:04.208279 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-03-25 05:20:04.208292 | orchestrator | Wednesday 25 March 2026 05:19:36 +0000 (0:00:00.867) 0:11:53.609 *******
2026-03-25 05:20:04.208305 | orchestrator | ok: [testbed-node-1]
2026-03-25 05:20:04.208318 | orchestrator |
2026-03-25 05:20:04.208330 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-03-25 05:20:04.208343 | orchestrator | Wednesday 25 March 2026 05:19:37 +0000 (0:00:00.843) 0:11:54.453 *******
2026-03-25 05:20:04.208354 | orchestrator | ok: [testbed-node-1]
2026-03-25 05:20:04.208365 | orchestrator |
2026-03-25 05:20:04.208375 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-03-25 05:20:04.208386 | orchestrator | Wednesday 25 March 2026 05:19:38 +0000 (0:00:00.820) 0:11:55.273 *******
2026-03-25 05:20:04.208396 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:20:04.208407 | orchestrator |
2026-03-25 05:20:04.208417 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-03-25 05:20:04.208428 | orchestrator | Wednesday 25 March 2026 05:19:39 +0000 (0:00:00.860) 0:11:56.134 *******
2026-03-25 05:20:04.208439 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:20:04.208450 | orchestrator |
2026-03-25 05:20:04.208461 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-03-25 05:20:04.208472 | orchestrator | Wednesday 25 March 2026 05:19:39 +0000 (0:00:00.764) 0:11:56.899 *******
2026-03-25 05:20:04.208483 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:20:04.208494 | orchestrator |
2026-03-25 05:20:04.208504 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-03-25 05:20:04.208515 | orchestrator | Wednesday 25 March 2026 05:19:40 +0000 (0:00:00.768) 0:11:57.668 *******
2026-03-25 05:20:04.208526 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:20:04.208536 | orchestrator |
2026-03-25 05:20:04.208547 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-03-25 05:20:04.208566 | orchestrator | Wednesday 25 March 2026 05:19:41 +0000 (0:00:00.764) 0:11:58.432 *******
2026-03-25 05:20:04.208577 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:20:04.208588 | orchestrator |
2026-03-25 05:20:04.208615 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-03-25 05:20:04.208626 | orchestrator | Wednesday 25 March 2026 05:19:42 +0000 (0:00:00.867) 0:11:59.300 *******
2026-03-25 05:20:04.208637 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:20:04.208648 | orchestrator |
2026-03-25 05:20:04.208658 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-03-25 05:20:04.208669 |
orchestrator | Wednesday 25 March 2026 05:19:43 +0000 (0:00:00.829) 0:12:00.130 ******* 2026-03-25 05:20:04.208680 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:20:04.208691 | orchestrator | 2026-03-25 05:20:04.208701 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-03-25 05:20:04.208713 | orchestrator | Wednesday 25 March 2026 05:19:43 +0000 (0:00:00.777) 0:12:00.908 ******* 2026-03-25 05:20:04.208723 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:20:04.208734 | orchestrator | 2026-03-25 05:20:04.208745 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-03-25 05:20:04.208755 | orchestrator | Wednesday 25 March 2026 05:19:44 +0000 (0:00:00.794) 0:12:01.702 ******* 2026-03-25 05:20:04.208766 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:20:04.208777 | orchestrator | 2026-03-25 05:20:04.208787 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-03-25 05:20:04.208798 | orchestrator | Wednesday 25 March 2026 05:19:45 +0000 (0:00:00.834) 0:12:02.537 ******* 2026-03-25 05:20:04.208809 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:20:04.208819 | orchestrator | 2026-03-25 05:20:04.208830 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-03-25 05:20:04.208841 | orchestrator | Wednesday 25 March 2026 05:19:46 +0000 (0:00:00.759) 0:12:03.296 ******* 2026-03-25 05:20:04.208851 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:20:04.208862 | orchestrator | 2026-03-25 05:20:04.208873 | orchestrator | TASK [ceph-common : Include selinux.yml] *************************************** 2026-03-25 05:20:04.208883 | orchestrator | Wednesday 25 March 2026 05:19:47 +0000 (0:00:00.787) 0:12:04.084 ******* 2026-03-25 05:20:04.208894 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:20:04.208926 | 
orchestrator | 2026-03-25 05:20:04.208938 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-03-25 05:20:04.208949 | orchestrator | Wednesday 25 March 2026 05:19:47 +0000 (0:00:00.804) 0:12:04.888 ******* 2026-03-25 05:20:04.208960 | orchestrator | ok: [testbed-node-1] 2026-03-25 05:20:04.208970 | orchestrator | 2026-03-25 05:20:04.208981 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-03-25 05:20:04.208992 | orchestrator | Wednesday 25 March 2026 05:19:49 +0000 (0:00:01.696) 0:12:06.585 ******* 2026-03-25 05:20:04.209002 | orchestrator | ok: [testbed-node-1] 2026-03-25 05:20:04.209013 | orchestrator | 2026-03-25 05:20:04.209024 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-03-25 05:20:04.209034 | orchestrator | Wednesday 25 March 2026 05:19:51 +0000 (0:00:02.063) 0:12:08.648 ******* 2026-03-25 05:20:04.209045 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-1 2026-03-25 05:20:04.209057 | orchestrator | 2026-03-25 05:20:04.209068 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-03-25 05:20:04.209078 | orchestrator | Wednesday 25 March 2026 05:19:52 +0000 (0:00:01.142) 0:12:09.791 ******* 2026-03-25 05:20:04.209089 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:20:04.209100 | orchestrator | 2026-03-25 05:20:04.209110 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-03-25 05:20:04.209121 | orchestrator | Wednesday 25 March 2026 05:19:53 +0000 (0:00:01.147) 0:12:10.938 ******* 2026-03-25 05:20:04.209131 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:20:04.209149 | orchestrator | 2026-03-25 05:20:04.209160 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-03-25 
05:20:04.209171 | orchestrator | Wednesday 25 March 2026 05:19:55 +0000 (0:00:01.163) 0:12:12.102 ******* 2026-03-25 05:20:04.209181 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-03-25 05:20:04.209197 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-03-25 05:20:04.209208 | orchestrator | 2026-03-25 05:20:04.209219 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-03-25 05:20:04.209230 | orchestrator | Wednesday 25 March 2026 05:19:56 +0000 (0:00:01.896) 0:12:13.998 ******* 2026-03-25 05:20:04.209241 | orchestrator | ok: [testbed-node-1] 2026-03-25 05:20:04.209251 | orchestrator | 2026-03-25 05:20:04.209262 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-03-25 05:20:04.209273 | orchestrator | Wednesday 25 March 2026 05:19:58 +0000 (0:00:01.482) 0:12:15.480 ******* 2026-03-25 05:20:04.209283 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:20:04.209294 | orchestrator | 2026-03-25 05:20:04.209304 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-03-25 05:20:04.209315 | orchestrator | Wednesday 25 March 2026 05:19:59 +0000 (0:00:01.216) 0:12:16.697 ******* 2026-03-25 05:20:04.209326 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:20:04.209336 | orchestrator | 2026-03-25 05:20:04.209347 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-03-25 05:20:04.209357 | orchestrator | Wednesday 25 March 2026 05:20:00 +0000 (0:00:00.781) 0:12:17.479 ******* 2026-03-25 05:20:04.209368 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:20:04.209379 | orchestrator | 2026-03-25 05:20:04.209389 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-03-25 05:20:04.209400 | orchestrator | Wednesday 25 
March 2026 05:20:01 +0000 (0:00:00.791) 0:12:18.271 ******* 2026-03-25 05:20:04.209411 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-1 2026-03-25 05:20:04.209422 | orchestrator | 2026-03-25 05:20:04.209432 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-03-25 05:20:04.209443 | orchestrator | Wednesday 25 March 2026 05:20:02 +0000 (0:00:01.135) 0:12:19.406 ******* 2026-03-25 05:20:04.209454 | orchestrator | ok: [testbed-node-1] 2026-03-25 05:20:04.209465 | orchestrator | 2026-03-25 05:20:04.209475 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-03-25 05:20:04.209493 | orchestrator | Wednesday 25 March 2026 05:20:04 +0000 (0:00:01.801) 0:12:21.208 ******* 2026-03-25 05:20:44.382386 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-25 05:20:44.382525 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-25 05:20:44.382549 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)  2026-03-25 05:20:44.382564 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:20:44.382581 | orchestrator | 2026-03-25 05:20:44.382596 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-03-25 05:20:44.382611 | orchestrator | Wednesday 25 March 2026 05:20:05 +0000 (0:00:01.162) 0:12:22.370 ******* 2026-03-25 05:20:44.382625 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:20:44.382639 | orchestrator | 2026-03-25 05:20:44.382654 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-03-25 05:20:44.382670 | orchestrator | Wednesday 25 March 2026 05:20:06 +0000 (0:00:01.093) 0:12:23.464 ******* 2026-03-25 05:20:44.382685 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:20:44.382701 
| orchestrator | 2026-03-25 05:20:44.382717 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-03-25 05:20:44.382733 | orchestrator | Wednesday 25 March 2026 05:20:07 +0000 (0:00:01.190) 0:12:24.654 ******* 2026-03-25 05:20:44.382750 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:20:44.382766 | orchestrator | 2026-03-25 05:20:44.382815 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-03-25 05:20:44.382832 | orchestrator | Wednesday 25 March 2026 05:20:08 +0000 (0:00:01.136) 0:12:25.791 ******* 2026-03-25 05:20:44.382849 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:20:44.382866 | orchestrator | 2026-03-25 05:20:44.382879 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-03-25 05:20:44.382889 | orchestrator | Wednesday 25 March 2026 05:20:09 +0000 (0:00:01.156) 0:12:26.948 ******* 2026-03-25 05:20:44.382899 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:20:44.382910 | orchestrator | 2026-03-25 05:20:44.382920 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-03-25 05:20:44.382996 | orchestrator | Wednesday 25 March 2026 05:20:10 +0000 (0:00:00.776) 0:12:27.724 ******* 2026-03-25 05:20:44.383007 | orchestrator | ok: [testbed-node-1] 2026-03-25 05:20:44.383030 | orchestrator | 2026-03-25 05:20:44.383040 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-03-25 05:20:44.383050 | orchestrator | Wednesday 25 March 2026 05:20:12 +0000 (0:00:02.246) 0:12:29.970 ******* 2026-03-25 05:20:44.383066 | orchestrator | ok: [testbed-node-1] 2026-03-25 05:20:44.383081 | orchestrator | 2026-03-25 05:20:44.383091 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-03-25 05:20:44.383101 | orchestrator | Wednesday 25 March 2026 05:20:13 +0000 
(0:00:00.798) 0:12:30.768 ******* 2026-03-25 05:20:44.383110 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-1 2026-03-25 05:20:44.383120 | orchestrator | 2026-03-25 05:20:44.383130 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-03-25 05:20:44.383139 | orchestrator | Wednesday 25 March 2026 05:20:15 +0000 (0:00:01.264) 0:12:32.033 ******* 2026-03-25 05:20:44.383148 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:20:44.383167 | orchestrator | 2026-03-25 05:20:44.383177 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-03-25 05:20:44.383186 | orchestrator | Wednesday 25 March 2026 05:20:16 +0000 (0:00:01.279) 0:12:33.313 ******* 2026-03-25 05:20:44.383196 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:20:44.383205 | orchestrator | 2026-03-25 05:20:44.383215 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-03-25 05:20:44.383225 | orchestrator | Wednesday 25 March 2026 05:20:17 +0000 (0:00:01.159) 0:12:34.473 ******* 2026-03-25 05:20:44.383234 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:20:44.383243 | orchestrator | 2026-03-25 05:20:44.383269 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-03-25 05:20:44.383279 | orchestrator | Wednesday 25 March 2026 05:20:18 +0000 (0:00:01.155) 0:12:35.628 ******* 2026-03-25 05:20:44.383288 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:20:44.383298 | orchestrator | 2026-03-25 05:20:44.383307 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-03-25 05:20:44.383317 | orchestrator | Wednesday 25 March 2026 05:20:19 +0000 (0:00:01.160) 0:12:36.788 ******* 2026-03-25 05:20:44.383334 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:20:44.383350 | orchestrator | 
2026-03-25 05:20:44.383366 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-03-25 05:20:44.383382 | orchestrator | Wednesday 25 March 2026 05:20:20 +0000 (0:00:01.144) 0:12:37.933 ******* 2026-03-25 05:20:44.383397 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:20:44.383413 | orchestrator | 2026-03-25 05:20:44.383429 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-03-25 05:20:44.383446 | orchestrator | Wednesday 25 March 2026 05:20:22 +0000 (0:00:01.132) 0:12:39.066 ******* 2026-03-25 05:20:44.383462 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:20:44.383478 | orchestrator | 2026-03-25 05:20:44.383496 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-03-25 05:20:44.383507 | orchestrator | Wednesday 25 March 2026 05:20:23 +0000 (0:00:01.280) 0:12:40.347 ******* 2026-03-25 05:20:44.383516 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:20:44.383536 | orchestrator | 2026-03-25 05:20:44.383545 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-03-25 05:20:44.383555 | orchestrator | Wednesday 25 March 2026 05:20:24 +0000 (0:00:01.140) 0:12:41.487 ******* 2026-03-25 05:20:44.383564 | orchestrator | ok: [testbed-node-1] 2026-03-25 05:20:44.383573 | orchestrator | 2026-03-25 05:20:44.383583 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-03-25 05:20:44.383592 | orchestrator | Wednesday 25 March 2026 05:20:25 +0000 (0:00:00.821) 0:12:42.309 ******* 2026-03-25 05:20:44.383602 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-1 2026-03-25 05:20:44.383613 | orchestrator | 2026-03-25 05:20:44.383623 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-03-25 05:20:44.383653 | 
orchestrator | Wednesday 25 March 2026 05:20:26 +0000 (0:00:01.137) 0:12:43.446 ******* 2026-03-25 05:20:44.383664 | orchestrator | ok: [testbed-node-1] => (item=/etc/ceph) 2026-03-25 05:20:44.383674 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/) 2026-03-25 05:20:44.383683 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/mon) 2026-03-25 05:20:44.383693 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/osd) 2026-03-25 05:20:44.383702 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/mds) 2026-03-25 05:20:44.383711 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/tmp) 2026-03-25 05:20:44.383721 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/crash) 2026-03-25 05:20:44.383730 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/radosgw) 2026-03-25 05:20:44.383740 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw) 2026-03-25 05:20:44.383749 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr) 2026-03-25 05:20:44.383758 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds) 2026-03-25 05:20:44.383767 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd) 2026-03-25 05:20:44.383777 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd) 2026-03-25 05:20:44.383786 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-03-25 05:20:44.383796 | orchestrator | ok: [testbed-node-1] => (item=/var/run/ceph) 2026-03-25 05:20:44.383805 | orchestrator | ok: [testbed-node-1] => (item=/var/log/ceph) 2026-03-25 05:20:44.383814 | orchestrator | 2026-03-25 05:20:44.383824 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-03-25 05:20:44.383833 | orchestrator | Wednesday 25 March 2026 05:20:33 +0000 (0:00:06.622) 0:12:50.069 ******* 2026-03-25 05:20:44.383842 | orchestrator | skipping: [testbed-node-1] 2026-03-25 
05:20:44.383852 | orchestrator | 2026-03-25 05:20:44.383861 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-03-25 05:20:44.383871 | orchestrator | Wednesday 25 March 2026 05:20:33 +0000 (0:00:00.810) 0:12:50.879 ******* 2026-03-25 05:20:44.383880 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:20:44.383889 | orchestrator | 2026-03-25 05:20:44.383899 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-03-25 05:20:44.383908 | orchestrator | Wednesday 25 March 2026 05:20:34 +0000 (0:00:00.786) 0:12:51.666 ******* 2026-03-25 05:20:44.383918 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:20:44.383956 | orchestrator | 2026-03-25 05:20:44.383966 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-03-25 05:20:44.383976 | orchestrator | Wednesday 25 March 2026 05:20:35 +0000 (0:00:00.801) 0:12:52.467 ******* 2026-03-25 05:20:44.383985 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:20:44.383995 | orchestrator | 2026-03-25 05:20:44.384004 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-03-25 05:20:44.384013 | orchestrator | Wednesday 25 March 2026 05:20:36 +0000 (0:00:00.808) 0:12:53.276 ******* 2026-03-25 05:20:44.384023 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:20:44.384032 | orchestrator | 2026-03-25 05:20:44.384048 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-03-25 05:20:44.384057 | orchestrator | Wednesday 25 March 2026 05:20:37 +0000 (0:00:00.778) 0:12:54.055 ******* 2026-03-25 05:20:44.384067 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:20:44.384076 | orchestrator | 2026-03-25 05:20:44.384086 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-03-25 05:20:44.384095 | 
orchestrator | Wednesday 25 March 2026 05:20:37 +0000 (0:00:00.801) 0:12:54.856 ******* 2026-03-25 05:20:44.384105 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:20:44.384114 | orchestrator | 2026-03-25 05:20:44.384129 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-03-25 05:20:44.384139 | orchestrator | Wednesday 25 March 2026 05:20:38 +0000 (0:00:00.804) 0:12:55.661 ******* 2026-03-25 05:20:44.384149 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:20:44.384158 | orchestrator | 2026-03-25 05:20:44.384168 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-03-25 05:20:44.384177 | orchestrator | Wednesday 25 March 2026 05:20:39 +0000 (0:00:00.902) 0:12:56.563 ******* 2026-03-25 05:20:44.384187 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:20:44.384196 | orchestrator | 2026-03-25 05:20:44.384206 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-03-25 05:20:44.384215 | orchestrator | Wednesday 25 March 2026 05:20:40 +0000 (0:00:00.764) 0:12:57.328 ******* 2026-03-25 05:20:44.384224 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:20:44.384234 | orchestrator | 2026-03-25 05:20:44.384243 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-03-25 05:20:44.384252 | orchestrator | Wednesday 25 March 2026 05:20:41 +0000 (0:00:00.760) 0:12:58.089 ******* 2026-03-25 05:20:44.384262 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:20:44.384271 | orchestrator | 2026-03-25 05:20:44.384281 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-03-25 05:20:44.384290 | orchestrator | Wednesday 25 March 2026 05:20:41 +0000 (0:00:00.755) 0:12:58.846 ******* 2026-03-25 05:20:44.384299 | orchestrator | 
skipping: [testbed-node-1] 2026-03-25 05:20:44.384309 | orchestrator | 2026-03-25 05:20:44.384318 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-03-25 05:20:44.384327 | orchestrator | Wednesday 25 March 2026 05:20:42 +0000 (0:00:00.796) 0:12:59.643 ******* 2026-03-25 05:20:44.384337 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:20:44.384346 | orchestrator | 2026-03-25 05:20:44.384355 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-03-25 05:20:44.384365 | orchestrator | Wednesday 25 March 2026 05:20:43 +0000 (0:00:00.910) 0:13:00.554 ******* 2026-03-25 05:20:44.384374 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:20:44.384383 | orchestrator | 2026-03-25 05:20:44.384393 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-03-25 05:20:44.384408 | orchestrator | Wednesday 25 March 2026 05:20:44 +0000 (0:00:00.829) 0:13:01.383 ******* 2026-03-25 05:21:32.563482 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:21:32.563578 | orchestrator | 2026-03-25 05:21:32.563588 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-03-25 05:21:32.563598 | orchestrator | Wednesday 25 March 2026 05:20:45 +0000 (0:00:00.864) 0:13:02.248 ******* 2026-03-25 05:21:32.563604 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:21:32.563611 | orchestrator | 2026-03-25 05:21:32.563619 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-03-25 05:21:32.563626 | orchestrator | Wednesday 25 March 2026 05:20:46 +0000 (0:00:00.770) 0:13:03.019 ******* 2026-03-25 05:21:32.563633 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:21:32.563640 | orchestrator | 2026-03-25 05:21:32.563647 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node 
"{{ ceph_dashboard_call_item }}"] *** 2026-03-25 05:21:32.563655 | orchestrator | Wednesday 25 March 2026 05:20:46 +0000 (0:00:00.768) 0:13:03.787 ******* 2026-03-25 05:21:32.563689 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:21:32.563696 | orchestrator | 2026-03-25 05:21:32.563702 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-03-25 05:21:32.563709 | orchestrator | Wednesday 25 March 2026 05:20:47 +0000 (0:00:00.766) 0:13:04.554 ******* 2026-03-25 05:21:32.563715 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:21:32.563722 | orchestrator | 2026-03-25 05:21:32.563728 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-03-25 05:21:32.563734 | orchestrator | Wednesday 25 March 2026 05:20:48 +0000 (0:00:00.777) 0:13:05.331 ******* 2026-03-25 05:21:32.563741 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:21:32.563747 | orchestrator | 2026-03-25 05:21:32.563753 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-25 05:21:32.563760 | orchestrator | Wednesday 25 March 2026 05:20:49 +0000 (0:00:00.811) 0:13:06.143 ******* 2026-03-25 05:21:32.563766 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:21:32.563772 | orchestrator | 2026-03-25 05:21:32.563778 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-25 05:21:32.563784 | orchestrator | Wednesday 25 March 2026 05:20:49 +0000 (0:00:00.770) 0:13:06.914 ******* 2026-03-25 05:21:32.563791 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2026-03-25 05:21:32.563797 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2026-03-25 05:21:32.563803 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2026-03-25 05:21:32.563810 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:21:32.563816 | orchestrator | 2026-03-25 
05:21:32.563822 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-25 05:21:32.563828 | orchestrator | Wednesday 25 March 2026 05:20:51 +0000 (0:00:01.103) 0:13:08.017 ******* 2026-03-25 05:21:32.563835 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2026-03-25 05:21:32.563841 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2026-03-25 05:21:32.563847 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2026-03-25 05:21:32.563854 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:21:32.563860 | orchestrator | 2026-03-25 05:21:32.563866 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-25 05:21:32.563872 | orchestrator | Wednesday 25 March 2026 05:20:52 +0000 (0:00:01.078) 0:13:09.096 ******* 2026-03-25 05:21:32.563879 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2026-03-25 05:21:32.563885 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2026-03-25 05:21:32.563891 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2026-03-25 05:21:32.563897 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:21:32.563902 | orchestrator | 2026-03-25 05:21:32.563921 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-25 05:21:32.563928 | orchestrator | Wednesday 25 March 2026 05:20:53 +0000 (0:00:01.100) 0:13:10.197 ******* 2026-03-25 05:21:32.563935 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:21:32.563941 | orchestrator | 2026-03-25 05:21:32.563992 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-25 05:21:32.563999 | orchestrator | Wednesday 25 March 2026 05:20:53 +0000 (0:00:00.807) 0:13:11.004 ******* 2026-03-25 05:21:32.564007 | orchestrator | skipping: [testbed-node-1] => (item=0)  2026-03-25 05:21:32.564013 | 
orchestrator | skipping: [testbed-node-1] 2026-03-25 05:21:32.564019 | orchestrator | 2026-03-25 05:21:32.564025 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-03-25 05:21:32.564030 | orchestrator | Wednesday 25 March 2026 05:20:55 +0000 (0:00:01.059) 0:13:12.064 ******* 2026-03-25 05:21:32.564037 | orchestrator | ok: [testbed-node-1] 2026-03-25 05:21:32.564044 | orchestrator | 2026-03-25 05:21:32.564050 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] ********************************** 2026-03-25 05:21:32.564057 | orchestrator | Wednesday 25 March 2026 05:20:56 +0000 (0:00:01.573) 0:13:13.637 ******* 2026-03-25 05:21:32.564069 | orchestrator | ok: [testbed-node-1] 2026-03-25 05:21:32.564076 | orchestrator | 2026-03-25 05:21:32.564083 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] ********************************** 2026-03-25 05:21:32.564089 | orchestrator | Wednesday 25 March 2026 05:20:57 +0000 (0:00:00.788) 0:13:14.426 ******* 2026-03-25 05:21:32.564097 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-1 2026-03-25 05:21:32.564104 | orchestrator | 2026-03-25 05:21:32.564110 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] ************** 2026-03-25 05:21:32.564116 | orchestrator | Wednesday 25 March 2026 05:20:58 +0000 (0:00:01.165) 0:13:15.592 ******* 2026-03-25 05:21:32.564122 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] 2026-03-25 05:21:32.564128 | orchestrator | 2026-03-25 05:21:32.564134 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] ***************************** 2026-03-25 05:21:32.564140 | orchestrator | Wednesday 25 March 2026 05:21:01 +0000 (0:00:03.249) 0:13:18.842 ******* 2026-03-25 05:21:32.564146 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:21:32.564151 | orchestrator | 2026-03-25 05:21:32.564158 | orchestrator | TASK [ceph-mon : 
Set_fact _initial_mon_key_success] **************************** 2026-03-25 05:21:32.564181 | orchestrator | Wednesday 25 March 2026 05:21:03 +0000 (0:00:01.190) 0:13:20.033 ******* 2026-03-25 05:21:32.564189 | orchestrator | ok: [testbed-node-1] 2026-03-25 05:21:32.564195 | orchestrator | 2026-03-25 05:21:32.564201 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] ******************* 2026-03-25 05:21:32.564207 | orchestrator | Wednesday 25 March 2026 05:21:04 +0000 (0:00:01.178) 0:13:21.212 ******* 2026-03-25 05:21:32.564213 | orchestrator | ok: [testbed-node-1] 2026-03-25 05:21:32.564219 | orchestrator | 2026-03-25 05:21:32.564225 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] ******************************* 2026-03-25 05:21:32.564231 | orchestrator | Wednesday 25 March 2026 05:21:05 +0000 (0:00:01.164) 0:13:22.376 ******* 2026-03-25 05:21:32.564238 | orchestrator | changed: [testbed-node-1] 2026-03-25 05:21:32.564244 | orchestrator | 2026-03-25 05:21:32.564251 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] *********** 2026-03-25 05:21:32.564257 | orchestrator | Wednesday 25 March 2026 05:21:07 +0000 (0:00:02.104) 0:13:24.481 ******* 2026-03-25 05:21:32.564263 | orchestrator | ok: [testbed-node-1] 2026-03-25 05:21:32.564270 | orchestrator | 2026-03-25 05:21:32.564274 | orchestrator | TASK [ceph-mon : Create monitor directory] ************************************* 2026-03-25 05:21:32.564278 | orchestrator | Wednesday 25 March 2026 05:21:09 +0000 (0:00:01.708) 0:13:26.190 ******* 2026-03-25 05:21:32.564283 | orchestrator | ok: [testbed-node-1] 2026-03-25 05:21:32.564287 | orchestrator | 2026-03-25 05:21:32.564291 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] *************** 2026-03-25 05:21:32.564295 | orchestrator | Wednesday 25 March 2026 05:21:10 +0000 (0:00:01.629) 0:13:27.820 ******* 2026-03-25 05:21:32.564299 | orchestrator | ok: 
[testbed-node-1] 2026-03-25 05:21:32.564303 | orchestrator | 2026-03-25 05:21:32.564308 | orchestrator | TASK [ceph-mon : Create admin keyring] ***************************************** 2026-03-25 05:21:32.564312 | orchestrator | Wednesday 25 March 2026 05:21:12 +0000 (0:00:01.556) 0:13:29.377 ******* 2026-03-25 05:21:32.564316 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] 2026-03-25 05:21:32.564320 | orchestrator | 2026-03-25 05:21:32.564325 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ****************************************** 2026-03-25 05:21:32.564329 | orchestrator | Wednesday 25 March 2026 05:21:13 +0000 (0:00:01.578) 0:13:30.955 ******* 2026-03-25 05:21:32.564333 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] 2026-03-25 05:21:32.564337 | orchestrator | 2026-03-25 05:21:32.564341 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ****************************** 2026-03-25 05:21:32.564345 | orchestrator | Wednesday 25 March 2026 05:21:15 +0000 (0:00:01.611) 0:13:32.567 ******* 2026-03-25 05:21:32.564349 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-25 05:21:32.564354 | orchestrator | ok: [testbed-node-1] => (item=None) 2026-03-25 05:21:32.564364 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-25 05:21:32.564368 | orchestrator | ok: [testbed-node-1 -> {{ item }}] 2026-03-25 05:21:32.564372 | orchestrator | 2026-03-25 05:21:32.564377 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************ 2026-03-25 05:21:32.564381 | orchestrator | Wednesday 25 March 2026 05:21:19 +0000 (0:00:03.925) 0:13:36.492 ******* 2026-03-25 05:21:32.564385 | orchestrator | changed: [testbed-node-1] 2026-03-25 05:21:32.564390 | orchestrator | 2026-03-25 05:21:32.564394 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] ************************** 2026-03-25 
05:21:32.564398 | orchestrator | Wednesday 25 March 2026 05:21:21 +0000 (0:00:02.005) 0:13:38.497 ******* 2026-03-25 05:21:32.564401 | orchestrator | ok: [testbed-node-1] 2026-03-25 05:21:32.564405 | orchestrator | 2026-03-25 05:21:32.564409 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2026-03-25 05:21:32.564412 | orchestrator | Wednesday 25 March 2026 05:21:22 +0000 (0:00:01.201) 0:13:39.699 ******* 2026-03-25 05:21:32.564421 | orchestrator | ok: [testbed-node-1] 2026-03-25 05:21:32.564425 | orchestrator | 2026-03-25 05:21:32.564428 | orchestrator | TASK [ceph-mon : Generate initial monmap] ************************************** 2026-03-25 05:21:32.564432 | orchestrator | Wednesday 25 March 2026 05:21:23 +0000 (0:00:01.164) 0:13:40.863 ******* 2026-03-25 05:21:32.564436 | orchestrator | ok: [testbed-node-1] 2026-03-25 05:21:32.564439 | orchestrator | 2026-03-25 05:21:32.564443 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2026-03-25 05:21:32.564447 | orchestrator | Wednesday 25 March 2026 05:21:25 +0000 (0:00:01.783) 0:13:42.646 ******* 2026-03-25 05:21:32.564451 | orchestrator | ok: [testbed-node-1] 2026-03-25 05:21:32.564454 | orchestrator | 2026-03-25 05:21:32.564458 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2026-03-25 05:21:32.564462 | orchestrator | Wednesday 25 March 2026 05:21:27 +0000 (0:00:01.485) 0:13:44.132 ******* 2026-03-25 05:21:32.564465 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:21:32.564469 | orchestrator | 2026-03-25 05:21:32.564473 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************ 2026-03-25 05:21:32.564476 | orchestrator | Wednesday 25 March 2026 05:21:27 +0000 (0:00:00.769) 0:13:44.902 ******* 2026-03-25 05:21:32.564480 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-1 2026-03-25 
05:21:32.564484 | orchestrator | 2026-03-25 05:21:32.564488 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2026-03-25 05:21:32.564491 | orchestrator | Wednesday 25 March 2026 05:21:29 +0000 (0:00:01.138) 0:13:46.040 ******* 2026-03-25 05:21:32.564495 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:21:32.564499 | orchestrator | 2026-03-25 05:21:32.564503 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2026-03-25 05:21:32.564506 | orchestrator | Wednesday 25 March 2026 05:21:30 +0000 (0:00:01.144) 0:13:47.185 ******* 2026-03-25 05:21:32.564510 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:21:32.564514 | orchestrator | 2026-03-25 05:21:32.564517 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2026-03-25 05:21:32.564521 | orchestrator | Wednesday 25 March 2026 05:21:31 +0000 (0:00:01.228) 0:13:48.413 ******* 2026-03-25 05:21:32.564525 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-1 2026-03-25 05:21:32.564528 | orchestrator | 2026-03-25 05:21:32.564537 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] ***************** 2026-03-25 05:22:41.735359 | orchestrator | Wednesday 25 March 2026 05:21:32 +0000 (0:00:01.152) 0:13:49.566 ******* 2026-03-25 05:22:41.735477 | orchestrator | ok: [testbed-node-1] 2026-03-25 05:22:41.735495 | orchestrator | 2026-03-25 05:22:41.735508 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 2026-03-25 05:22:41.735520 | orchestrator | Wednesday 25 March 2026 05:21:34 +0000 (0:00:02.372) 0:13:51.939 ******* 2026-03-25 05:22:41.735531 | orchestrator | ok: [testbed-node-1] 2026-03-25 05:22:41.735563 | orchestrator | 2026-03-25 05:22:41.735575 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] *************************************** 2026-03-25 
05:22:41.735586 | orchestrator | Wednesday 25 March 2026 05:21:36 +0000 (0:00:01.988) 0:13:53.928 ******* 2026-03-25 05:22:41.735596 | orchestrator | ok: [testbed-node-1] 2026-03-25 05:22:41.735607 | orchestrator | 2026-03-25 05:22:41.735617 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2026-03-25 05:22:41.735628 | orchestrator | Wednesday 25 March 2026 05:21:39 +0000 (0:00:02.515) 0:13:56.443 ******* 2026-03-25 05:22:41.735639 | orchestrator | changed: [testbed-node-1] 2026-03-25 05:22:41.735651 | orchestrator | 2026-03-25 05:22:41.735662 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] ********************************** 2026-03-25 05:22:41.735672 | orchestrator | Wednesday 25 March 2026 05:21:42 +0000 (0:00:02.861) 0:13:59.304 ******* 2026-03-25 05:22:41.735683 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-1 2026-03-25 05:22:41.735695 | orchestrator | 2026-03-25 05:22:41.735706 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] ************* 2026-03-25 05:22:41.735717 | orchestrator | Wednesday 25 March 2026 05:21:43 +0000 (0:00:01.189) 0:14:00.493 ******* 2026-03-25 05:22:41.735728 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Waiting for the monitor(s) to form the quorum... (10 retries left). 
2026-03-25 05:22:41.735738 | orchestrator | ok: [testbed-node-1] 2026-03-25 05:22:41.735749 | orchestrator | 2026-03-25 05:22:41.735760 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2026-03-25 05:22:41.735770 | orchestrator | Wednesday 25 March 2026 05:22:06 +0000 (0:00:22.980) 0:14:23.474 ******* 2026-03-25 05:22:41.735781 | orchestrator | ok: [testbed-node-1] 2026-03-25 05:22:41.735791 | orchestrator | 2026-03-25 05:22:41.735802 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2026-03-25 05:22:41.735813 | orchestrator | Wednesday 25 March 2026 05:22:09 +0000 (0:00:02.715) 0:14:26.190 ******* 2026-03-25 05:22:41.735823 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:22:41.735834 | orchestrator | 2026-03-25 05:22:41.735844 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2026-03-25 05:22:41.735855 | orchestrator | Wednesday 25 March 2026 05:22:09 +0000 (0:00:00.785) 0:14:26.975 ******* 2026-03-25 05:22:41.735868 | orchestrator | ok: [testbed-node-1] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__fe6f3167ab81d5784c37329f8a3bb9b2d91cf741'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2026-03-25 05:22:41.735897 | orchestrator | ok: [testbed-node-1] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__fe6f3167ab81d5784c37329f8a3bb9b2d91cf741'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}]) 2026-03-25 05:22:41.735911 | orchestrator | ok: [testbed-node-1] => (item=[{'key': 'global', 'value': 
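The "Waiting for the monitor(s) to form the quorum..." task above retries (10 attempts) until every expected mon appears in the quorum. A minimal sketch of that success condition, assuming the JSON shape returned by `ceph quorum_status --format json` (the role itself shells out to the `ceph` CLI inside the mon container; `quorum_reached` is a hypothetical helper):

```python
import json

def quorum_reached(quorum_status_json: str, expected_mons: list[str]) -> bool:
    """Return True once every expected monitor is listed in quorum_names.

    Mirrors the retry condition of the quorum-wait task; hypothetical
    helper, not the role's actual implementation.
    """
    status = json.loads(quorum_status_json)
    return set(expected_mons) <= set(status.get("quorum_names", []))

# Payload shaped like `ceph quorum_status --format json` output
sample = json.dumps(
    {"quorum_names": ["testbed-node-0", "testbed-node-1", "testbed-node-2"]}
)
print(quorum_reached(sample, ["testbed-node-0", "testbed-node-1", "testbed-node-2"]))
```

The first retry in the log failed because testbed-node-1's freshly started mon had not yet joined; the next poll succeeded after ~23 seconds.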
{'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__fe6f3167ab81d5784c37329f8a3bb9b2d91cf741'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2026-03-25 05:22:41.735924 | orchestrator | ok: [testbed-node-1] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__fe6f3167ab81d5784c37329f8a3bb9b2d91cf741'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2026-03-25 05:22:41.735938 | orchestrator | ok: [testbed-node-1] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__fe6f3167ab81d5784c37329f8a3bb9b2d91cf741'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2026-03-25 05:22:41.735980 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__fe6f3167ab81d5784c37329f8a3bb9b2d91cf741'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__fe6f3167ab81d5784c37329f8a3bb9b2d91cf741'}])  2026-03-25 05:22:41.736027 | orchestrator | 2026-03-25 05:22:41.736051 | orchestrator | TASK [Start ceph mgr] ********************************************************** 2026-03-25 05:22:41.736069 | orchestrator | Wednesday 25 March 2026 05:22:19 +0000 (0:00:09.920) 0:14:36.896 ******* 2026-03-25 05:22:41.736088 | orchestrator | changed: [testbed-node-1] 2026-03-25 05:22:41.736108 | orchestrator | 
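The "Set cluster configs" task loops over (section, option) pairs derived from a `ceph_conf_overrides`-style mapping, and the final item is skipped because its value is an Ansible `__omit_place_holder__` sentinel. A sketch of that flattening, under the assumption that the loop is a subelements-style product of sections and their options (this is an illustration, not the role's actual Jinja expression):

```python
def flatten_overrides(overrides: dict) -> list[tuple[str, str, object]]:
    """Flatten {section: {option: value}} into (section, option, value) tuples,
    dropping omitted options, similar to the items iterated above."""
    out = []
    for section, options in overrides.items():
        for option, value in options.items():
            if isinstance(value, str) and value.startswith("__omit_place_holder__"):
                continue  # corresponds to the skipped osd_crush_chooseleaf_type item
            out.append((section, option, value))
    return out

overrides = {
    "global": {
        "public_network": "192.168.16.0/20",
        "cluster_network": "192.168.16.0/20",
        "osd_pool_default_crush_rule": -1,
        "ms_bind_ipv6": "False",
        "ms_bind_ipv4": "True",
        "osd_crush_chooseleaf_type": "__omit_place_holder__fe6f",
    }
}
for entry in flatten_overrides(overrides):
    print(entry)
```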
2026-03-25 05:22:41.736131 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-03-25 05:22:41.736151 | orchestrator | Wednesday 25 March 2026 05:22:22 +0000 (0:00:02.262) 0:14:39.159 ******* 2026-03-25 05:22:41.736170 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-25 05:22:41.736185 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-1) 2026-03-25 05:22:41.736197 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-2) 2026-03-25 05:22:41.736209 | orchestrator | 2026-03-25 05:22:41.736221 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-03-25 05:22:41.736234 | orchestrator | Wednesday 25 March 2026 05:22:23 +0000 (0:00:01.543) 0:14:40.703 ******* 2026-03-25 05:22:41.736246 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-03-25 05:22:41.736258 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-03-25 05:22:41.736268 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-03-25 05:22:41.736279 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:22:41.736290 | orchestrator | 2026-03-25 05:22:41.736300 | orchestrator | TASK [Non container | waiting for the monitor to join the quorum...] *********** 2026-03-25 05:22:41.736311 | orchestrator | Wednesday 25 March 2026 05:22:24 +0000 (0:00:01.064) 0:14:41.767 ******* 2026-03-25 05:22:41.736321 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:22:41.736332 | orchestrator | 2026-03-25 05:22:41.736343 | orchestrator | TASK [Container | waiting for the containerized monitor to join the quorum...] 
*** 2026-03-25 05:22:41.736353 | orchestrator | Wednesday 25 March 2026 05:22:25 +0000 (0:00:00.794) 0:14:42.562 ******* 2026-03-25 05:22:41.736364 | orchestrator | ok: [testbed-node-1] 2026-03-25 05:22:41.736375 | orchestrator | 2026-03-25 05:22:41.736385 | orchestrator | PLAY [Upgrade ceph mon cluster] ************************************************ 2026-03-25 05:22:41.736396 | orchestrator | 2026-03-25 05:22:41.736406 | orchestrator | TASK [Remove ceph aliases] ***************************************************** 2026-03-25 05:22:41.736417 | orchestrator | Wednesday 25 March 2026 05:22:27 +0000 (0:00:02.226) 0:14:44.788 ******* 2026-03-25 05:22:41.736428 | orchestrator | ok: [testbed-node-2] 2026-03-25 05:22:41.736438 | orchestrator | 2026-03-25 05:22:41.736449 | orchestrator | TASK [Set mon_host_count] ****************************************************** 2026-03-25 05:22:41.736460 | orchestrator | Wednesday 25 March 2026 05:22:28 +0000 (0:00:01.210) 0:14:45.999 ******* 2026-03-25 05:22:41.736470 | orchestrator | ok: [testbed-node-2] 2026-03-25 05:22:41.736481 | orchestrator | 2026-03-25 05:22:41.736491 | orchestrator | TASK [Fail when less than three monitors] ************************************** 2026-03-25 05:22:41.736502 | orchestrator | Wednesday 25 March 2026 05:22:29 +0000 (0:00:00.793) 0:14:46.792 ******* 2026-03-25 05:22:41.736512 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:22:41.736523 | orchestrator | 2026-03-25 05:22:41.736533 | orchestrator | TASK [Select a running monitor] ************************************************ 2026-03-25 05:22:41.736553 | orchestrator | Wednesday 25 March 2026 05:22:30 +0000 (0:00:00.787) 0:14:47.580 ******* 2026-03-25 05:22:41.736564 | orchestrator | ok: [testbed-node-2] 2026-03-25 05:22:41.736574 | orchestrator | 2026-03-25 05:22:41.736585 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-03-25 05:22:41.736603 | orchestrator | Wednesday 25 March 
2026 05:22:31 +0000 (0:00:00.784) 0:14:48.365 ******* 2026-03-25 05:22:41.736614 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-2 2026-03-25 05:22:41.736624 | orchestrator | 2026-03-25 05:22:41.736635 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-03-25 05:22:41.736645 | orchestrator | Wednesday 25 March 2026 05:22:32 +0000 (0:00:01.263) 0:14:49.629 ******* 2026-03-25 05:22:41.736656 | orchestrator | ok: [testbed-node-2] 2026-03-25 05:22:41.736667 | orchestrator | 2026-03-25 05:22:41.736677 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-03-25 05:22:41.736688 | orchestrator | Wednesday 25 March 2026 05:22:34 +0000 (0:00:01.523) 0:14:51.152 ******* 2026-03-25 05:22:41.736698 | orchestrator | ok: [testbed-node-2] 2026-03-25 05:22:41.736709 | orchestrator | 2026-03-25 05:22:41.736720 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-03-25 05:22:41.736730 | orchestrator | Wednesday 25 March 2026 05:22:35 +0000 (0:00:01.217) 0:14:52.370 ******* 2026-03-25 05:22:41.736741 | orchestrator | ok: [testbed-node-2] 2026-03-25 05:22:41.736752 | orchestrator | 2026-03-25 05:22:41.736762 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-03-25 05:22:41.736773 | orchestrator | Wednesday 25 March 2026 05:22:36 +0000 (0:00:01.554) 0:14:53.925 ******* 2026-03-25 05:22:41.736784 | orchestrator | ok: [testbed-node-2] 2026-03-25 05:22:41.736794 | orchestrator | 2026-03-25 05:22:41.736805 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-03-25 05:22:41.736816 | orchestrator | Wednesday 25 March 2026 05:22:38 +0000 (0:00:01.203) 0:14:55.128 ******* 2026-03-25 05:22:41.736826 | orchestrator | ok: [testbed-node-2] 2026-03-25 05:22:41.736837 | orchestrator | 2026-03-25 05:22:41.736848 | 
orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-03-25 05:22:41.736858 | orchestrator | Wednesday 25 March 2026 05:22:39 +0000 (0:00:01.200) 0:14:56.329 ******* 2026-03-25 05:22:41.736869 | orchestrator | ok: [testbed-node-2] 2026-03-25 05:22:41.736880 | orchestrator | 2026-03-25 05:22:41.736890 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-03-25 05:22:41.736901 | orchestrator | Wednesday 25 March 2026 05:22:40 +0000 (0:00:01.199) 0:14:57.528 ******* 2026-03-25 05:22:41.736912 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:22:41.736922 | orchestrator | 2026-03-25 05:22:41.736933 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-03-25 05:22:41.736953 | orchestrator | Wednesday 25 March 2026 05:22:41 +0000 (0:00:01.209) 0:14:58.738 ******* 2026-03-25 05:23:07.267436 | orchestrator | ok: [testbed-node-2] 2026-03-25 05:23:07.267548 | orchestrator | 2026-03-25 05:23:07.267563 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-03-25 05:23:07.267575 | orchestrator | Wednesday 25 March 2026 05:22:42 +0000 (0:00:01.151) 0:14:59.889 ******* 2026-03-25 05:23:07.267586 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-25 05:23:07.267596 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-25 05:23:07.267606 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-03-25 05:23:07.267616 | orchestrator | 2026-03-25 05:23:07.267626 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-03-25 05:23:07.267636 | orchestrator | Wednesday 25 March 2026 05:22:44 +0000 (0:00:02.010) 0:15:01.900 ******* 2026-03-25 05:23:07.267645 | orchestrator | ok: [testbed-node-2] 2026-03-25 05:23:07.267655 | 
orchestrator | 2026-03-25 05:23:07.267664 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-03-25 05:23:07.267674 | orchestrator | Wednesday 25 March 2026 05:22:46 +0000 (0:00:01.281) 0:15:03.182 ******* 2026-03-25 05:23:07.267705 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-25 05:23:07.267715 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-25 05:23:07.267725 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-03-25 05:23:07.267735 | orchestrator | 2026-03-25 05:23:07.267744 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-03-25 05:23:07.267754 | orchestrator | Wednesday 25 March 2026 05:22:49 +0000 (0:00:03.362) 0:15:06.545 ******* 2026-03-25 05:23:07.267764 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-03-25 05:23:07.267774 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-03-25 05:23:07.267783 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-03-25 05:23:07.267793 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:23:07.267803 | orchestrator | 2026-03-25 05:23:07.267812 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-03-25 05:23:07.267822 | orchestrator | Wednesday 25 March 2026 05:22:51 +0000 (0:00:01.761) 0:15:08.307 ******* 2026-03-25 05:23:07.267833 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-03-25 05:23:07.267845 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not 
containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-03-25 05:23:07.267855 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-03-25 05:23:07.267865 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:23:07.267875 | orchestrator | 2026-03-25 05:23:07.267885 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-03-25 05:23:07.267908 | orchestrator | Wednesday 25 March 2026 05:22:53 +0000 (0:00:01.999) 0:15:10.306 ******* 2026-03-25 05:23:07.267920 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-25 05:23:07.267932 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-25 05:23:07.267943 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | 
bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-25 05:23:07.267953 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:23:07.267963 | orchestrator | 2026-03-25 05:23:07.267972 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-03-25 05:23:07.267984 | orchestrator | Wednesday 25 March 2026 05:22:54 +0000 (0:00:01.176) 0:15:11.483 ******* 2026-03-25 05:23:07.268014 | orchestrator | ok: [testbed-node-2] => (item={'changed': False, 'stdout': 'f2f4f0f2e000', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-25 05:22:46.729049', 'end': '2026-03-25 05:22:46.778604', 'delta': '0:00:00.049555', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['f2f4f0f2e000'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-03-25 05:23:07.268063 | orchestrator | ok: [testbed-node-2] => (item={'changed': False, 'stdout': '04618a84c691', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-25 05:22:47.639280', 'end': '2026-03-25 05:22:47.680750', 'delta': '0:00:00.041470', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['04618a84c691'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 
'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-03-25 05:23:07.268075 | orchestrator | ok: [testbed-node-2] => (item={'changed': False, 'stdout': '90e526f29e10', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-25 05:22:48.273313', 'end': '2026-03-25 05:22:48.332938', 'delta': '0:00:00.059625', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['90e526f29e10'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-03-25 05:23:07.268087 | orchestrator | 2026-03-25 05:23:07.268099 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-03-25 05:23:07.268110 | orchestrator | Wednesday 25 March 2026 05:22:55 +0000 (0:00:01.208) 0:15:12.692 ******* 2026-03-25 05:23:07.268121 | orchestrator | ok: [testbed-node-2] 2026-03-25 05:23:07.268132 | orchestrator | 2026-03-25 05:23:07.268143 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-03-25 05:23:07.268159 | orchestrator | Wednesday 25 March 2026 05:22:57 +0000 (0:00:01.416) 0:15:14.108 ******* 2026-03-25 05:23:07.268170 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:23:07.268181 | orchestrator | 2026-03-25 05:23:07.268193 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-03-25 05:23:07.268204 | orchestrator | Wednesday 25 March 2026 05:22:58 +0000 (0:00:01.234) 0:15:15.343 ******* 2026-03-25 05:23:07.268215 | orchestrator | ok: [testbed-node-2] 2026-03-25 05:23:07.268227 | orchestrator | 2026-03-25 05:23:07.268237 | orchestrator | TASK 
[ceph-facts : Get current fsid] ******************************************* 2026-03-25 05:23:07.268249 | orchestrator | Wednesday 25 March 2026 05:22:59 +0000 (0:00:01.175) 0:15:16.519 ******* 2026-03-25 05:23:07.268259 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-03-25 05:23:07.268270 | orchestrator | 2026-03-25 05:23:07.268281 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-25 05:23:07.268292 | orchestrator | Wednesday 25 March 2026 05:23:01 +0000 (0:00:01.943) 0:15:18.462 ******* 2026-03-25 05:23:07.268303 | orchestrator | ok: [testbed-node-2] 2026-03-25 05:23:07.268315 | orchestrator | 2026-03-25 05:23:07.268326 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-03-25 05:23:07.268337 | orchestrator | Wednesday 25 March 2026 05:23:02 +0000 (0:00:01.133) 0:15:19.596 ******* 2026-03-25 05:23:07.268357 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:23:07.268367 | orchestrator | 2026-03-25 05:23:07.268377 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-03-25 05:23:07.268387 | orchestrator | Wednesday 25 March 2026 05:23:03 +0000 (0:00:01.114) 0:15:20.710 ******* 2026-03-25 05:23:07.268396 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:23:07.268406 | orchestrator | 2026-03-25 05:23:07.268415 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-25 05:23:07.268425 | orchestrator | Wednesday 25 March 2026 05:23:04 +0000 (0:00:01.245) 0:15:21.957 ******* 2026-03-25 05:23:07.268434 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:23:07.268444 | orchestrator | 2026-03-25 05:23:07.268453 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-03-25 05:23:07.268463 | orchestrator | Wednesday 25 March 2026 05:23:06 +0000 (0:00:01.145) 0:15:23.102 ******* 
2026-03-25 05:23:07.268472 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:23:07.268482 | orchestrator | 2026-03-25 05:23:07.268492 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-03-25 05:23:07.268507 | orchestrator | Wednesday 25 March 2026 05:23:07 +0000 (0:00:01.166) 0:15:24.268 ******* 2026-03-25 05:23:15.707465 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:23:15.707599 | orchestrator | 2026-03-25 05:23:15.707625 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-03-25 05:23:15.707644 | orchestrator | Wednesday 25 March 2026 05:23:08 +0000 (0:00:01.141) 0:15:25.410 ******* 2026-03-25 05:23:15.707659 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:23:15.707669 | orchestrator | 2026-03-25 05:23:15.707680 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-03-25 05:23:15.707691 | orchestrator | Wednesday 25 March 2026 05:23:09 +0000 (0:00:01.148) 0:15:26.558 ******* 2026-03-25 05:23:15.707708 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:23:15.707724 | orchestrator | 2026-03-25 05:23:15.707740 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-03-25 05:23:15.707757 | orchestrator | Wednesday 25 March 2026 05:23:10 +0000 (0:00:01.210) 0:15:27.768 ******* 2026-03-25 05:23:15.707773 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:23:15.707790 | orchestrator | 2026-03-25 05:23:15.707806 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-03-25 05:23:15.707824 | orchestrator | Wednesday 25 March 2026 05:23:11 +0000 (0:00:01.192) 0:15:28.961 ******* 2026-03-25 05:23:15.707842 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:23:15.707858 | orchestrator | 2026-03-25 05:23:15.707875 | orchestrator | TASK [ceph-facts : Collect existed devices] 
************************************ 2026-03-25 05:23:15.707892 | orchestrator | Wednesday 25 March 2026 05:23:13 +0000 (0:00:01.127) 0:15:30.088 ******* 2026-03-25 05:23:15.707912 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-25 05:23:15.707934 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-25 05:23:15.707951 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-25 05:23:15.708017 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-25-01-43-02-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 
'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-03-25 05:23:15.708069 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-25 05:23:15.708088 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-25 05:23:15.708130 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-25 05:23:15.708154 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_46c5fc1c-53d3-41e9-9a97-2ae0d3d9eeb2', 'scsi-SQEMU_QEMU_HARDDISK_46c5fc1c-53d3-41e9-9a97-2ae0d3d9eeb2'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '46c5fc1c', 'removable': '0', 'support_discard': '4096', 'partitions': 
{'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_46c5fc1c-53d3-41e9-9a97-2ae0d3d9eeb2-part16', 'scsi-SQEMU_QEMU_HARDDISK_46c5fc1c-53d3-41e9-9a97-2ae0d3d9eeb2-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_46c5fc1c-53d3-41e9-9a97-2ae0d3d9eeb2-part14', 'scsi-SQEMU_QEMU_HARDDISK_46c5fc1c-53d3-41e9-9a97-2ae0d3d9eeb2-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_46c5fc1c-53d3-41e9-9a97-2ae0d3d9eeb2-part15', 'scsi-SQEMU_QEMU_HARDDISK_46c5fc1c-53d3-41e9-9a97-2ae0d3d9eeb2-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_46c5fc1c-53d3-41e9-9a97-2ae0d3d9eeb2-part1', 'scsi-SQEMU_QEMU_HARDDISK_46c5fc1c-53d3-41e9-9a97-2ae0d3d9eeb2-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-03-25 05:23:15.708197 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-25 05:23:15.708217 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-25 05:23:15.708233 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:23:15.708251 | orchestrator | 2026-03-25 05:23:15.708267 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-03-25 05:23:15.708284 | orchestrator | Wednesday 25 March 2026 05:23:14 +0000 (0:00:01.308) 0:15:31.396 ******* 2026-03-25 05:23:15.708301 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 05:23:15.708334 | 
orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 05:23:23.365263 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 05:23:23.365386 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-25-01-43-02-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 
82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 05:23:23.365445 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 05:23:23.365487 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 05:23:23.365507 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 
'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 05:23:23.365545 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_46c5fc1c-53d3-41e9-9a97-2ae0d3d9eeb2', 'scsi-SQEMU_QEMU_HARDDISK_46c5fc1c-53d3-41e9-9a97-2ae0d3d9eeb2'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '46c5fc1c', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_46c5fc1c-53d3-41e9-9a97-2ae0d3d9eeb2-part16', 'scsi-SQEMU_QEMU_HARDDISK_46c5fc1c-53d3-41e9-9a97-2ae0d3d9eeb2-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_46c5fc1c-53d3-41e9-9a97-2ae0d3d9eeb2-part14', 'scsi-SQEMU_QEMU_HARDDISK_46c5fc1c-53d3-41e9-9a97-2ae0d3d9eeb2-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_46c5fc1c-53d3-41e9-9a97-2ae0d3d9eeb2-part15', 'scsi-SQEMU_QEMU_HARDDISK_46c5fc1c-53d3-41e9-9a97-2ae0d3d9eeb2-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_46c5fc1c-53d3-41e9-9a97-2ae0d3d9eeb2-part1', 'scsi-SQEMU_QEMU_HARDDISK_46c5fc1c-53d3-41e9-9a97-2ae0d3d9eeb2-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 
'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 05:23:23.365574 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 05:23:23.365586 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 05:23:23.365598 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:23:23.365612 | orchestrator | 2026-03-25 05:23:23.365624 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-03-25 05:23:23.365636 | 
orchestrator | Wednesday 25 March 2026 05:23:15 +0000 (0:00:01.318) 0:15:32.714 ******* 2026-03-25 05:23:23.365647 | orchestrator | ok: [testbed-node-2] 2026-03-25 05:23:23.365658 | orchestrator | 2026-03-25 05:23:23.365669 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-03-25 05:23:23.365680 | orchestrator | Wednesday 25 March 2026 05:23:17 +0000 (0:00:01.522) 0:15:34.237 ******* 2026-03-25 05:23:23.365690 | orchestrator | ok: [testbed-node-2] 2026-03-25 05:23:23.365701 | orchestrator | 2026-03-25 05:23:23.365711 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-25 05:23:23.365722 | orchestrator | Wednesday 25 March 2026 05:23:18 +0000 (0:00:01.135) 0:15:35.374 ******* 2026-03-25 05:23:23.365733 | orchestrator | ok: [testbed-node-2] 2026-03-25 05:23:23.365743 | orchestrator | 2026-03-25 05:23:23.365754 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-25 05:23:23.365764 | orchestrator | Wednesday 25 March 2026 05:23:19 +0000 (0:00:01.471) 0:15:36.845 ******* 2026-03-25 05:23:23.365775 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:23:23.365786 | orchestrator | 2026-03-25 05:23:23.365798 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-25 05:23:23.365810 | orchestrator | Wednesday 25 March 2026 05:23:20 +0000 (0:00:01.159) 0:15:38.005 ******* 2026-03-25 05:23:23.365823 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:23:23.365835 | orchestrator | 2026-03-25 05:23:23.365847 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-25 05:23:23.365859 | orchestrator | Wednesday 25 March 2026 05:23:22 +0000 (0:00:01.224) 0:15:39.230 ******* 2026-03-25 05:23:23.365871 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:23:23.365884 | orchestrator | 2026-03-25 05:23:23.365897 | 
orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-03-25 05:23:23.365916 | orchestrator | Wednesday 25 March 2026 05:23:23 +0000 (0:00:01.139) 0:15:40.370 ******* 2026-03-25 05:24:03.156770 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0) 2026-03-25 05:24:03.156886 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1) 2026-03-25 05:24:03.156903 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-03-25 05:24:03.156916 | orchestrator | 2026-03-25 05:24:03.156929 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-03-25 05:24:03.156964 | orchestrator | Wednesday 25 March 2026 05:23:25 +0000 (0:00:02.069) 0:15:42.439 ******* 2026-03-25 05:24:03.156977 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-03-25 05:24:03.156988 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-03-25 05:24:03.156999 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-03-25 05:24:03.157010 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:24:03.157021 | orchestrator | 2026-03-25 05:24:03.157032 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-03-25 05:24:03.157043 | orchestrator | Wednesday 25 March 2026 05:23:26 +0000 (0:00:01.187) 0:15:43.626 ******* 2026-03-25 05:24:03.157054 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:24:03.157100 | orchestrator | 2026-03-25 05:24:03.157112 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-03-25 05:24:03.157123 | orchestrator | Wednesday 25 March 2026 05:23:27 +0000 (0:00:01.162) 0:15:44.789 ******* 2026-03-25 05:24:03.157134 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-25 05:24:03.157145 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => 
(item=testbed-node-1) 2026-03-25 05:24:03.157156 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-03-25 05:24:03.157167 | orchestrator | ok: [testbed-node-2 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-03-25 05:24:03.157178 | orchestrator | ok: [testbed-node-2 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-25 05:24:03.157189 | orchestrator | ok: [testbed-node-2 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-25 05:24:03.157200 | orchestrator | ok: [testbed-node-2 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-25 05:24:03.157210 | orchestrator | 2026-03-25 05:24:03.157221 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-03-25 05:24:03.157232 | orchestrator | Wednesday 25 March 2026 05:23:29 +0000 (0:00:01.918) 0:15:46.707 ******* 2026-03-25 05:24:03.157243 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-25 05:24:03.157254 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-25 05:24:03.157264 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-03-25 05:24:03.157275 | orchestrator | ok: [testbed-node-2 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-03-25 05:24:03.157301 | orchestrator | ok: [testbed-node-2 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-25 05:24:03.157314 | orchestrator | ok: [testbed-node-2 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-25 05:24:03.157327 | orchestrator | ok: [testbed-node-2 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-25 05:24:03.157340 | orchestrator | 2026-03-25 05:24:03.157352 | orchestrator | TASK [Get ceph cluster status] ************************************************* 2026-03-25 05:24:03.157364 | orchestrator | Wednesday 25 March 2026 05:23:31 +0000 (0:00:02.214) 0:15:48.922 
******* 2026-03-25 05:24:03.157377 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:24:03.157389 | orchestrator | 2026-03-25 05:24:03.157401 | orchestrator | TASK [Display ceph health detail] ********************************************** 2026-03-25 05:24:03.157414 | orchestrator | Wednesday 25 March 2026 05:23:32 +0000 (0:00:00.918) 0:15:49.841 ******* 2026-03-25 05:24:03.157426 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:24:03.157437 | orchestrator | 2026-03-25 05:24:03.157449 | orchestrator | TASK [Fail if cluster isn't in an acceptable state] **************************** 2026-03-25 05:24:03.157461 | orchestrator | Wednesday 25 March 2026 05:23:33 +0000 (0:00:00.864) 0:15:50.705 ******* 2026-03-25 05:24:03.157474 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:24:03.157487 | orchestrator | 2026-03-25 05:24:03.157499 | orchestrator | TASK [Get the ceph quorum status] ********************************************** 2026-03-25 05:24:03.157511 | orchestrator | Wednesday 25 March 2026 05:23:34 +0000 (0:00:00.817) 0:15:51.523 ******* 2026-03-25 05:24:03.157533 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:24:03.157546 | orchestrator | 2026-03-25 05:24:03.157558 | orchestrator | TASK [Fail if the cluster quorum isn't in an acceptable state] ***************** 2026-03-25 05:24:03.157570 | orchestrator | Wednesday 25 March 2026 05:23:35 +0000 (0:00:00.950) 0:15:52.473 ******* 2026-03-25 05:24:03.157582 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:24:03.157595 | orchestrator | 2026-03-25 05:24:03.157606 | orchestrator | TASK [Ensure /var/lib/ceph/bootstrap-rbd-mirror is present] ******************** 2026-03-25 05:24:03.157618 | orchestrator | Wednesday 25 March 2026 05:23:36 +0000 (0:00:00.791) 0:15:53.264 ******* 2026-03-25 05:24:03.157630 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-03-25 05:24:03.157642 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-03-25 
05:24:03.157654 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-03-25 05:24:03.157666 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:24:03.157679 | orchestrator | 2026-03-25 05:24:03.157689 | orchestrator | TASK [Create potentially missing keys (rbd and rbd-mirror)] ******************** 2026-03-25 05:24:03.157700 | orchestrator | Wednesday 25 March 2026 05:23:37 +0000 (0:00:01.107) 0:15:54.372 ******* 2026-03-25 05:24:03.157711 | orchestrator | skipping: [testbed-node-2] => (item=['bootstrap-rbd', 'testbed-node-0'])  2026-03-25 05:24:03.157722 | orchestrator | skipping: [testbed-node-2] => (item=['bootstrap-rbd', 'testbed-node-1'])  2026-03-25 05:24:03.157749 | orchestrator | skipping: [testbed-node-2] => (item=['bootstrap-rbd', 'testbed-node-2'])  2026-03-25 05:24:03.157760 | orchestrator | skipping: [testbed-node-2] => (item=['bootstrap-rbd-mirror', 'testbed-node-0'])  2026-03-25 05:24:03.157771 | orchestrator | skipping: [testbed-node-2] => (item=['bootstrap-rbd-mirror', 'testbed-node-1'])  2026-03-25 05:24:03.157782 | orchestrator | skipping: [testbed-node-2] => (item=['bootstrap-rbd-mirror', 'testbed-node-2'])  2026-03-25 05:24:03.157793 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:24:03.157803 | orchestrator | 2026-03-25 05:24:03.157814 | orchestrator | TASK [Stop ceph mon] *********************************************************** 2026-03-25 05:24:03.157825 | orchestrator | Wednesday 25 March 2026 05:23:39 +0000 (0:00:01.733) 0:15:56.105 ******* 2026-03-25 05:24:03.157836 | orchestrator | changed: [testbed-node-2] => (item=testbed-node-2) 2026-03-25 05:24:03.157847 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-03-25 05:24:03.157858 | orchestrator | 2026-03-25 05:24:03.157868 | orchestrator | TASK [Mask the mgr service] **************************************************** 2026-03-25 05:24:03.157879 | orchestrator | Wednesday 25 March 2026 05:23:42 +0000 (0:00:03.166) 0:15:59.272 ******* 
2026-03-25 05:24:03.157890 | orchestrator | changed: [testbed-node-2] 2026-03-25 05:24:03.157901 | orchestrator | 2026-03-25 05:24:03.157913 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-25 05:24:03.157924 | orchestrator | Wednesday 25 March 2026 05:23:44 +0000 (0:00:02.212) 0:16:01.484 ******* 2026-03-25 05:24:03.157935 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-2 2026-03-25 05:24:03.157947 | orchestrator | 2026-03-25 05:24:03.157957 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-25 05:24:03.157968 | orchestrator | Wednesday 25 March 2026 05:23:45 +0000 (0:00:01.298) 0:16:02.783 ******* 2026-03-25 05:24:03.157979 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-2 2026-03-25 05:24:03.157990 | orchestrator | 2026-03-25 05:24:03.158001 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-25 05:24:03.158012 | orchestrator | Wednesday 25 March 2026 05:23:46 +0000 (0:00:01.136) 0:16:03.919 ******* 2026-03-25 05:24:03.158119 | orchestrator | ok: [testbed-node-2] 2026-03-25 05:24:03.158131 | orchestrator | 2026-03-25 05:24:03.158142 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-25 05:24:03.158186 | orchestrator | Wednesday 25 March 2026 05:23:48 +0000 (0:00:01.591) 0:16:05.511 ******* 2026-03-25 05:24:03.158207 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:24:03.158218 | orchestrator | 2026-03-25 05:24:03.158228 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-25 05:24:03.158239 | orchestrator | Wednesday 25 March 2026 05:23:49 +0000 (0:00:01.134) 0:16:06.645 ******* 2026-03-25 05:24:03.158250 | orchestrator | skipping: [testbed-node-2] 2026-03-25 
05:24:03.158261 | orchestrator | 2026-03-25 05:24:03.158272 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-25 05:24:03.158289 | orchestrator | Wednesday 25 March 2026 05:23:50 +0000 (0:00:01.250) 0:16:07.896 ******* 2026-03-25 05:24:03.158300 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:24:03.158311 | orchestrator | 2026-03-25 05:24:03.158322 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-25 05:24:03.158333 | orchestrator | Wednesday 25 March 2026 05:23:51 +0000 (0:00:01.100) 0:16:08.996 ******* 2026-03-25 05:24:03.158343 | orchestrator | ok: [testbed-node-2] 2026-03-25 05:24:03.158354 | orchestrator | 2026-03-25 05:24:03.158365 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-25 05:24:03.158375 | orchestrator | Wednesday 25 March 2026 05:23:53 +0000 (0:00:01.568) 0:16:10.565 ******* 2026-03-25 05:24:03.158386 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:24:03.158397 | orchestrator | 2026-03-25 05:24:03.158408 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-25 05:24:03.158418 | orchestrator | Wednesday 25 March 2026 05:23:54 +0000 (0:00:01.174) 0:16:11.739 ******* 2026-03-25 05:24:03.158429 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:24:03.158440 | orchestrator | 2026-03-25 05:24:03.158450 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-25 05:24:03.158461 | orchestrator | Wednesday 25 March 2026 05:23:55 +0000 (0:00:01.192) 0:16:12.932 ******* 2026-03-25 05:24:03.158472 | orchestrator | ok: [testbed-node-2] 2026-03-25 05:24:03.158483 | orchestrator | 2026-03-25 05:24:03.158494 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-25 05:24:03.158504 | orchestrator | Wednesday 25 March 2026 
05:23:57 +0000 (0:00:01.598) 0:16:14.531 ******* 2026-03-25 05:24:03.158515 | orchestrator | ok: [testbed-node-2] 2026-03-25 05:24:03.158526 | orchestrator | 2026-03-25 05:24:03.158537 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-25 05:24:03.158548 | orchestrator | Wednesday 25 March 2026 05:23:59 +0000 (0:00:01.634) 0:16:16.166 ******* 2026-03-25 05:24:03.158558 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:24:03.158569 | orchestrator | 2026-03-25 05:24:03.158580 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-25 05:24:03.158591 | orchestrator | Wednesday 25 March 2026 05:23:59 +0000 (0:00:00.778) 0:16:16.944 ******* 2026-03-25 05:24:03.158601 | orchestrator | ok: [testbed-node-2] 2026-03-25 05:24:03.158612 | orchestrator | 2026-03-25 05:24:03.158623 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-25 05:24:03.158634 | orchestrator | Wednesday 25 March 2026 05:24:00 +0000 (0:00:00.881) 0:16:17.826 ******* 2026-03-25 05:24:03.158645 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:24:03.158655 | orchestrator | 2026-03-25 05:24:03.158666 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-25 05:24:03.158682 | orchestrator | Wednesday 25 March 2026 05:24:01 +0000 (0:00:00.772) 0:16:18.599 ******* 2026-03-25 05:24:03.158699 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:24:03.158717 | orchestrator | 2026-03-25 05:24:03.158734 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-25 05:24:03.158745 | orchestrator | Wednesday 25 March 2026 05:24:02 +0000 (0:00:00.782) 0:16:19.382 ******* 2026-03-25 05:24:03.158766 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:24:45.341557 | orchestrator | 2026-03-25 05:24:45.341675 | orchestrator | TASK [ceph-handler 
: Set_fact handler_nfs_status] ****************************** 2026-03-25 05:24:45.341692 | orchestrator | Wednesday 25 March 2026 05:24:03 +0000 (0:00:00.779) 0:16:20.161 ******* 2026-03-25 05:24:45.341726 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:24:45.341739 | orchestrator | 2026-03-25 05:24:45.341750 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-25 05:24:45.341761 | orchestrator | Wednesday 25 March 2026 05:24:03 +0000 (0:00:00.803) 0:16:20.965 ******* 2026-03-25 05:24:45.341772 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:24:45.341783 | orchestrator | 2026-03-25 05:24:45.341794 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-25 05:24:45.341804 | orchestrator | Wednesday 25 March 2026 05:24:04 +0000 (0:00:00.775) 0:16:21.741 ******* 2026-03-25 05:24:45.341815 | orchestrator | ok: [testbed-node-2] 2026-03-25 05:24:45.341827 | orchestrator | 2026-03-25 05:24:45.341838 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-25 05:24:45.341849 | orchestrator | Wednesday 25 March 2026 05:24:05 +0000 (0:00:00.875) 0:16:22.617 ******* 2026-03-25 05:24:45.341860 | orchestrator | ok: [testbed-node-2] 2026-03-25 05:24:45.341870 | orchestrator | 2026-03-25 05:24:45.341881 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-25 05:24:45.341892 | orchestrator | Wednesday 25 March 2026 05:24:06 +0000 (0:00:00.808) 0:16:23.425 ******* 2026-03-25 05:24:45.341903 | orchestrator | ok: [testbed-node-2] 2026-03-25 05:24:45.341913 | orchestrator | 2026-03-25 05:24:45.341925 | orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-03-25 05:24:45.341936 | orchestrator | Wednesday 25 March 2026 05:24:07 +0000 (0:00:00.809) 0:16:24.235 ******* 2026-03-25 05:24:45.341947 | orchestrator | skipping: 
[testbed-node-2] 2026-03-25 05:24:45.341958 | orchestrator | 2026-03-25 05:24:45.341969 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-03-25 05:24:45.341979 | orchestrator | Wednesday 25 March 2026 05:24:08 +0000 (0:00:00.812) 0:16:25.047 ******* 2026-03-25 05:24:45.341990 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:24:45.342001 | orchestrator | 2026-03-25 05:24:45.342012 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-03-25 05:24:45.342114 | orchestrator | Wednesday 25 March 2026 05:24:08 +0000 (0:00:00.829) 0:16:25.877 ******* 2026-03-25 05:24:45.342128 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:24:45.342140 | orchestrator | 2026-03-25 05:24:45.342152 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-03-25 05:24:45.342165 | orchestrator | Wednesday 25 March 2026 05:24:09 +0000 (0:00:00.887) 0:16:26.765 ******* 2026-03-25 05:24:45.342177 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:24:45.342197 | orchestrator | 2026-03-25 05:24:45.342212 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-03-25 05:24:45.342224 | orchestrator | Wednesday 25 March 2026 05:24:10 +0000 (0:00:00.825) 0:16:27.591 ******* 2026-03-25 05:24:45.342236 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:24:45.342248 | orchestrator | 2026-03-25 05:24:45.342274 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-03-25 05:24:45.342287 | orchestrator | Wednesday 25 March 2026 05:24:11 +0000 (0:00:00.776) 0:16:28.367 ******* 2026-03-25 05:24:45.342300 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:24:45.342312 | orchestrator | 2026-03-25 05:24:45.342324 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-03-25 05:24:45.342336 | 
orchestrator | Wednesday 25 March 2026 05:24:12 +0000 (0:00:00.857) 0:16:29.224 ******* 2026-03-25 05:24:45.342348 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:24:45.342361 | orchestrator | 2026-03-25 05:24:45.342374 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-03-25 05:24:45.342387 | orchestrator | Wednesday 25 March 2026 05:24:12 +0000 (0:00:00.769) 0:16:29.994 ******* 2026-03-25 05:24:45.342400 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:24:45.342412 | orchestrator | 2026-03-25 05:24:45.342424 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-03-25 05:24:45.342436 | orchestrator | Wednesday 25 March 2026 05:24:13 +0000 (0:00:00.772) 0:16:30.767 ******* 2026-03-25 05:24:45.342458 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:24:45.342470 | orchestrator | 2026-03-25 05:24:45.342481 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-03-25 05:24:45.342492 | orchestrator | Wednesday 25 March 2026 05:24:14 +0000 (0:00:00.815) 0:16:31.582 ******* 2026-03-25 05:24:45.342502 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:24:45.342513 | orchestrator | 2026-03-25 05:24:45.342524 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-03-25 05:24:45.342535 | orchestrator | Wednesday 25 March 2026 05:24:15 +0000 (0:00:00.824) 0:16:32.406 ******* 2026-03-25 05:24:45.342545 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:24:45.342556 | orchestrator | 2026-03-25 05:24:45.342567 | orchestrator | TASK [ceph-common : Include selinux.yml] *************************************** 2026-03-25 05:24:45.342577 | orchestrator | Wednesday 25 March 2026 05:24:16 +0000 (0:00:00.775) 0:16:33.182 ******* 2026-03-25 05:24:45.342588 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:24:45.342599 | 
orchestrator | 2026-03-25 05:24:45.342609 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-03-25 05:24:45.342620 | orchestrator | Wednesday 25 March 2026 05:24:16 +0000 (0:00:00.760) 0:16:33.942 ******* 2026-03-25 05:24:45.342631 | orchestrator | ok: [testbed-node-2] 2026-03-25 05:24:45.342641 | orchestrator | 2026-03-25 05:24:45.342652 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-03-25 05:24:45.342663 | orchestrator | Wednesday 25 March 2026 05:24:18 +0000 (0:00:01.638) 0:16:35.581 ******* 2026-03-25 05:24:45.342674 | orchestrator | ok: [testbed-node-2] 2026-03-25 05:24:45.342684 | orchestrator | 2026-03-25 05:24:45.342695 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-03-25 05:24:45.342706 | orchestrator | Wednesday 25 March 2026 05:24:21 +0000 (0:00:03.091) 0:16:38.673 ******* 2026-03-25 05:24:45.342717 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-2 2026-03-25 05:24:45.342728 | orchestrator | 2026-03-25 05:24:45.342758 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-03-25 05:24:45.342769 | orchestrator | Wednesday 25 March 2026 05:24:22 +0000 (0:00:01.332) 0:16:40.005 ******* 2026-03-25 05:24:45.342780 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:24:45.342791 | orchestrator | 2026-03-25 05:24:45.342802 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-03-25 05:24:45.342813 | orchestrator | Wednesday 25 March 2026 05:24:24 +0000 (0:00:01.132) 0:16:41.138 ******* 2026-03-25 05:24:45.342823 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:24:45.342834 | orchestrator | 2026-03-25 05:24:45.342845 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-03-25 
05:24:45.342856 | orchestrator | Wednesday 25 March 2026 05:24:25 +0000 (0:00:01.190) 0:16:42.329 ******* 2026-03-25 05:24:45.342867 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-03-25 05:24:45.342878 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-03-25 05:24:45.342889 | orchestrator | 2026-03-25 05:24:45.342900 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-03-25 05:24:45.342910 | orchestrator | Wednesday 25 March 2026 05:24:27 +0000 (0:00:01.864) 0:16:44.194 ******* 2026-03-25 05:24:45.342921 | orchestrator | ok: [testbed-node-2] 2026-03-25 05:24:45.342932 | orchestrator | 2026-03-25 05:24:45.342943 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-03-25 05:24:45.342953 | orchestrator | Wednesday 25 March 2026 05:24:28 +0000 (0:00:01.473) 0:16:45.668 ******* 2026-03-25 05:24:45.342964 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:24:45.342975 | orchestrator | 2026-03-25 05:24:45.342986 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-03-25 05:24:45.342997 | orchestrator | Wednesday 25 March 2026 05:24:29 +0000 (0:00:01.197) 0:16:46.865 ******* 2026-03-25 05:24:45.343008 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:24:45.343026 | orchestrator | 2026-03-25 05:24:45.343037 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-03-25 05:24:45.343048 | orchestrator | Wednesday 25 March 2026 05:24:30 +0000 (0:00:00.818) 0:16:47.684 ******* 2026-03-25 05:24:45.343059 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:24:45.343070 | orchestrator | 2026-03-25 05:24:45.343081 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-03-25 05:24:45.343143 | orchestrator | Wednesday 25 
March 2026 05:24:31 +0000 (0:00:00.803) 0:16:48.488 ******* 2026-03-25 05:24:45.343155 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-2 2026-03-25 05:24:45.343165 | orchestrator | 2026-03-25 05:24:45.343176 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-03-25 05:24:45.343187 | orchestrator | Wednesday 25 March 2026 05:24:32 +0000 (0:00:01.106) 0:16:49.595 ******* 2026-03-25 05:24:45.343198 | orchestrator | ok: [testbed-node-2] 2026-03-25 05:24:45.343209 | orchestrator | 2026-03-25 05:24:45.343225 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-03-25 05:24:45.343236 | orchestrator | Wednesday 25 March 2026 05:24:34 +0000 (0:00:01.739) 0:16:51.334 ******* 2026-03-25 05:24:45.343247 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-25 05:24:45.343258 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-25 05:24:45.343269 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)  2026-03-25 05:24:45.343279 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:24:45.343290 | orchestrator | 2026-03-25 05:24:45.343301 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-03-25 05:24:45.343312 | orchestrator | Wednesday 25 March 2026 05:24:35 +0000 (0:00:01.161) 0:16:52.495 ******* 2026-03-25 05:24:45.343322 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:24:45.343333 | orchestrator | 2026-03-25 05:24:45.343344 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-03-25 05:24:45.343355 | orchestrator | Wednesday 25 March 2026 05:24:36 +0000 (0:00:01.132) 0:16:53.627 ******* 2026-03-25 05:24:45.343365 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:24:45.343376 
| orchestrator | 2026-03-25 05:24:45.343387 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-03-25 05:24:45.343398 | orchestrator | Wednesday 25 March 2026 05:24:37 +0000 (0:00:01.273) 0:16:54.901 ******* 2026-03-25 05:24:45.343408 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:24:45.343419 | orchestrator | 2026-03-25 05:24:45.343430 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-03-25 05:24:45.343441 | orchestrator | Wednesday 25 March 2026 05:24:39 +0000 (0:00:01.282) 0:16:56.183 ******* 2026-03-25 05:24:45.343451 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:24:45.343462 | orchestrator | 2026-03-25 05:24:45.343473 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-03-25 05:24:45.343484 | orchestrator | Wednesday 25 March 2026 05:24:40 +0000 (0:00:01.160) 0:16:57.344 ******* 2026-03-25 05:24:45.343494 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:24:45.343505 | orchestrator | 2026-03-25 05:24:45.343516 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-03-25 05:24:45.343526 | orchestrator | Wednesday 25 March 2026 05:24:41 +0000 (0:00:00.778) 0:16:58.123 ******* 2026-03-25 05:24:45.343537 | orchestrator | ok: [testbed-node-2] 2026-03-25 05:24:45.343548 | orchestrator | 2026-03-25 05:24:45.343558 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-03-25 05:24:45.343569 | orchestrator | Wednesday 25 March 2026 05:24:43 +0000 (0:00:02.256) 0:17:00.380 ******* 2026-03-25 05:24:45.343580 | orchestrator | ok: [testbed-node-2] 2026-03-25 05:24:45.343590 | orchestrator | 2026-03-25 05:24:45.343601 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-03-25 05:24:45.343612 | orchestrator | Wednesday 25 March 2026 05:24:44 +0000 
(0:00:00.773) 0:17:01.153 ******* 2026-03-25 05:24:45.343630 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-2 2026-03-25 05:24:45.343641 | orchestrator | 2026-03-25 05:24:45.343658 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-03-25 05:25:22.498780 | orchestrator | Wednesday 25 March 2026 05:24:45 +0000 (0:00:01.189) 0:17:02.343 ******* 2026-03-25 05:25:22.498898 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:25:22.498915 | orchestrator | 2026-03-25 05:25:22.498928 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-03-25 05:25:22.498939 | orchestrator | Wednesday 25 March 2026 05:24:46 +0000 (0:00:01.165) 0:17:03.508 ******* 2026-03-25 05:25:22.498950 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:25:22.498961 | orchestrator | 2026-03-25 05:25:22.498972 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-03-25 05:25:22.498983 | orchestrator | Wednesday 25 March 2026 05:24:47 +0000 (0:00:01.201) 0:17:04.710 ******* 2026-03-25 05:25:22.498994 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:25:22.499005 | orchestrator | 2026-03-25 05:25:22.499015 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-03-25 05:25:22.499026 | orchestrator | Wednesday 25 March 2026 05:24:48 +0000 (0:00:01.217) 0:17:05.927 ******* 2026-03-25 05:25:22.499037 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:25:22.499048 | orchestrator | 2026-03-25 05:25:22.499059 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-03-25 05:25:22.499069 | orchestrator | Wednesday 25 March 2026 05:24:50 +0000 (0:00:01.165) 0:17:07.093 ******* 2026-03-25 05:25:22.499080 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:25:22.499091 | orchestrator | 
2026-03-25 05:25:22.499101 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-03-25 05:25:22.499112 | orchestrator | Wednesday 25 March 2026 05:24:51 +0000 (0:00:01.247) 0:17:08.341 ******* 2026-03-25 05:25:22.499183 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:25:22.499195 | orchestrator | 2026-03-25 05:25:22.499205 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-03-25 05:25:22.499216 | orchestrator | Wednesday 25 March 2026 05:24:52 +0000 (0:00:01.161) 0:17:09.503 ******* 2026-03-25 05:25:22.499227 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:25:22.499238 | orchestrator | 2026-03-25 05:25:22.499248 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-03-25 05:25:22.499259 | orchestrator | Wednesday 25 March 2026 05:24:53 +0000 (0:00:01.168) 0:17:10.671 ******* 2026-03-25 05:25:22.499270 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:25:22.499281 | orchestrator | 2026-03-25 05:25:22.499292 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-03-25 05:25:22.499303 | orchestrator | Wednesday 25 March 2026 05:24:54 +0000 (0:00:01.209) 0:17:11.881 ******* 2026-03-25 05:25:22.499314 | orchestrator | ok: [testbed-node-2] 2026-03-25 05:25:22.499387 | orchestrator | 2026-03-25 05:25:22.499402 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-03-25 05:25:22.499413 | orchestrator | Wednesday 25 March 2026 05:24:55 +0000 (0:00:00.875) 0:17:12.757 ******* 2026-03-25 05:25:22.499442 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-2 2026-03-25 05:25:22.499454 | orchestrator | 2026-03-25 05:25:22.499465 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-03-25 05:25:22.499476 | 
orchestrator | Wednesday 25 March 2026 05:24:56 +0000 (0:00:01.175) 0:17:13.932 ******* 2026-03-25 05:25:22.499487 | orchestrator | ok: [testbed-node-2] => (item=/etc/ceph) 2026-03-25 05:25:22.499498 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/) 2026-03-25 05:25:22.499509 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/mon) 2026-03-25 05:25:22.499520 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/osd) 2026-03-25 05:25:22.499531 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/mds) 2026-03-25 05:25:22.499563 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/tmp) 2026-03-25 05:25:22.499575 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/crash) 2026-03-25 05:25:22.499586 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/radosgw) 2026-03-25 05:25:22.499596 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw) 2026-03-25 05:25:22.499607 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr) 2026-03-25 05:25:22.499618 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds) 2026-03-25 05:25:22.499628 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd) 2026-03-25 05:25:22.499639 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd) 2026-03-25 05:25:22.499650 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-03-25 05:25:22.499660 | orchestrator | ok: [testbed-node-2] => (item=/var/run/ceph) 2026-03-25 05:25:22.499671 | orchestrator | ok: [testbed-node-2] => (item=/var/log/ceph) 2026-03-25 05:25:22.499682 | orchestrator | 2026-03-25 05:25:22.499692 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-03-25 05:25:22.499703 | orchestrator | Wednesday 25 March 2026 05:25:03 +0000 (0:00:06.345) 0:17:20.278 ******* 2026-03-25 05:25:22.499714 | orchestrator | skipping: [testbed-node-2] 2026-03-25 
05:25:22.499725 | orchestrator | 2026-03-25 05:25:22.499735 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-03-25 05:25:22.499746 | orchestrator | Wednesday 25 March 2026 05:25:04 +0000 (0:00:00.787) 0:17:21.065 ******* 2026-03-25 05:25:22.499756 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:25:22.499767 | orchestrator | 2026-03-25 05:25:22.499778 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-03-25 05:25:22.499788 | orchestrator | Wednesday 25 March 2026 05:25:04 +0000 (0:00:00.819) 0:17:21.885 ******* 2026-03-25 05:25:22.499799 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:25:22.499810 | orchestrator | 2026-03-25 05:25:22.499820 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-03-25 05:25:22.499831 | orchestrator | Wednesday 25 March 2026 05:25:05 +0000 (0:00:00.805) 0:17:22.690 ******* 2026-03-25 05:25:22.499841 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:25:22.499852 | orchestrator | 2026-03-25 05:25:22.499863 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-03-25 05:25:22.499892 | orchestrator | Wednesday 25 March 2026 05:25:06 +0000 (0:00:00.831) 0:17:23.522 ******* 2026-03-25 05:25:22.499904 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:25:22.499915 | orchestrator | 2026-03-25 05:25:22.499926 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-03-25 05:25:22.499936 | orchestrator | Wednesday 25 March 2026 05:25:07 +0000 (0:00:00.807) 0:17:24.330 ******* 2026-03-25 05:25:22.499947 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:25:22.499958 | orchestrator | 2026-03-25 05:25:22.499968 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-03-25 05:25:22.499980 | 
orchestrator | Wednesday 25 March 2026 05:25:08 +0000 (0:00:00.805) 0:17:25.135 ******* 2026-03-25 05:25:22.499990 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:25:22.500001 | orchestrator | 2026-03-25 05:25:22.500012 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-03-25 05:25:22.500023 | orchestrator | Wednesday 25 March 2026 05:25:08 +0000 (0:00:00.782) 0:17:25.917 ******* 2026-03-25 05:25:22.500033 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:25:22.500044 | orchestrator | 2026-03-25 05:25:22.500055 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-03-25 05:25:22.500065 | orchestrator | Wednesday 25 March 2026 05:25:09 +0000 (0:00:00.819) 0:17:26.737 ******* 2026-03-25 05:25:22.500076 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:25:22.500087 | orchestrator | 2026-03-25 05:25:22.500098 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-03-25 05:25:22.500140 | orchestrator | Wednesday 25 March 2026 05:25:10 +0000 (0:00:00.804) 0:17:27.542 ******* 2026-03-25 05:25:22.500154 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:25:22.500165 | orchestrator | 2026-03-25 05:25:22.500176 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-03-25 05:25:22.500187 | orchestrator | Wednesday 25 March 2026 05:25:11 +0000 (0:00:00.780) 0:17:28.322 ******* 2026-03-25 05:25:22.500197 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:25:22.500208 | orchestrator | 2026-03-25 05:25:22.500219 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-03-25 05:25:22.500230 | orchestrator | Wednesday 25 March 2026 05:25:12 +0000 (0:00:00.846) 0:17:29.169 ******* 2026-03-25 05:25:22.500241 | orchestrator | 
skipping: [testbed-node-2] 2026-03-25 05:25:22.500251 | orchestrator | 2026-03-25 05:25:22.500262 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-03-25 05:25:22.500273 | orchestrator | Wednesday 25 March 2026 05:25:12 +0000 (0:00:00.755) 0:17:29.925 ******* 2026-03-25 05:25:22.500284 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:25:22.500295 | orchestrator | 2026-03-25 05:25:22.500305 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-03-25 05:25:22.500316 | orchestrator | Wednesday 25 March 2026 05:25:13 +0000 (0:00:00.905) 0:17:30.830 ******* 2026-03-25 05:25:22.500333 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:25:22.500344 | orchestrator | 2026-03-25 05:25:22.500355 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-03-25 05:25:22.500366 | orchestrator | Wednesday 25 March 2026 05:25:14 +0000 (0:00:00.794) 0:17:31.624 ******* 2026-03-25 05:25:22.500376 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:25:22.500387 | orchestrator | 2026-03-25 05:25:22.500398 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-03-25 05:25:22.500409 | orchestrator | Wednesday 25 March 2026 05:25:15 +0000 (0:00:00.918) 0:17:32.543 ******* 2026-03-25 05:25:22.500420 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:25:22.500431 | orchestrator | 2026-03-25 05:25:22.500441 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-03-25 05:25:22.500452 | orchestrator | Wednesday 25 March 2026 05:25:16 +0000 (0:00:00.784) 0:17:33.328 ******* 2026-03-25 05:25:22.500464 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:25:22.500482 | orchestrator | 2026-03-25 05:25:22.500505 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node 
"{{ ceph_dashboard_call_item }}"] *** 2026-03-25 05:25:22.500533 | orchestrator | Wednesday 25 March 2026 05:25:17 +0000 (0:00:00.772) 0:17:34.100 ******* 2026-03-25 05:25:22.500550 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:25:22.500567 | orchestrator | 2026-03-25 05:25:22.500584 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-03-25 05:25:22.500601 | orchestrator | Wednesday 25 March 2026 05:25:17 +0000 (0:00:00.767) 0:17:34.871 ******* 2026-03-25 05:25:22.500616 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:25:22.500632 | orchestrator | 2026-03-25 05:25:22.500650 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-03-25 05:25:22.500666 | orchestrator | Wednesday 25 March 2026 05:25:18 +0000 (0:00:00.851) 0:17:35.723 ******* 2026-03-25 05:25:22.500684 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:25:22.500701 | orchestrator | 2026-03-25 05:25:22.500719 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-25 05:25:22.500738 | orchestrator | Wednesday 25 March 2026 05:25:19 +0000 (0:00:00.835) 0:17:36.559 ******* 2026-03-25 05:25:22.500757 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:25:22.500777 | orchestrator | 2026-03-25 05:25:22.500796 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-25 05:25:22.500808 | orchestrator | Wednesday 25 March 2026 05:25:20 +0000 (0:00:00.764) 0:17:37.323 ******* 2026-03-25 05:25:22.500818 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2026-03-25 05:25:22.500840 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2026-03-25 05:25:22.500852 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2026-03-25 05:25:22.500870 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:25:22.500886 | orchestrator | 2026-03-25 
05:25:22.500904 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-25 05:25:22.500922 | orchestrator | Wednesday 25 March 2026 05:25:21 +0000 (0:00:01.070) 0:17:38.394 ******* 2026-03-25 05:25:22.500941 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2026-03-25 05:25:22.500973 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2026-03-25 05:26:50.787533 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2026-03-25 05:26:50.787684 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:26:50.787713 | orchestrator | 2026-03-25 05:26:50.787735 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-25 05:26:50.787757 | orchestrator | Wednesday 25 March 2026 05:25:22 +0000 (0:00:01.106) 0:17:39.500 ******* 2026-03-25 05:26:50.787775 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2026-03-25 05:26:50.787786 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2026-03-25 05:26:50.787797 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2026-03-25 05:26:50.787808 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:26:50.787819 | orchestrator | 2026-03-25 05:26:50.787830 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-25 05:26:50.787841 | orchestrator | Wednesday 25 March 2026 05:25:23 +0000 (0:00:01.093) 0:17:40.593 ******* 2026-03-25 05:26:50.787851 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:26:50.787862 | orchestrator | 2026-03-25 05:26:50.787874 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-25 05:26:50.787885 | orchestrator | Wednesday 25 March 2026 05:25:24 +0000 (0:00:00.845) 0:17:41.439 ******* 2026-03-25 05:26:50.787896 | orchestrator | skipping: [testbed-node-2] => (item=0)  2026-03-25 05:26:50.787907 | 
orchestrator | skipping: [testbed-node-2] 2026-03-25 05:26:50.787918 | orchestrator | 2026-03-25 05:26:50.787928 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-03-25 05:26:50.787939 | orchestrator | Wednesday 25 March 2026 05:25:25 +0000 (0:00:00.909) 0:17:42.349 ******* 2026-03-25 05:26:50.787950 | orchestrator | ok: [testbed-node-2] 2026-03-25 05:26:50.787961 | orchestrator | 2026-03-25 05:26:50.787971 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] ********************************** 2026-03-25 05:26:50.787982 | orchestrator | Wednesday 25 March 2026 05:25:26 +0000 (0:00:01.448) 0:17:43.797 ******* 2026-03-25 05:26:50.787993 | orchestrator | ok: [testbed-node-2] 2026-03-25 05:26:50.788003 | orchestrator | 2026-03-25 05:26:50.788014 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] ********************************** 2026-03-25 05:26:50.788025 | orchestrator | Wednesday 25 March 2026 05:25:27 +0000 (0:00:00.821) 0:17:44.618 ******* 2026-03-25 05:26:50.788035 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-2 2026-03-25 05:26:50.788049 | orchestrator | 2026-03-25 05:26:50.788061 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] ************** 2026-03-25 05:26:50.788073 | orchestrator | Wednesday 25 March 2026 05:25:28 +0000 (0:00:01.185) 0:17:45.804 ******* 2026-03-25 05:26:50.788085 | orchestrator | ok: [testbed-node-2] 2026-03-25 05:26:50.788097 | orchestrator | 2026-03-25 05:26:50.788109 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] ***************************** 2026-03-25 05:26:50.788138 | orchestrator | Wednesday 25 March 2026 05:25:32 +0000 (0:00:03.962) 0:17:49.767 ******* 2026-03-25 05:26:50.788152 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:26:50.788165 | orchestrator | 2026-03-25 05:26:50.788177 | orchestrator | TASK [ceph-mon : Set_fact 
_initial_mon_key_success] **************************** 2026-03-25 05:26:50.788227 | orchestrator | Wednesday 25 March 2026 05:25:33 +0000 (0:00:01.171) 0:17:50.939 ******* 2026-03-25 05:26:50.788248 | orchestrator | ok: [testbed-node-2] 2026-03-25 05:26:50.788302 | orchestrator | 2026-03-25 05:26:50.788319 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] ******************* 2026-03-25 05:26:50.788332 | orchestrator | Wednesday 25 March 2026 05:25:35 +0000 (0:00:01.150) 0:17:52.090 ******* 2026-03-25 05:26:50.788344 | orchestrator | ok: [testbed-node-2] 2026-03-25 05:26:50.788357 | orchestrator | 2026-03-25 05:26:50.788369 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] ******************************* 2026-03-25 05:26:50.788381 | orchestrator | Wednesday 25 March 2026 05:25:36 +0000 (0:00:01.181) 0:17:53.271 ******* 2026-03-25 05:26:50.788394 | orchestrator | changed: [testbed-node-2] 2026-03-25 05:26:50.788406 | orchestrator | 2026-03-25 05:26:50.788416 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] *********** 2026-03-25 05:26:50.788427 | orchestrator | Wednesday 25 March 2026 05:25:38 +0000 (0:00:02.124) 0:17:55.396 ******* 2026-03-25 05:26:50.788437 | orchestrator | ok: [testbed-node-2] 2026-03-25 05:26:50.788448 | orchestrator | 2026-03-25 05:26:50.788458 | orchestrator | TASK [ceph-mon : Create monitor directory] ************************************* 2026-03-25 05:26:50.788469 | orchestrator | Wednesday 25 March 2026 05:25:40 +0000 (0:00:01.679) 0:17:57.076 ******* 2026-03-25 05:26:50.788479 | orchestrator | ok: [testbed-node-2] 2026-03-25 05:26:50.788490 | orchestrator | 2026-03-25 05:26:50.788500 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] *************** 2026-03-25 05:26:50.788511 | orchestrator | Wednesday 25 March 2026 05:25:41 +0000 (0:00:01.475) 0:17:58.552 ******* 2026-03-25 05:26:50.788522 | orchestrator | ok: [testbed-node-2] 
2026-03-25 05:26:50.788532 | orchestrator | 2026-03-25 05:26:50.788543 | orchestrator | TASK [ceph-mon : Create admin keyring] ***************************************** 2026-03-25 05:26:50.788553 | orchestrator | Wednesday 25 March 2026 05:25:43 +0000 (0:00:01.592) 0:18:00.145 ******* 2026-03-25 05:26:50.788564 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-03-25 05:26:50.788575 | orchestrator | 2026-03-25 05:26:50.788585 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ****************************************** 2026-03-25 05:26:50.788596 | orchestrator | Wednesday 25 March 2026 05:25:44 +0000 (0:00:01.569) 0:18:01.714 ******* 2026-03-25 05:26:50.788606 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-03-25 05:26:50.788617 | orchestrator | 2026-03-25 05:26:50.788627 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ****************************** 2026-03-25 05:26:50.788638 | orchestrator | Wednesday 25 March 2026 05:25:46 +0000 (0:00:01.581) 0:18:03.295 ******* 2026-03-25 05:26:50.788649 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-25 05:26:50.788659 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-25 05:26:50.788670 | orchestrator | ok: [testbed-node-2] => (item=None) 2026-03-25 05:26:50.788682 | orchestrator | ok: [testbed-node-2 -> {{ item }}] 2026-03-25 05:26:50.788701 | orchestrator | 2026-03-25 05:26:50.788745 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************ 2026-03-25 05:26:50.788767 | orchestrator | Wednesday 25 March 2026 05:25:50 +0000 (0:00:04.076) 0:18:07.371 ******* 2026-03-25 05:26:50.788786 | orchestrator | changed: [testbed-node-2] 2026-03-25 05:26:50.788805 | orchestrator | 2026-03-25 05:26:50.788820 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] ************************** 2026-03-25 05:26:50.788831 | 
orchestrator | Wednesday 25 March 2026 05:25:52 +0000 (0:00:02.121) 0:18:09.493 ******* 2026-03-25 05:26:50.788842 | orchestrator | ok: [testbed-node-2] 2026-03-25 05:26:50.788852 | orchestrator | 2026-03-25 05:26:50.788863 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2026-03-25 05:26:50.788874 | orchestrator | Wednesday 25 March 2026 05:25:53 +0000 (0:00:01.156) 0:18:10.649 ******* 2026-03-25 05:26:50.788884 | orchestrator | ok: [testbed-node-2] 2026-03-25 05:26:50.788895 | orchestrator | 2026-03-25 05:26:50.788906 | orchestrator | TASK [ceph-mon : Generate initial monmap] ************************************** 2026-03-25 05:26:50.788916 | orchestrator | Wednesday 25 March 2026 05:25:54 +0000 (0:00:01.211) 0:18:11.860 ******* 2026-03-25 05:26:50.788927 | orchestrator | ok: [testbed-node-2] 2026-03-25 05:26:50.788948 | orchestrator | 2026-03-25 05:26:50.788958 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2026-03-25 05:26:50.788969 | orchestrator | Wednesday 25 March 2026 05:25:56 +0000 (0:00:01.884) 0:18:13.745 ******* 2026-03-25 05:26:50.788980 | orchestrator | ok: [testbed-node-2] 2026-03-25 05:26:50.788990 | orchestrator | 2026-03-25 05:26:50.789001 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2026-03-25 05:26:50.789012 | orchestrator | Wednesday 25 March 2026 05:25:58 +0000 (0:00:01.664) 0:18:15.409 ******* 2026-03-25 05:26:50.789022 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:26:50.789033 | orchestrator | 2026-03-25 05:26:50.789044 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************ 2026-03-25 05:26:50.789055 | orchestrator | Wednesday 25 March 2026 05:25:59 +0000 (0:00:00.841) 0:18:16.251 ******* 2026-03-25 05:26:50.789066 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-2 2026-03-25 05:26:50.789076 | 
orchestrator |
2026-03-25 05:26:50.789087 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] *************
2026-03-25 05:26:50.789097 | orchestrator | Wednesday 25 March 2026 05:26:00 +0000 (0:00:01.155) 0:18:17.406 *******
2026-03-25 05:26:50.789108 | orchestrator | skipping: [testbed-node-2]
2026-03-25 05:26:50.789119 | orchestrator |
2026-03-25 05:26:50.789129 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] ***********************
2026-03-25 05:26:50.789140 | orchestrator | Wednesday 25 March 2026 05:26:01 +0000 (0:00:01.104) 0:18:18.511 *******
2026-03-25 05:26:50.789150 | orchestrator | skipping: [testbed-node-2]
2026-03-25 05:26:50.789161 | orchestrator |
2026-03-25 05:26:50.789172 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************
2026-03-25 05:26:50.789218 | orchestrator | Wednesday 25 March 2026 05:26:02 +0000 (0:00:01.147) 0:18:19.659 *******
2026-03-25 05:26:50.789231 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-2
2026-03-25 05:26:50.789242 | orchestrator |
2026-03-25 05:26:50.789253 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] *****************
2026-03-25 05:26:50.789264 | orchestrator | Wednesday 25 March 2026 05:26:03 +0000 (0:00:02.636) 0:18:20.812 *******
2026-03-25 05:26:50.789274 | orchestrator | ok: [testbed-node-2]
2026-03-25 05:26:50.789285 | orchestrator |
2026-03-25 05:26:50.789295 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************
2026-03-25 05:26:50.789306 | orchestrator | Wednesday 25 March 2026 05:26:06 +0000 (0:00:02.636) 0:18:23.448 *******
2026-03-25 05:26:50.789317 | orchestrator | ok: [testbed-node-2]
2026-03-25 05:26:50.789328 | orchestrator |
2026-03-25 05:26:50.789338 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] ***************************************
2026-03-25 05:26:50.789349 | orchestrator | Wednesday 25 March 2026 05:26:08 +0000 (0:00:01.981) 0:18:25.430 *******
2026-03-25 05:26:50.789360 | orchestrator | ok: [testbed-node-2]
2026-03-25 05:26:50.789370 | orchestrator |
2026-03-25 05:26:50.789381 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************
2026-03-25 05:26:50.789392 | orchestrator | Wednesday 25 March 2026 05:26:10 +0000 (0:00:02.380) 0:18:27.810 *******
2026-03-25 05:26:50.789403 | orchestrator | changed: [testbed-node-2]
2026-03-25 05:26:50.789413 | orchestrator |
2026-03-25 05:26:50.789424 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] **********************************
2026-03-25 05:26:50.789435 | orchestrator | Wednesday 25 March 2026 05:26:13 +0000 (0:00:02.852) 0:18:30.663 *******
2026-03-25 05:26:50.789446 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-2
2026-03-25 05:26:50.789456 | orchestrator |
2026-03-25 05:26:50.789467 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] *************
2026-03-25 05:26:50.789477 | orchestrator | Wednesday 25 March 2026 05:26:14 +0000 (0:00:01.122) 0:18:31.786 *******
2026-03-25 05:26:50.789488 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Waiting for the monitor(s) to form the quorum... (10 retries left).
2026-03-25 05:26:50.789499 | orchestrator | ok: [testbed-node-2]
2026-03-25 05:26:50.789517 | orchestrator |
2026-03-25 05:26:50.789528 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] **************************************
2026-03-25 05:26:50.789538 | orchestrator | Wednesday 25 March 2026 05:26:37 +0000 (0:00:23.000) 0:18:54.787 *******
2026-03-25 05:26:50.789549 | orchestrator | ok: [testbed-node-2]
2026-03-25 05:26:50.789560 | orchestrator |
2026-03-25 05:26:50.789570 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] ***********************************
2026-03-25 05:26:50.789581 | orchestrator | Wednesday 25 March 2026 05:26:40 +0000 (0:00:02.600) 0:18:57.388 *******
2026-03-25 05:26:50.789592 | orchestrator | skipping: [testbed-node-2]
2026-03-25 05:26:50.789602 | orchestrator |
2026-03-25 05:26:50.789613 | orchestrator | TASK [ceph-mon : Set cluster configs] ******************************************
2026-03-25 05:26:50.789624 | orchestrator | Wednesday 25 March 2026 05:26:41 +0000 (0:00:00.830) 0:18:58.218 *******
2026-03-25 05:26:50.789646 | orchestrator | ok: [testbed-node-2] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__fe6f3167ab81d5784c37329f8a3bb9b2d91cf741'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}])
2026-03-25 05:27:27.449142 | orchestrator | ok: [testbed-node-2] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__fe6f3167ab81d5784c37329f8a3bb9b2d91cf741'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}])
2026-03-25 05:27:27.449337 | orchestrator | ok: [testbed-node-2] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__fe6f3167ab81d5784c37329f8a3bb9b2d91cf741'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}])
2026-03-25 05:27:27.449358 | orchestrator | ok: [testbed-node-2] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__fe6f3167ab81d5784c37329f8a3bb9b2d91cf741'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}])
2026-03-25 05:27:27.449373 | orchestrator | ok: [testbed-node-2] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__fe6f3167ab81d5784c37329f8a3bb9b2d91cf741'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}])
2026-03-25 05:27:27.449404 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__fe6f3167ab81d5784c37329f8a3bb9b2d91cf741'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__fe6f3167ab81d5784c37329f8a3bb9b2d91cf741'}])
2026-03-25 05:27:27.449418 | orchestrator |
2026-03-25 05:27:27.449431 | orchestrator | TASK [Start ceph mgr] **********************************************************
2026-03-25 05:27:27.449443 | orchestrator | Wednesday 25 March 2026 05:26:50 +0000 (0:00:09.572) 0:19:07.790 *******
2026-03-25 05:27:27.449454 | orchestrator | changed: [testbed-node-2]
2026-03-25 05:27:27.449467 | orchestrator |
2026-03-25 05:27:27.449478 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-03-25 05:27:27.449489 | orchestrator | Wednesday 25 March 2026 05:26:52 +0000 (0:00:02.197) 0:19:09.988 *******
2026-03-25 05:27:27.449500 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-25 05:27:27.449511 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-1)
2026-03-25 05:27:27.449545 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-2)
2026-03-25 05:27:27.449557 | orchestrator |
2026-03-25 05:27:27.449568 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-03-25 05:27:27.449579 | orchestrator | Wednesday 25 March 2026 05:26:54 +0000 (0:00:01.877) 0:19:11.865 *******
2026-03-25 05:27:27.449590 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-03-25 05:27:27.449601 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-03-25 05:27:27.449612 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-03-25 05:27:27.449622 | orchestrator | skipping: [testbed-node-2]
2026-03-25 05:27:27.449633 | orchestrator |
2026-03-25 05:27:27.449644 | orchestrator | TASK [Non container | waiting for the monitor to join the quorum...] ***********
2026-03-25 05:27:27.449655 | orchestrator | Wednesday 25 March 2026 05:26:55 +0000 (0:00:01.141) 0:19:13.007 *******
2026-03-25 05:27:27.449666 | orchestrator | skipping: [testbed-node-2]
2026-03-25 05:27:27.449677 | orchestrator |
2026-03-25 05:27:27.449690 | orchestrator | TASK [Container | waiting for the containerized monitor to join the quorum...] ***
2026-03-25 05:27:27.449703 | orchestrator | Wednesday 25 March 2026 05:26:56 +0000 (0:00:00.812) 0:19:13.820 *******
2026-03-25 05:27:27.449715 | orchestrator | ok: [testbed-node-2]
2026-03-25 05:27:27.449729 | orchestrator |
2026-03-25 05:27:27.449742 | orchestrator | PLAY [Reset mon_host] **********************************************************
2026-03-25 05:27:27.449754 | orchestrator |
2026-03-25 05:27:27.449766 | orchestrator | TASK [Reset mon_host fact] *****************************************************
2026-03-25 05:27:27.449778 | orchestrator | Wednesday 25 March 2026 05:27:00 +0000 (0:00:03.350) 0:19:17.170 *******
2026-03-25 05:27:27.449790 | orchestrator | ok: [testbed-node-0]
2026-03-25 05:27:27.449802 | orchestrator | ok: [testbed-node-1]
2026-03-25 05:27:27.449814 | orchestrator | ok: [testbed-node-2]
2026-03-25 05:27:27.449826 | orchestrator |
2026-03-25 05:27:27.449838 | orchestrator | PLAY [Upgrade ceph mgr nodes when implicitly collocated on monitors] ***********
2026-03-25 05:27:27.449850 | orchestrator |
2026-03-25 05:27:27.449862 | orchestrator | TASK [Stop ceph mgr] ***********************************************************
2026-03-25 05:27:27.449873 | orchestrator | Wednesday 25 March 2026 05:27:01 +0000 (0:00:01.637) 0:19:18.808 *******
2026-03-25 05:27:27.449886 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:27:27.449898 | orchestrator |
2026-03-25 05:27:27.449910 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-03-25 05:27:27.449941 | orchestrator | Wednesday 25 March 2026 05:27:02 +0000 (0:00:01.187) 0:19:19.996 *******
2026-03-25 05:27:27.449954 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:27:27.449967 | orchestrator |
2026-03-25 05:27:27.449979 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-03-25 05:27:27.449992 | orchestrator | Wednesday 25 March 2026 05:27:04 +0000 (0:00:01.134) 0:19:21.130 *******
2026-03-25 05:27:27.450004 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:27:27.450070 | orchestrator |
2026-03-25 05:27:27.450083 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-03-25 05:27:27.450094 | orchestrator | Wednesday 25 March 2026 05:27:05 +0000 (0:00:01.113) 0:19:22.244 *******
2026-03-25 05:27:27.450105 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:27:27.450116 | orchestrator |
2026-03-25 05:27:27.450127 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-03-25 05:27:27.450138 | orchestrator | Wednesday 25 March 2026 05:27:06 +0000 (0:00:01.168) 0:19:23.413 *******
2026-03-25 05:27:27.450148 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:27:27.450160 | orchestrator |
2026-03-25 05:27:27.450171 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-03-25 05:27:27.450181 | orchestrator | Wednesday 25 March 2026 05:27:07 +0000 (0:00:01.171) 0:19:24.584 *******
2026-03-25 05:27:27.450192 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:27:27.450203 | orchestrator |
2026-03-25 05:27:27.450214 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-03-25 05:27:27.450273 | orchestrator | Wednesday 25 March 2026 05:27:08 +0000 (0:00:01.147) 0:19:25.732 *******
2026-03-25 05:27:27.450294 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:27:27.450312 | orchestrator |
2026-03-25 05:27:27.450324 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-03-25 05:27:27.450334 | orchestrator | Wednesday 25 March 2026 05:27:09 +0000 (0:00:01.238) 0:19:26.971 *******
2026-03-25 05:27:27.450345 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:27:27.450356 | orchestrator |
2026-03-25 05:27:27.450367 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-03-25 05:27:27.450377 | orchestrator | Wednesday 25 March 2026 05:27:11 +0000 (0:00:01.186) 0:19:28.158 *******
2026-03-25 05:27:27.450388 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:27:27.450399 | orchestrator |
2026-03-25 05:27:27.450410 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-03-25 05:27:27.450420 | orchestrator | Wednesday 25 March 2026 05:27:12 +0000 (0:00:01.186) 0:19:29.344 *******
2026-03-25 05:27:27.450431 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:27:27.450442 | orchestrator |
2026-03-25 05:27:27.450459 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-03-25 05:27:27.450470 | orchestrator | Wednesday 25 March 2026 05:27:13 +0000 (0:00:01.204) 0:19:30.549 *******
2026-03-25 05:27:27.450481 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:27:27.450492 | orchestrator |
2026-03-25 05:27:27.450502 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-03-25 05:27:27.450513 | orchestrator | Wednesday 25 March 2026 05:27:14 +0000 (0:00:01.171) 0:19:31.721 *******
2026-03-25 05:27:27.450524 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:27:27.450534 | orchestrator |
2026-03-25 05:27:27.450545 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-03-25 05:27:27.450556 | orchestrator | Wednesday 25 March 2026 05:27:15 +0000 (0:00:01.161) 0:19:32.883 *******
2026-03-25 05:27:27.450566 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:27:27.450577 | orchestrator |
2026-03-25 05:27:27.450588 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-03-25 05:27:27.450598 | orchestrator | Wednesday 25 March 2026 05:27:16 +0000 (0:00:01.125) 0:19:34.009 *******
2026-03-25 05:27:27.450631 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:27:27.450642 | orchestrator |
2026-03-25 05:27:27.450653 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-03-25 05:27:27.450664 | orchestrator | Wednesday 25 March 2026 05:27:18 +0000 (0:00:01.153) 0:19:35.163 *******
2026-03-25 05:27:27.450675 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:27:27.450686 | orchestrator |
2026-03-25 05:27:27.450696 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-03-25 05:27:27.450707 | orchestrator | Wednesday 25 March 2026 05:27:19 +0000 (0:00:01.167) 0:19:36.330 *******
2026-03-25 05:27:27.450718 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:27:27.450729 | orchestrator |
2026-03-25 05:27:27.450739 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-03-25 05:27:27.450750 | orchestrator | Wednesday 25 March 2026 05:27:20 +0000 (0:00:01.179) 0:19:37.509 *******
2026-03-25 05:27:27.450761 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:27:27.450772 | orchestrator |
2026-03-25 05:27:27.450782 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-03-25 05:27:27.450793 | orchestrator | Wednesday 25 March 2026 05:27:21 +0000 (0:00:01.150) 0:19:38.660 *******
2026-03-25 05:27:27.450804 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:27:27.450815 | orchestrator |
2026-03-25 05:27:27.450826 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-03-25 05:27:27.450836 | orchestrator | Wednesday 25 March 2026 05:27:22 +0000 (0:00:01.155) 0:19:39.815 *******
2026-03-25 05:27:27.450847 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:27:27.450858 | orchestrator |
2026-03-25 05:27:27.450876 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-03-25 05:27:27.450887 | orchestrator | Wednesday 25 March 2026 05:27:23 +0000 (0:00:01.131) 0:19:40.947 *******
2026-03-25 05:27:27.450897 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:27:27.450908 | orchestrator |
2026-03-25 05:27:27.450919 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-03-25 05:27:27.450930 | orchestrator | Wednesday 25 March 2026 05:27:25 +0000 (0:00:01.136) 0:19:42.084 *******
2026-03-25 05:27:27.450940 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:27:27.450951 | orchestrator |
2026-03-25 05:27:27.450962 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-03-25 05:27:27.450972 | orchestrator | Wednesday 25 March 2026 05:27:26 +0000 (0:00:01.185) 0:19:43.269 *******
2026-03-25 05:27:27.450992 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:28:12.260232 | orchestrator |
2026-03-25 05:28:12.260398 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-03-25 05:28:12.260425 | orchestrator | Wednesday 25 March 2026 05:27:27 +0000 (0:00:01.185) 0:19:44.455 *******
2026-03-25 05:28:12.260444 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:28:12.260464 | orchestrator |
2026-03-25 05:28:12.260482 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-03-25 05:28:12.260500 | orchestrator | Wednesday 25 March 2026 05:27:28 +0000 (0:00:01.175) 0:19:45.630 *******
2026-03-25 05:28:12.260518 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:28:12.260535 | orchestrator |
2026-03-25 05:28:12.260553 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-03-25 05:28:12.260572 | orchestrator | Wednesday 25 March 2026 05:27:29 +0000 (0:00:01.136) 0:19:46.767 *******
2026-03-25 05:28:12.260590 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:28:12.260645 | orchestrator |
2026-03-25 05:28:12.260665 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-03-25 05:28:12.260685 | orchestrator | Wednesday 25 March 2026 05:27:30 +0000 (0:00:01.210) 0:19:47.977 *******
2026-03-25 05:28:12.260703 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:28:12.260725 | orchestrator |
2026-03-25 05:28:12.260744 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-03-25 05:28:12.260764 | orchestrator | Wednesday 25 March 2026 05:27:32 +0000 (0:00:01.115) 0:19:49.093 *******
2026-03-25 05:28:12.260785 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:28:12.260805 | orchestrator |
2026-03-25 05:28:12.260826 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-03-25 05:28:12.260846 | orchestrator | Wednesday 25 March 2026 05:27:33 +0000 (0:00:01.146) 0:19:50.239 *******
2026-03-25 05:28:12.260867 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:28:12.260887 | orchestrator |
2026-03-25 05:28:12.260906 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-03-25 05:28:12.260927 | orchestrator | Wednesday 25 March 2026 05:27:34 +0000 (0:00:01.122) 0:19:51.362 *******
2026-03-25 05:28:12.260946 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:28:12.260966 | orchestrator |
2026-03-25 05:28:12.260985 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-03-25 05:28:12.261006 | orchestrator | Wednesday 25 March 2026 05:27:35 +0000 (0:00:01.140) 0:19:52.502 *******
2026-03-25 05:28:12.261025 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:28:12.261046 | orchestrator |
2026-03-25 05:28:12.261086 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-03-25 05:28:12.261106 | orchestrator | Wednesday 25 March 2026 05:27:36 +0000 (0:00:01.149) 0:19:53.652 *******
2026-03-25 05:28:12.261125 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:28:12.261144 | orchestrator |
2026-03-25 05:28:12.261162 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-03-25 05:28:12.261181 | orchestrator | Wednesday 25 March 2026 05:27:37 +0000 (0:00:01.142) 0:19:54.794 *******
2026-03-25 05:28:12.261198 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:28:12.261249 | orchestrator |
2026-03-25 05:28:12.261318 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-03-25 05:28:12.261337 | orchestrator | Wednesday 25 March 2026 05:27:38 +0000 (0:00:01.138) 0:19:55.932 *******
2026-03-25 05:28:12.261355 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:28:12.261373 | orchestrator |
2026-03-25 05:28:12.261390 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-03-25 05:28:12.261408 | orchestrator | Wednesday 25 March 2026 05:27:40 +0000 (0:00:01.190) 0:19:57.122 *******
2026-03-25 05:28:12.261425 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:28:12.261443 | orchestrator |
2026-03-25 05:28:12.261460 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-03-25 05:28:12.261477 | orchestrator | Wednesday 25 March 2026 05:27:41 +0000 (0:00:01.156) 0:19:58.279 *******
2026-03-25 05:28:12.261496 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:28:12.261514 | orchestrator |
2026-03-25 05:28:12.261532 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-03-25 05:28:12.261550 | orchestrator | Wednesday 25 March 2026 05:27:42 +0000 (0:00:01.131) 0:19:59.410 *******
2026-03-25 05:28:12.261568 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:28:12.261586 | orchestrator |
2026-03-25 05:28:12.261603 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-03-25 05:28:12.261622 | orchestrator | Wednesday 25 March 2026 05:27:43 +0000 (0:00:01.174) 0:20:00.585 *******
2026-03-25 05:28:12.261640 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:28:12.261657 | orchestrator |
2026-03-25 05:28:12.261675 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-03-25 05:28:12.261693 | orchestrator | Wednesday 25 March 2026 05:27:44 +0000 (0:00:01.154) 0:20:01.740 *******
2026-03-25 05:28:12.261711 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:28:12.261729 | orchestrator |
2026-03-25 05:28:12.261747 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-03-25 05:28:12.261764 | orchestrator | Wednesday 25 March 2026 05:27:45 +0000 (0:00:01.163) 0:20:02.903 *******
2026-03-25 05:28:12.261782 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:28:12.261800 | orchestrator |
2026-03-25 05:28:12.261818 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-03-25 05:28:12.261838 | orchestrator | Wednesday 25 March 2026 05:27:47 +0000 (0:00:01.228) 0:20:04.132 *******
2026-03-25 05:28:12.261856 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:28:12.261874 | orchestrator |
2026-03-25 05:28:12.261892 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-03-25 05:28:12.261910 | orchestrator | Wednesday 25 March 2026 05:27:48 +0000 (0:00:01.182) 0:20:05.314 *******
2026-03-25 05:28:12.261928 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:28:12.261945 | orchestrator |
2026-03-25 05:28:12.261963 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-03-25 05:28:12.261981 | orchestrator | Wednesday 25 March 2026 05:27:49 +0000 (0:00:01.245) 0:20:06.560 *******
2026-03-25 05:28:12.262107 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:28:12.262132 | orchestrator |
2026-03-25 05:28:12.262152 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-03-25 05:28:12.262171 | orchestrator | Wednesday 25 March 2026 05:27:50 +0000 (0:00:01.175) 0:20:07.735 *******
2026-03-25 05:28:12.262191 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:28:12.262210 | orchestrator |
2026-03-25 05:28:12.262230 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-03-25 05:28:12.262270 | orchestrator | Wednesday 25 March 2026 05:27:51 +0000 (0:00:01.199) 0:20:08.935 *******
2026-03-25 05:28:12.262291 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:28:12.262309 | orchestrator |
2026-03-25 05:28:12.262327 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-03-25 05:28:12.262345 | orchestrator | Wednesday 25 March 2026 05:27:53 +0000 (0:00:01.122) 0:20:10.057 *******
2026-03-25 05:28:12.262378 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:28:12.262397 | orchestrator |
2026-03-25 05:28:12.262414 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-03-25 05:28:12.262433 | orchestrator | Wednesday 25 March 2026 05:27:54 +0000 (0:00:01.159) 0:20:11.217 *******
2026-03-25 05:28:12.262451 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:28:12.262469 | orchestrator |
2026-03-25 05:28:12.262486 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-03-25 05:28:12.262504 | orchestrator | Wednesday 25 March 2026 05:27:55 +0000 (0:00:01.267) 0:20:12.485 *******
2026-03-25 05:28:12.262522 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:28:12.262540 | orchestrator |
2026-03-25 05:28:12.262558 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-03-25 05:28:12.262576 | orchestrator | Wednesday 25 March 2026 05:27:56 +0000 (0:00:01.145) 0:20:13.631 *******
2026-03-25 05:28:12.262594 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:28:12.262612 | orchestrator |
2026-03-25 05:28:12.262630 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-03-25 05:28:12.262648 | orchestrator | Wednesday 25 March 2026 05:27:57 +0000 (0:00:01.220) 0:20:14.851 *******
2026-03-25 05:28:12.262666 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:28:12.262684 | orchestrator |
2026-03-25 05:28:12.262701 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-03-25 05:28:12.262720 | orchestrator | Wednesday 25 March 2026 05:27:58 +0000 (0:00:01.151) 0:20:16.003 *******
2026-03-25 05:28:12.262738 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:28:12.262756 | orchestrator |
2026-03-25 05:28:12.262787 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-03-25 05:28:12.262807 | orchestrator | Wednesday 25 March 2026 05:28:00 +0000 (0:00:01.153) 0:20:17.157 *******
2026-03-25 05:28:12.262826 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:28:12.262845 | orchestrator |
2026-03-25 05:28:12.262863 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-03-25 05:28:12.262882 | orchestrator | Wednesday 25 March 2026 05:28:01 +0000 (0:00:01.213) 0:20:18.370 *******
2026-03-25 05:28:12.262900 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:28:12.262918 | orchestrator |
2026-03-25 05:28:12.262935 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-03-25 05:28:12.262953 | orchestrator | Wednesday 25 March 2026 05:28:02 +0000 (0:00:01.164) 0:20:19.535 *******
2026-03-25 05:28:12.262971 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:28:12.262989 | orchestrator |
2026-03-25 05:28:12.263007 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-03-25 05:28:12.263025 | orchestrator | Wednesday 25 March 2026 05:28:03 +0000 (0:00:01.141) 0:20:20.676 *******
2026-03-25 05:28:12.263042 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:28:12.263060 | orchestrator |
2026-03-25 05:28:12.263078 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-03-25 05:28:12.263096 | orchestrator | Wednesday 25 March 2026 05:28:04 +0000 (0:00:01.202) 0:20:21.879 *******
2026-03-25 05:28:12.263114 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-03-25 05:28:12.263132 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-03-25 05:28:12.263150 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-03-25 05:28:12.263169 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:28:12.263187 | orchestrator |
2026-03-25 05:28:12.263207 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-03-25 05:28:12.263227 | orchestrator | Wednesday 25 March 2026 05:28:06 +0000 (0:00:01.893) 0:20:23.772 *******
2026-03-25 05:28:12.263245 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-03-25 05:28:12.263305 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-03-25 05:28:12.263323 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-03-25 05:28:12.263352 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:28:12.263370 | orchestrator |
2026-03-25 05:28:12.263387 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-03-25 05:28:12.263404 | orchestrator | Wednesday 25 March 2026 05:28:08 +0000 (0:00:01.496) 0:20:25.269 *******
2026-03-25 05:28:12.263422 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-03-25 05:28:12.263439 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-03-25 05:28:12.263456 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-03-25 05:28:12.263473 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:28:12.263490 | orchestrator |
2026-03-25 05:28:12.263507 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-03-25 05:28:12.263524 | orchestrator | Wednesday 25 March 2026 05:28:09 +0000 (0:00:01.611) 0:20:26.881 *******
2026-03-25 05:28:12.263541 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:28:12.263558 | orchestrator |
2026-03-25 05:28:12.263576 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-03-25 05:28:12.263594 | orchestrator | Wednesday 25 March 2026 05:28:10 +0000 (0:00:01.119) 0:20:28.000 *******
2026-03-25 05:28:12.263612 | orchestrator | skipping: [testbed-node-0] => (item=0)
2026-03-25 05:28:12.263648 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:28:46.092729 | orchestrator |
2026-03-25 05:28:46.092846 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-03-25 05:28:46.092863 | orchestrator | Wednesday 25 March 2026 05:28:12 +0000 (0:00:01.261) 0:20:29.262 *******
2026-03-25 05:28:46.092876 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:28:46.092889 | orchestrator |
2026-03-25 05:28:46.092901 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] **********************************
2026-03-25 05:28:46.092912 | orchestrator | Wednesday 25 March 2026 05:28:13 +0000 (0:00:01.161) 0:20:30.424 *******
2026-03-25 05:28:46.092923 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-03-25 05:28:46.092934 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-03-25 05:28:46.092945 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-03-25 05:28:46.092956 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:28:46.092966 | orchestrator |
2026-03-25 05:28:46.092977 | orchestrator | TASK [ceph-mgr : Include common.yml] *******************************************
2026-03-25 05:28:46.092988 | orchestrator | Wednesday 25 March 2026 05:28:14 +0000 (0:00:01.446) 0:20:31.871 *******
2026-03-25 05:28:46.092999 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:28:46.093009 | orchestrator |
2026-03-25 05:28:46.093020 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************
2026-03-25 05:28:46.093031 | orchestrator | Wednesday 25 March 2026 05:28:16 +0000 (0:00:01.179) 0:20:33.050 *******
2026-03-25 05:28:46.093041 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:28:46.093052 | orchestrator |
2026-03-25 05:28:46.093063 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] ****************************************
2026-03-25 05:28:46.093074 | orchestrator | Wednesday 25 March 2026 05:28:17 +0000 (0:00:01.161) 0:20:34.211 *******
2026-03-25 05:28:46.093084 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:28:46.093095 | orchestrator |
2026-03-25 05:28:46.093106 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] **************************************
2026-03-25 05:28:46.093117 | orchestrator | Wednesday 25 March 2026 05:28:18 +0000 (0:00:01.218) 0:20:35.430 *******
2026-03-25 05:28:46.093128 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:28:46.093139 | orchestrator |
2026-03-25 05:28:46.093150 | orchestrator | PLAY [Upgrade ceph mgr nodes when implicitly collocated on monitors] ***********
2026-03-25 05:28:46.093161 | orchestrator |
2026-03-25 05:28:46.093172 | orchestrator | TASK [Stop ceph mgr] ***********************************************************
2026-03-25 05:28:46.093183 | orchestrator | Wednesday 25 March 2026 05:28:19 +0000 (0:00:01.018) 0:20:36.449 *******
2026-03-25 05:28:46.093209 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:28:46.093243 | orchestrator |
2026-03-25 05:28:46.093255 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-03-25 05:28:46.093267 | orchestrator | Wednesday 25 March 2026 05:28:20 +0000 (0:00:00.881) 0:20:37.330 *******
2026-03-25 05:28:46.093308 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:28:46.093321 | orchestrator |
2026-03-25 05:28:46.093333 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-03-25 05:28:46.093346 | orchestrator | Wednesday 25 March 2026 05:28:21 +0000 (0:00:00.775) 0:20:38.106 *******
2026-03-25 05:28:46.093358 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:28:46.093371 | orchestrator |
2026-03-25 05:28:46.093383 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-03-25 05:28:46.093395 | orchestrator | Wednesday 25 March 2026 05:28:21 +0000 (0:00:00.771) 0:20:38.877 *******
2026-03-25 05:28:46.093407 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:28:46.093420 | orchestrator |
2026-03-25 05:28:46.093432 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-03-25 05:28:46.093445 | orchestrator | Wednesday 25 March 2026 05:28:22 +0000 (0:00:00.792) 0:20:39.670 *******
2026-03-25 05:28:46.093458 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:28:46.093470 | orchestrator |
2026-03-25 05:28:46.093481 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-03-25 05:28:46.093492 | orchestrator | Wednesday 25 March 2026 05:28:23 +0000 (0:00:00.818) 0:20:40.489 *******
2026-03-25 05:28:46.093503 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:28:46.093513 | orchestrator |
2026-03-25 05:28:46.093524 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-03-25 05:28:46.093535 | orchestrator | Wednesday 25 March 2026 05:28:24 +0000 (0:00:00.831) 0:20:41.321 *******
2026-03-25 05:28:46.093545 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:28:46.093556 | orchestrator |
2026-03-25 05:28:46.093567 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-03-25 05:28:46.093577 | orchestrator | Wednesday 25 March 2026 05:28:25 +0000 (0:00:00.777) 0:20:42.098 *******
2026-03-25 05:28:46.093588 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:28:46.093599 | orchestrator |
2026-03-25 05:28:46.093609 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-03-25 05:28:46.093620 | orchestrator | Wednesday 25 March 2026 05:28:25 +0000 (0:00:00.825) 0:20:42.924 *******
2026-03-25 05:28:46.093630 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:28:46.093641 | orchestrator |
2026-03-25 05:28:46.093652 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-03-25 05:28:46.093662 | orchestrator | Wednesday 25 March 2026 05:28:26 +0000 (0:00:00.794) 0:20:43.719 *******
2026-03-25 05:28:46.093673 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:28:46.093684 | orchestrator |
2026-03-25 05:28:46.093694 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-03-25 05:28:46.093705 | orchestrator | Wednesday 25 March 2026 05:28:27 +0000 (0:00:00.797) 0:20:44.516 *******
2026-03-25 05:28:46.093715 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:28:46.093726 | orchestrator |
2026-03-25 05:28:46.093737 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-03-25 05:28:46.093747 | orchestrator | Wednesday 25 March 2026 05:28:28 +0000 (0:00:00.803) 0:20:45.320 ******* 2026-03-25 05:28:46.093758 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:28:46.093768 | orchestrator | 2026-03-25 05:28:46.093780 | orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-03-25 05:28:46.093790 | orchestrator | Wednesday 25 March 2026 05:28:29 +0000 (0:00:00.915) 0:20:46.235 ******* 2026-03-25 05:28:46.093801 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:28:46.093812 | orchestrator | 2026-03-25 05:28:46.093840 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-03-25 05:28:46.093852 | orchestrator | Wednesday 25 March 2026 05:28:30 +0000 (0:00:00.802) 0:20:47.038 ******* 2026-03-25 05:28:46.093862 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:28:46.093881 | orchestrator | 2026-03-25 05:28:46.093892 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-03-25 05:28:46.093903 | orchestrator | Wednesday 25 March 2026 05:28:30 +0000 (0:00:00.791) 0:20:47.829 ******* 2026-03-25 05:28:46.093914 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:28:46.093925 | orchestrator | 2026-03-25 05:28:46.093936 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-03-25 05:28:46.093946 | orchestrator | Wednesday 25 March 2026 05:28:31 +0000 (0:00:00.768) 0:20:48.598 ******* 2026-03-25 05:28:46.093957 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:28:46.093968 | orchestrator | 2026-03-25 05:28:46.093979 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-03-25 05:28:46.093990 | orchestrator | Wednesday 25 March 2026 05:28:32 +0000 (0:00:00.795) 0:20:49.394 ******* 2026-03-25 05:28:46.094000 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:28:46.094011 
| orchestrator | 2026-03-25 05:28:46.094088 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-03-25 05:28:46.094100 | orchestrator | Wednesday 25 March 2026 05:28:33 +0000 (0:00:00.784) 0:20:50.178 ******* 2026-03-25 05:28:46.094111 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:28:46.094121 | orchestrator | 2026-03-25 05:28:46.094132 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-03-25 05:28:46.094143 | orchestrator | Wednesday 25 March 2026 05:28:33 +0000 (0:00:00.791) 0:20:50.970 ******* 2026-03-25 05:28:46.094154 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:28:46.094164 | orchestrator | 2026-03-25 05:28:46.094175 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-03-25 05:28:46.094187 | orchestrator | Wednesday 25 March 2026 05:28:34 +0000 (0:00:00.802) 0:20:51.773 ******* 2026-03-25 05:28:46.094197 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:28:46.094208 | orchestrator | 2026-03-25 05:28:46.094219 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-03-25 05:28:46.094230 | orchestrator | Wednesday 25 March 2026 05:28:35 +0000 (0:00:00.820) 0:20:52.594 ******* 2026-03-25 05:28:46.094241 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:28:46.094251 | orchestrator | 2026-03-25 05:28:46.094268 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-03-25 05:28:46.094296 | orchestrator | Wednesday 25 March 2026 05:28:36 +0000 (0:00:00.778) 0:20:53.373 ******* 2026-03-25 05:28:46.094307 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:28:46.094318 | orchestrator | 2026-03-25 05:28:46.094329 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-03-25 05:28:46.094339 | orchestrator | Wednesday 25 
March 2026 05:28:37 +0000 (0:00:00.835) 0:20:54.208 ******* 2026-03-25 05:28:46.094350 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:28:46.094361 | orchestrator | 2026-03-25 05:28:46.094372 | orchestrator | TASK [ceph-common : Include selinux.yml] *************************************** 2026-03-25 05:28:46.094382 | orchestrator | Wednesday 25 March 2026 05:28:37 +0000 (0:00:00.798) 0:20:55.007 ******* 2026-03-25 05:28:46.094393 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:28:46.094404 | orchestrator | 2026-03-25 05:28:46.094414 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-03-25 05:28:46.094425 | orchestrator | Wednesday 25 March 2026 05:28:38 +0000 (0:00:00.877) 0:20:55.884 ******* 2026-03-25 05:28:46.094436 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:28:46.094446 | orchestrator | 2026-03-25 05:28:46.094457 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-03-25 05:28:46.094468 | orchestrator | Wednesday 25 March 2026 05:28:39 +0000 (0:00:00.861) 0:20:56.746 ******* 2026-03-25 05:28:46.094478 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:28:46.094489 | orchestrator | 2026-03-25 05:28:46.094500 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-03-25 05:28:46.094511 | orchestrator | Wednesday 25 March 2026 05:28:40 +0000 (0:00:00.797) 0:20:57.543 ******* 2026-03-25 05:28:46.094531 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:28:46.094542 | orchestrator | 2026-03-25 05:28:46.094552 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-03-25 05:28:46.094563 | orchestrator | Wednesday 25 March 2026 05:28:41 +0000 (0:00:00.782) 0:20:58.326 ******* 2026-03-25 05:28:46.094574 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:28:46.094584 | orchestrator | 2026-03-25 05:28:46.094595 | 
orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-03-25 05:28:46.094606 | orchestrator | Wednesday 25 March 2026 05:28:42 +0000 (0:00:00.787) 0:20:59.113 ******* 2026-03-25 05:28:46.094616 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:28:46.094627 | orchestrator | 2026-03-25 05:28:46.094638 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-03-25 05:28:46.094648 | orchestrator | Wednesday 25 March 2026 05:28:42 +0000 (0:00:00.808) 0:20:59.922 ******* 2026-03-25 05:28:46.094659 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:28:46.094670 | orchestrator | 2026-03-25 05:28:46.094680 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-03-25 05:28:46.094691 | orchestrator | Wednesday 25 March 2026 05:28:43 +0000 (0:00:00.808) 0:21:00.730 ******* 2026-03-25 05:28:46.094702 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:28:46.094713 | orchestrator | 2026-03-25 05:28:46.094723 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-03-25 05:28:46.094734 | orchestrator | Wednesday 25 March 2026 05:28:44 +0000 (0:00:00.760) 0:21:01.491 ******* 2026-03-25 05:28:46.094744 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:28:46.094755 | orchestrator | 2026-03-25 05:28:46.094766 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-03-25 05:28:46.094776 | orchestrator | Wednesday 25 March 2026 05:28:45 +0000 (0:00:00.796) 0:21:02.288 ******* 2026-03-25 05:28:46.094787 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:28:46.094798 | orchestrator | 2026-03-25 05:28:46.094817 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-03-25 05:29:17.018554 | orchestrator | Wednesday 25 March 2026 05:28:46 +0000 (0:00:00.805) 0:21:03.094 ******* 
2026-03-25 05:29:17.018685 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:29:17.018713 | orchestrator | 2026-03-25 05:29:17.018728 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-03-25 05:29:17.018740 | orchestrator | Wednesday 25 March 2026 05:28:46 +0000 (0:00:00.808) 0:21:03.902 ******* 2026-03-25 05:29:17.018751 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:29:17.018762 | orchestrator | 2026-03-25 05:29:17.018773 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-03-25 05:29:17.018784 | orchestrator | Wednesday 25 March 2026 05:28:47 +0000 (0:00:00.791) 0:21:04.693 ******* 2026-03-25 05:29:17.018795 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:29:17.018805 | orchestrator | 2026-03-25 05:29:17.018816 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-03-25 05:29:17.018827 | orchestrator | Wednesday 25 March 2026 05:28:48 +0000 (0:00:00.802) 0:21:05.496 ******* 2026-03-25 05:29:17.018838 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:29:17.018849 | orchestrator | 2026-03-25 05:29:17.018860 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-03-25 05:29:17.018870 | orchestrator | Wednesday 25 March 2026 05:28:49 +0000 (0:00:00.784) 0:21:06.280 ******* 2026-03-25 05:29:17.018881 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:29:17.018892 | orchestrator | 2026-03-25 05:29:17.018903 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-03-25 05:29:17.018914 | orchestrator | Wednesday 25 March 2026 05:28:50 +0000 (0:00:00.797) 0:21:07.077 ******* 2026-03-25 05:29:17.018924 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:29:17.018935 | orchestrator | 2026-03-25 05:29:17.018946 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch 
--report' to see how many osds are to be created] *** 2026-03-25 05:29:17.018958 | orchestrator | Wednesday 25 March 2026 05:28:50 +0000 (0:00:00.821) 0:21:07.899 ******* 2026-03-25 05:29:17.018994 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:29:17.019005 | orchestrator | 2026-03-25 05:29:17.019017 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-03-25 05:29:17.019027 | orchestrator | Wednesday 25 March 2026 05:28:51 +0000 (0:00:00.788) 0:21:08.688 ******* 2026-03-25 05:29:17.019038 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:29:17.019049 | orchestrator | 2026-03-25 05:29:17.019075 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-03-25 05:29:17.019090 | orchestrator | Wednesday 25 March 2026 05:28:52 +0000 (0:00:00.814) 0:21:09.502 ******* 2026-03-25 05:29:17.019102 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:29:17.019115 | orchestrator | 2026-03-25 05:29:17.019128 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-03-25 05:29:17.019140 | orchestrator | Wednesday 25 March 2026 05:28:53 +0000 (0:00:00.794) 0:21:10.296 ******* 2026-03-25 05:29:17.019153 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:29:17.019165 | orchestrator | 2026-03-25 05:29:17.019178 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-03-25 05:29:17.019190 | orchestrator | Wednesday 25 March 2026 05:28:54 +0000 (0:00:00.805) 0:21:11.102 ******* 2026-03-25 05:29:17.019202 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:29:17.019215 | orchestrator | 2026-03-25 05:29:17.019229 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-03-25 05:29:17.019242 | orchestrator | Wednesday 25 March 2026 05:28:54 +0000 
(0:00:00.811) 0:21:11.913 ******* 2026-03-25 05:29:17.019254 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:29:17.019267 | orchestrator | 2026-03-25 05:29:17.019279 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-03-25 05:29:17.019292 | orchestrator | Wednesday 25 March 2026 05:28:55 +0000 (0:00:00.823) 0:21:12.737 ******* 2026-03-25 05:29:17.019326 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:29:17.019339 | orchestrator | 2026-03-25 05:29:17.019352 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-03-25 05:29:17.019427 | orchestrator | Wednesday 25 March 2026 05:28:56 +0000 (0:00:00.880) 0:21:13.618 ******* 2026-03-25 05:29:17.019441 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:29:17.019452 | orchestrator | 2026-03-25 05:29:17.019463 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-03-25 05:29:17.019474 | orchestrator | Wednesday 25 March 2026 05:28:57 +0000 (0:00:00.774) 0:21:14.393 ******* 2026-03-25 05:29:17.019484 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:29:17.019495 | orchestrator | 2026-03-25 05:29:17.019506 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-03-25 05:29:17.019517 | orchestrator | Wednesday 25 March 2026 05:28:58 +0000 (0:00:00.895) 0:21:15.289 ******* 2026-03-25 05:29:17.019527 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:29:17.019538 | orchestrator | 2026-03-25 05:29:17.019549 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-03-25 05:29:17.019560 | orchestrator | Wednesday 25 March 2026 05:28:59 +0000 (0:00:00.801) 0:21:16.090 ******* 2026-03-25 05:29:17.019570 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:29:17.019581 | orchestrator | 2026-03-25 05:29:17.019592 | orchestrator | TASK [ceph-facts : 
Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-25 05:29:17.019604 | orchestrator | Wednesday 25 March 2026 05:28:59 +0000 (0:00:00.797) 0:21:16.888 ******* 2026-03-25 05:29:17.019615 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:29:17.019625 | orchestrator | 2026-03-25 05:29:17.019636 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-03-25 05:29:17.019647 | orchestrator | Wednesday 25 March 2026 05:29:00 +0000 (0:00:00.810) 0:21:17.699 ******* 2026-03-25 05:29:17.019657 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:29:17.019668 | orchestrator | 2026-03-25 05:29:17.019679 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-03-25 05:29:17.019699 | orchestrator | Wednesday 25 March 2026 05:29:01 +0000 (0:00:00.888) 0:21:18.587 ******* 2026-03-25 05:29:17.019710 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:29:17.019721 | orchestrator | 2026-03-25 05:29:17.019751 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-25 05:29:17.019763 | orchestrator | Wednesday 25 March 2026 05:29:02 +0000 (0:00:00.821) 0:21:19.409 ******* 2026-03-25 05:29:17.019773 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:29:17.019784 | orchestrator | 2026-03-25 05:29:17.019796 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-25 05:29:17.019806 | orchestrator | Wednesday 25 March 2026 05:29:03 +0000 (0:00:00.832) 0:21:20.242 ******* 2026-03-25 05:29:17.019817 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2026-03-25 05:29:17.019828 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2026-03-25 05:29:17.019839 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2026-03-25 05:29:17.019850 | orchestrator | 
skipping: [testbed-node-1] 2026-03-25 05:29:17.019860 | orchestrator | 2026-03-25 05:29:17.019871 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-25 05:29:17.019882 | orchestrator | Wednesday 25 March 2026 05:29:04 +0000 (0:00:01.057) 0:21:21.299 ******* 2026-03-25 05:29:17.019893 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2026-03-25 05:29:17.019904 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2026-03-25 05:29:17.019915 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2026-03-25 05:29:17.019925 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:29:17.019936 | orchestrator | 2026-03-25 05:29:17.019947 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-25 05:29:17.019958 | orchestrator | Wednesday 25 March 2026 05:29:05 +0000 (0:00:01.061) 0:21:22.361 ******* 2026-03-25 05:29:17.019969 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2026-03-25 05:29:17.019979 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2026-03-25 05:29:17.019990 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2026-03-25 05:29:17.020001 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:29:17.020012 | orchestrator | 2026-03-25 05:29:17.020023 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-25 05:29:17.020034 | orchestrator | Wednesday 25 March 2026 05:29:06 +0000 (0:00:01.106) 0:21:23.468 ******* 2026-03-25 05:29:17.020044 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:29:17.020055 | orchestrator | 2026-03-25 05:29:17.020072 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-25 05:29:17.020084 | orchestrator | Wednesday 25 March 2026 05:29:07 +0000 (0:00:00.797) 0:21:24.266 ******* 2026-03-25 05:29:17.020095 | 
orchestrator | skipping: [testbed-node-1] => (item=0)  2026-03-25 05:29:17.020106 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:29:17.020117 | orchestrator | 2026-03-25 05:29:17.020128 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-03-25 05:29:17.020138 | orchestrator | Wednesday 25 March 2026 05:29:08 +0000 (0:00:00.934) 0:21:25.201 ******* 2026-03-25 05:29:17.020149 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:29:17.020160 | orchestrator | 2026-03-25 05:29:17.020171 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2026-03-25 05:29:17.020182 | orchestrator | Wednesday 25 March 2026 05:29:09 +0000 (0:00:00.885) 0:21:26.086 ******* 2026-03-25 05:29:17.020193 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-03-25 05:29:17.020203 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-03-25 05:29:17.020214 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-03-25 05:29:17.020225 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:29:17.020236 | orchestrator | 2026-03-25 05:29:17.020247 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2026-03-25 05:29:17.020267 | orchestrator | Wednesday 25 March 2026 05:29:10 +0000 (0:00:01.172) 0:21:27.259 ******* 2026-03-25 05:29:17.020278 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:29:17.020289 | orchestrator | 2026-03-25 05:29:17.020299 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2026-03-25 05:29:17.020328 | orchestrator | Wednesday 25 March 2026 05:29:11 +0000 (0:00:00.835) 0:21:28.095 ******* 2026-03-25 05:29:17.020339 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:29:17.020349 | orchestrator | 2026-03-25 05:29:17.020360 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] 
**************************************** 2026-03-25 05:29:17.020371 | orchestrator | Wednesday 25 March 2026 05:29:11 +0000 (0:00:00.823) 0:21:28.919 ******* 2026-03-25 05:29:17.020382 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:29:17.020393 | orchestrator | 2026-03-25 05:29:17.020404 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] ************************************** 2026-03-25 05:29:17.020415 | orchestrator | Wednesday 25 March 2026 05:29:12 +0000 (0:00:00.792) 0:21:29.711 ******* 2026-03-25 05:29:17.020426 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:29:17.020436 | orchestrator | 2026-03-25 05:29:17.020447 | orchestrator | PLAY [Upgrade ceph mgr nodes when implicitly collocated on monitors] *********** 2026-03-25 05:29:17.020458 | orchestrator | 2026-03-25 05:29:17.020469 | orchestrator | TASK [Stop ceph mgr] *********************************************************** 2026-03-25 05:29:17.020480 | orchestrator | Wednesday 25 March 2026 05:29:13 +0000 (0:00:01.058) 0:21:30.769 ******* 2026-03-25 05:29:17.020490 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:29:17.020501 | orchestrator | 2026-03-25 05:29:17.020512 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-03-25 05:29:17.020523 | orchestrator | Wednesday 25 March 2026 05:29:14 +0000 (0:00:00.810) 0:21:31.580 ******* 2026-03-25 05:29:17.020533 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:29:17.020544 | orchestrator | 2026-03-25 05:29:17.020555 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-25 05:29:17.020566 | orchestrator | Wednesday 25 March 2026 05:29:15 +0000 (0:00:00.858) 0:21:32.438 ******* 2026-03-25 05:29:17.020577 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:29:17.020587 | orchestrator | 2026-03-25 05:29:17.020598 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 
2026-03-25 05:29:17.020609 | orchestrator | Wednesday 25 March 2026 05:29:16 +0000 (0:00:00.822) 0:21:33.260 ******* 2026-03-25 05:29:17.020627 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:29:49.377658 | orchestrator | 2026-03-25 05:29:49.377778 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-25 05:29:49.377795 | orchestrator | Wednesday 25 March 2026 05:29:17 +0000 (0:00:00.762) 0:21:34.023 ******* 2026-03-25 05:29:49.377806 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:29:49.377819 | orchestrator | 2026-03-25 05:29:49.377830 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-25 05:29:49.377841 | orchestrator | Wednesday 25 March 2026 05:29:17 +0000 (0:00:00.791) 0:21:34.814 ******* 2026-03-25 05:29:49.377859 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:29:49.377878 | orchestrator | 2026-03-25 05:29:49.377896 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-25 05:29:49.377914 | orchestrator | Wednesday 25 March 2026 05:29:18 +0000 (0:00:00.774) 0:21:35.589 ******* 2026-03-25 05:29:49.377932 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:29:49.377949 | orchestrator | 2026-03-25 05:29:49.377966 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-25 05:29:49.377982 | orchestrator | Wednesday 25 March 2026 05:29:19 +0000 (0:00:00.809) 0:21:36.398 ******* 2026-03-25 05:29:49.377998 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:29:49.378101 | orchestrator | 2026-03-25 05:29:49.378128 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-25 05:29:49.378148 | orchestrator | Wednesday 25 March 2026 05:29:20 +0000 (0:00:00.809) 0:21:37.208 ******* 2026-03-25 05:29:49.378187 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:29:49.378200 
| orchestrator | 2026-03-25 05:29:49.378212 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-25 05:29:49.378225 | orchestrator | Wednesday 25 March 2026 05:29:21 +0000 (0:00:00.837) 0:21:38.045 ******* 2026-03-25 05:29:49.378237 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:29:49.378249 | orchestrator | 2026-03-25 05:29:49.378261 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-25 05:29:49.378272 | orchestrator | Wednesday 25 March 2026 05:29:21 +0000 (0:00:00.804) 0:21:38.850 ******* 2026-03-25 05:29:49.378283 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:29:49.378294 | orchestrator | 2026-03-25 05:29:49.378304 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-25 05:29:49.378315 | orchestrator | Wednesday 25 March 2026 05:29:22 +0000 (0:00:00.785) 0:21:39.635 ******* 2026-03-25 05:29:49.378326 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:29:49.378406 | orchestrator | 2026-03-25 05:29:49.378434 | orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-03-25 05:29:49.378445 | orchestrator | Wednesday 25 March 2026 05:29:23 +0000 (0:00:00.786) 0:21:40.421 ******* 2026-03-25 05:29:49.378455 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:29:49.378466 | orchestrator | 2026-03-25 05:29:49.378476 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-03-25 05:29:49.378487 | orchestrator | Wednesday 25 March 2026 05:29:24 +0000 (0:00:00.836) 0:21:41.258 ******* 2026-03-25 05:29:49.378498 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:29:49.378508 | orchestrator | 2026-03-25 05:29:49.378519 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-03-25 05:29:49.378530 | orchestrator | Wednesday 25 March 2026 
05:29:25 +0000 (0:00:00.793) 0:21:42.051 ******* 2026-03-25 05:29:49.378541 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:29:49.378551 | orchestrator | 2026-03-25 05:29:49.378562 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-03-25 05:29:49.378573 | orchestrator | Wednesday 25 March 2026 05:29:25 +0000 (0:00:00.802) 0:21:42.853 ******* 2026-03-25 05:29:49.378583 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:29:49.378594 | orchestrator | 2026-03-25 05:29:49.378604 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-03-25 05:29:49.378615 | orchestrator | Wednesday 25 March 2026 05:29:26 +0000 (0:00:00.784) 0:21:43.638 ******* 2026-03-25 05:29:49.378625 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:29:49.378636 | orchestrator | 2026-03-25 05:29:49.378646 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-03-25 05:29:49.378657 | orchestrator | Wednesday 25 March 2026 05:29:27 +0000 (0:00:00.781) 0:21:44.421 ******* 2026-03-25 05:29:49.378667 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:29:49.378678 | orchestrator | 2026-03-25 05:29:49.378688 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-03-25 05:29:49.378699 | orchestrator | Wednesday 25 March 2026 05:29:28 +0000 (0:00:00.839) 0:21:45.260 ******* 2026-03-25 05:29:49.378709 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:29:49.378720 | orchestrator | 2026-03-25 05:29:49.378731 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-03-25 05:29:49.378742 | orchestrator | Wednesday 25 March 2026 05:29:29 +0000 (0:00:00.871) 0:21:46.132 ******* 2026-03-25 05:29:49.378753 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:29:49.378763 | orchestrator | 2026-03-25 05:29:49.378774 | 
orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-03-25 05:29:49.378785 | orchestrator | Wednesday 25 March 2026 05:29:29 +0000 (0:00:00.767) 0:21:46.899 ******* 2026-03-25 05:29:49.378795 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:29:49.378806 | orchestrator | 2026-03-25 05:29:49.378816 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-03-25 05:29:49.378827 | orchestrator | Wednesday 25 March 2026 05:29:30 +0000 (0:00:00.834) 0:21:47.734 ******* 2026-03-25 05:29:49.378847 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:29:49.378858 | orchestrator | 2026-03-25 05:29:49.378869 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-03-25 05:29:49.378879 | orchestrator | Wednesday 25 March 2026 05:29:31 +0000 (0:00:00.787) 0:21:48.521 ******* 2026-03-25 05:29:49.378890 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:29:49.378901 | orchestrator | 2026-03-25 05:29:49.378911 | orchestrator | TASK [ceph-common : Include selinux.yml] *************************************** 2026-03-25 05:29:49.378922 | orchestrator | Wednesday 25 March 2026 05:29:32 +0000 (0:00:00.778) 0:21:49.300 ******* 2026-03-25 05:29:49.378933 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:29:49.378944 | orchestrator | 2026-03-25 05:29:49.378976 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-03-25 05:29:49.378988 | orchestrator | Wednesday 25 March 2026 05:29:33 +0000 (0:00:00.799) 0:21:50.100 ******* 2026-03-25 05:29:49.378999 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:29:49.379009 | orchestrator | 2026-03-25 05:29:49.379020 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-03-25 05:29:49.379031 | orchestrator | Wednesday 25 March 2026 05:29:33 +0000 (0:00:00.791) 0:21:50.891 ******* 
2026-03-25 05:29:49.379042 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:29:49.379052 | orchestrator | 2026-03-25 05:29:49.379063 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-03-25 05:29:49.379073 | orchestrator | Wednesday 25 March 2026 05:29:34 +0000 (0:00:00.778) 0:21:51.670 ******* 2026-03-25 05:29:49.379084 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:29:49.379094 | orchestrator | 2026-03-25 05:29:49.379105 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-03-25 05:29:49.379116 | orchestrator | Wednesday 25 March 2026 05:29:35 +0000 (0:00:00.980) 0:21:52.651 ******* 2026-03-25 05:29:49.379126 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:29:49.379139 | orchestrator | 2026-03-25 05:29:49.379157 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-03-25 05:29:49.379175 | orchestrator | Wednesday 25 March 2026 05:29:36 +0000 (0:00:00.799) 0:21:53.451 ******* 2026-03-25 05:29:49.379194 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:29:49.379212 | orchestrator | 2026-03-25 05:29:49.379229 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-03-25 05:29:49.379248 | orchestrator | Wednesday 25 March 2026 05:29:37 +0000 (0:00:00.804) 0:21:54.255 ******* 2026-03-25 05:29:49.379267 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:29:49.379281 | orchestrator | 2026-03-25 05:29:49.379293 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-03-25 05:29:49.379303 | orchestrator | Wednesday 25 March 2026 05:29:38 +0000 (0:00:00.840) 0:21:55.096 ******* 2026-03-25 05:29:49.379314 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:29:49.379324 | orchestrator | 2026-03-25 05:29:49.379360 | orchestrator | TASK [ceph-container-common : Include release.yml] 
***************************** 2026-03-25 05:29:49.379371 | orchestrator | Wednesday 25 March 2026 05:29:38 +0000 (0:00:00.825) 0:21:55.921 ******* 2026-03-25 05:29:49.379382 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:29:49.379393 | orchestrator | 2026-03-25 05:29:49.379411 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-03-25 05:29:49.379422 | orchestrator | Wednesday 25 March 2026 05:29:39 +0000 (0:00:00.830) 0:21:56.752 ******* 2026-03-25 05:29:49.379433 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:29:49.379444 | orchestrator | 2026-03-25 05:29:49.379454 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-03-25 05:29:49.379465 | orchestrator | Wednesday 25 March 2026 05:29:40 +0000 (0:00:00.786) 0:21:57.539 ******* 2026-03-25 05:29:49.379476 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:29:49.379487 | orchestrator | 2026-03-25 05:29:49.379502 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-03-25 05:29:49.379521 | orchestrator | Wednesday 25 March 2026 05:29:41 +0000 (0:00:00.766) 0:21:58.306 ******* 2026-03-25 05:29:49.379551 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:29:49.379569 | orchestrator | 2026-03-25 05:29:49.379586 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-03-25 05:29:49.379603 | orchestrator | Wednesday 25 March 2026 05:29:42 +0000 (0:00:00.778) 0:21:59.084 ******* 2026-03-25 05:29:49.379620 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:29:49.379637 | orchestrator | 2026-03-25 05:29:49.379654 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-03-25 05:29:49.379673 | orchestrator | Wednesday 25 March 2026 05:29:42 +0000 (0:00:00.815) 0:21:59.900 ******* 2026-03-25 05:29:49.379690 | orchestrator | skipping: 
[testbed-node-2] 2026-03-25 05:29:49.379706 | orchestrator | 2026-03-25 05:29:49.379722 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-03-25 05:29:49.379741 | orchestrator | Wednesday 25 March 2026 05:29:43 +0000 (0:00:00.780) 0:22:00.681 ******* 2026-03-25 05:29:49.379760 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:29:49.379780 | orchestrator | 2026-03-25 05:29:49.379799 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-03-25 05:29:49.379817 | orchestrator | Wednesday 25 March 2026 05:29:44 +0000 (0:00:00.826) 0:22:01.508 ******* 2026-03-25 05:29:49.379833 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:29:49.379845 | orchestrator | 2026-03-25 05:29:49.379856 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-03-25 05:29:49.379868 | orchestrator | Wednesday 25 March 2026 05:29:45 +0000 (0:00:00.834) 0:22:02.343 ******* 2026-03-25 05:29:49.379879 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:29:49.379889 | orchestrator | 2026-03-25 05:29:49.379900 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-03-25 05:29:49.379911 | orchestrator | Wednesday 25 March 2026 05:29:46 +0000 (0:00:00.832) 0:22:03.176 ******* 2026-03-25 05:29:49.379921 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:29:49.379932 | orchestrator | 2026-03-25 05:29:49.379943 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-03-25 05:29:49.379954 | orchestrator | Wednesday 25 March 2026 05:29:46 +0000 (0:00:00.807) 0:22:03.983 ******* 2026-03-25 05:29:49.379964 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:29:49.379975 | orchestrator | 2026-03-25 05:29:49.379986 | orchestrator | TASK [ceph-config : Run 
'ceph-volume lvm list' to see how many osds have already been created] *** 2026-03-25 05:29:49.379997 | orchestrator | Wednesday 25 March 2026 05:29:47 +0000 (0:00:00.836) 0:22:04.820 ******* 2026-03-25 05:29:49.380007 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:29:49.380018 | orchestrator | 2026-03-25 05:29:49.380029 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-03-25 05:29:49.380039 | orchestrator | Wednesday 25 March 2026 05:29:48 +0000 (0:00:00.764) 0:22:05.585 ******* 2026-03-25 05:29:49.380050 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:29:49.380061 | orchestrator | 2026-03-25 05:29:49.380083 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-03-25 05:30:31.769429 | orchestrator | Wednesday 25 March 2026 05:29:49 +0000 (0:00:00.795) 0:22:06.380 ******* 2026-03-25 05:30:31.769563 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:30:31.769579 | orchestrator | 2026-03-25 05:30:31.769591 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-03-25 05:30:31.769601 | orchestrator | Wednesday 25 March 2026 05:29:50 +0000 (0:00:00.770) 0:22:07.151 ******* 2026-03-25 05:30:31.769611 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:30:31.769621 | orchestrator | 2026-03-25 05:30:31.769630 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-03-25 05:30:31.769640 | orchestrator | Wednesday 25 March 2026 05:29:51 +0000 (0:00:00.898) 0:22:08.050 ******* 2026-03-25 05:30:31.769650 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:30:31.769659 | orchestrator | 2026-03-25 05:30:31.769669 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-03-25 05:30:31.769702 | orchestrator | Wednesday 25 March 2026 05:29:51 +0000 (0:00:00.828) 0:22:08.879 ******* 2026-03-25 
05:30:31.769712 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:30:31.769722 | orchestrator | 2026-03-25 05:30:31.769731 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-03-25 05:30:31.769742 | orchestrator | Wednesday 25 March 2026 05:29:52 +0000 (0:00:00.899) 0:22:09.778 ******* 2026-03-25 05:30:31.769751 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:30:31.769761 | orchestrator | 2026-03-25 05:30:31.769771 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-03-25 05:30:31.769780 | orchestrator | Wednesday 25 March 2026 05:29:53 +0000 (0:00:00.821) 0:22:10.599 ******* 2026-03-25 05:30:31.769790 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:30:31.769799 | orchestrator | 2026-03-25 05:30:31.769809 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-25 05:30:31.769820 | orchestrator | Wednesday 25 March 2026 05:29:54 +0000 (0:00:00.787) 0:22:11.387 ******* 2026-03-25 05:30:31.769830 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:30:31.769839 | orchestrator | 2026-03-25 05:30:31.769849 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-03-25 05:30:31.769858 | orchestrator | Wednesday 25 March 2026 05:29:55 +0000 (0:00:00.782) 0:22:12.169 ******* 2026-03-25 05:30:31.769868 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:30:31.769877 | orchestrator | 2026-03-25 05:30:31.769900 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-03-25 05:30:31.769910 | orchestrator | Wednesday 25 March 2026 05:29:55 +0000 (0:00:00.816) 0:22:12.985 ******* 2026-03-25 05:30:31.769919 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:30:31.769929 | orchestrator | 2026-03-25 05:30:31.769938 | orchestrator | TASK 
[ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-25 05:30:31.769948 | orchestrator | Wednesday 25 March 2026 05:29:56 +0000 (0:00:00.764) 0:22:13.750 ******* 2026-03-25 05:30:31.769957 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:30:31.769967 | orchestrator | 2026-03-25 05:30:31.769976 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-25 05:30:31.769985 | orchestrator | Wednesday 25 March 2026 05:29:57 +0000 (0:00:00.788) 0:22:14.539 ******* 2026-03-25 05:30:31.769995 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2026-03-25 05:30:31.770005 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2026-03-25 05:30:31.770014 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2026-03-25 05:30:31.770081 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:30:31.770091 | orchestrator | 2026-03-25 05:30:31.770100 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-25 05:30:31.770110 | orchestrator | Wednesday 25 March 2026 05:29:59 +0000 (0:00:01.509) 0:22:16.048 ******* 2026-03-25 05:30:31.770120 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2026-03-25 05:30:31.770129 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2026-03-25 05:30:31.770139 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2026-03-25 05:30:31.770148 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:30:31.770158 | orchestrator | 2026-03-25 05:30:31.770167 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-25 05:30:31.770177 | orchestrator | Wednesday 25 March 2026 05:30:00 +0000 (0:00:01.504) 0:22:17.553 ******* 2026-03-25 05:30:31.770186 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2026-03-25 05:30:31.770196 | orchestrator | skipping: 
[testbed-node-2] => (item=testbed-node-4)  2026-03-25 05:30:31.770205 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2026-03-25 05:30:31.770214 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:30:31.770224 | orchestrator | 2026-03-25 05:30:31.770233 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-25 05:30:31.770251 | orchestrator | Wednesday 25 March 2026 05:30:01 +0000 (0:00:01.060) 0:22:18.614 ******* 2026-03-25 05:30:31.770260 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:30:31.770270 | orchestrator | 2026-03-25 05:30:31.770279 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-25 05:30:31.770289 | orchestrator | Wednesday 25 March 2026 05:30:02 +0000 (0:00:00.815) 0:22:19.429 ******* 2026-03-25 05:30:31.770300 | orchestrator | skipping: [testbed-node-2] => (item=0)  2026-03-25 05:30:31.770309 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:30:31.770319 | orchestrator | 2026-03-25 05:30:31.770328 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-03-25 05:30:31.770338 | orchestrator | Wednesday 25 March 2026 05:30:03 +0000 (0:00:00.914) 0:22:20.343 ******* 2026-03-25 05:30:31.770347 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:30:31.770357 | orchestrator | 2026-03-25 05:30:31.770386 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2026-03-25 05:30:31.770396 | orchestrator | Wednesday 25 March 2026 05:30:04 +0000 (0:00:00.856) 0:22:21.200 ******* 2026-03-25 05:30:31.770406 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-03-25 05:30:31.770431 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-03-25 05:30:31.770441 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-03-25 05:30:31.770451 | orchestrator | skipping: 
[testbed-node-2] 2026-03-25 05:30:31.770461 | orchestrator | 2026-03-25 05:30:31.770470 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2026-03-25 05:30:31.770479 | orchestrator | Wednesday 25 March 2026 05:30:05 +0000 (0:00:01.124) 0:22:22.325 ******* 2026-03-25 05:30:31.770489 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:30:31.770498 | orchestrator | 2026-03-25 05:30:31.770508 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2026-03-25 05:30:31.770517 | orchestrator | Wednesday 25 March 2026 05:30:06 +0000 (0:00:00.792) 0:22:23.118 ******* 2026-03-25 05:30:31.770527 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:30:31.770536 | orchestrator | 2026-03-25 05:30:31.770545 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] **************************************** 2026-03-25 05:30:31.770555 | orchestrator | Wednesday 25 March 2026 05:30:06 +0000 (0:00:00.760) 0:22:23.878 ******* 2026-03-25 05:30:31.770564 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:30:31.770574 | orchestrator | 2026-03-25 05:30:31.770583 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] ************************************** 2026-03-25 05:30:31.770592 | orchestrator | Wednesday 25 March 2026 05:30:07 +0000 (0:00:00.754) 0:22:24.633 ******* 2026-03-25 05:30:31.770602 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:30:31.770611 | orchestrator | 2026-03-25 05:30:31.770620 | orchestrator | PLAY [Upgrade ceph mgr nodes] ************************************************** 2026-03-25 05:30:31.770630 | orchestrator | 2026-03-25 05:30:31.770639 | orchestrator | TASK [Stop ceph mgr] *********************************************************** 2026-03-25 05:30:31.770649 | orchestrator | Wednesday 25 March 2026 05:30:08 +0000 (0:00:01.373) 0:22:26.007 ******* 2026-03-25 05:30:31.770658 | orchestrator | changed: [testbed-node-0] 2026-03-25 05:30:31.770668 | 
orchestrator | 2026-03-25 05:30:31.770678 | orchestrator | TASK [Mask ceph mgr systemd unit] ********************************************** 2026-03-25 05:30:31.770687 | orchestrator | Wednesday 25 March 2026 05:30:11 +0000 (0:00:02.755) 0:22:28.762 ******* 2026-03-25 05:30:31.770697 | orchestrator | changed: [testbed-node-0] 2026-03-25 05:30:31.770706 | orchestrator | 2026-03-25 05:30:31.770716 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-03-25 05:30:31.770725 | orchestrator | Wednesday 25 March 2026 05:30:14 +0000 (0:00:02.350) 0:22:31.113 ******* 2026-03-25 05:30:31.770740 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0 2026-03-25 05:30:31.770750 | orchestrator | 2026-03-25 05:30:31.770759 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-03-25 05:30:31.770769 | orchestrator | Wednesday 25 March 2026 05:30:15 +0000 (0:00:01.212) 0:22:32.325 ******* 2026-03-25 05:30:31.770785 | orchestrator | ok: [testbed-node-0] 2026-03-25 05:30:31.770794 | orchestrator | 2026-03-25 05:30:31.770804 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-03-25 05:30:31.770813 | orchestrator | Wednesday 25 March 2026 05:30:16 +0000 (0:00:01.473) 0:22:33.799 ******* 2026-03-25 05:30:31.770823 | orchestrator | ok: [testbed-node-0] 2026-03-25 05:30:31.770832 | orchestrator | 2026-03-25 05:30:31.770842 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-03-25 05:30:31.770852 | orchestrator | Wednesday 25 March 2026 05:30:17 +0000 (0:00:01.204) 0:22:35.004 ******* 2026-03-25 05:30:31.770861 | orchestrator | ok: [testbed-node-0] 2026-03-25 05:30:31.770871 | orchestrator | 2026-03-25 05:30:31.770880 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-03-25 05:30:31.770890 | orchestrator | 
Wednesday 25 March 2026 05:30:19 +0000 (0:00:01.627) 0:22:36.632 ******* 2026-03-25 05:30:31.770900 | orchestrator | ok: [testbed-node-0] 2026-03-25 05:30:31.770909 | orchestrator | 2026-03-25 05:30:31.770919 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-03-25 05:30:31.770928 | orchestrator | Wednesday 25 March 2026 05:30:20 +0000 (0:00:01.165) 0:22:37.797 ******* 2026-03-25 05:30:31.770938 | orchestrator | ok: [testbed-node-0] 2026-03-25 05:30:31.770947 | orchestrator | 2026-03-25 05:30:31.770957 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-03-25 05:30:31.770966 | orchestrator | Wednesday 25 March 2026 05:30:21 +0000 (0:00:01.152) 0:22:38.949 ******* 2026-03-25 05:30:31.770976 | orchestrator | ok: [testbed-node-0] 2026-03-25 05:30:31.770985 | orchestrator | 2026-03-25 05:30:31.770995 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-03-25 05:30:31.771006 | orchestrator | Wednesday 25 March 2026 05:30:23 +0000 (0:00:01.194) 0:22:40.144 ******* 2026-03-25 05:30:31.771015 | orchestrator | skipping: [testbed-node-0] 2026-03-25 05:30:31.771025 | orchestrator | 2026-03-25 05:30:31.771034 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-03-25 05:30:31.771044 | orchestrator | Wednesday 25 March 2026 05:30:24 +0000 (0:00:01.151) 0:22:41.295 ******* 2026-03-25 05:30:31.771053 | orchestrator | ok: [testbed-node-0] 2026-03-25 05:30:31.771063 | orchestrator | 2026-03-25 05:30:31.771072 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-03-25 05:30:31.771082 | orchestrator | Wednesday 25 March 2026 05:30:25 +0000 (0:00:01.193) 0:22:42.489 ******* 2026-03-25 05:30:31.771091 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-25 05:30:31.771101 | orchestrator | ok: [testbed-node-0 -> 
testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-25 05:30:31.771110 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-25 05:30:31.771120 | orchestrator | 2026-03-25 05:30:31.771129 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-03-25 05:30:31.771139 | orchestrator | Wednesday 25 March 2026 05:30:27 +0000 (0:00:02.099) 0:22:44.589 ******* 2026-03-25 05:30:31.771148 | orchestrator | ok: [testbed-node-0] 2026-03-25 05:30:31.771158 | orchestrator | 2026-03-25 05:30:31.771168 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-03-25 05:30:31.771177 | orchestrator | Wednesday 25 March 2026 05:30:28 +0000 (0:00:01.343) 0:22:45.932 ******* 2026-03-25 05:30:31.771187 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-25 05:30:31.771202 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-25 05:30:55.119745 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-25 05:30:55.119827 | orchestrator | 2026-03-25 05:30:55.119835 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-03-25 05:30:55.119842 | orchestrator | Wednesday 25 March 2026 05:30:31 +0000 (0:00:02.840) 0:22:48.772 ******* 2026-03-25 05:30:55.119847 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-25 05:30:55.119866 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-25 05:30:55.119871 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-25 05:30:55.119876 | orchestrator | skipping: [testbed-node-0] 2026-03-25 05:30:55.119881 | orchestrator | 2026-03-25 05:30:55.119885 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-03-25 05:30:55.119890 | 
orchestrator | Wednesday 25 March 2026 05:30:33 +0000 (0:00:01.456) 0:22:50.229 ******* 2026-03-25 05:30:55.119896 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-03-25 05:30:55.119904 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-03-25 05:30:55.119909 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-03-25 05:30:55.119913 | orchestrator | skipping: [testbed-node-0] 2026-03-25 05:30:55.119918 | orchestrator | 2026-03-25 05:30:55.119923 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-03-25 05:30:55.119931 | orchestrator | Wednesday 25 March 2026 05:30:34 +0000 (0:00:01.650) 0:22:51.879 ******* 2026-03-25 05:30:55.119938 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-25 05:30:55.119945 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 
'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-25 05:30:55.119949 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-25 05:30:55.119954 | orchestrator | skipping: [testbed-node-0] 2026-03-25 05:30:55.119959 | orchestrator | 2026-03-25 05:30:55.119963 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-03-25 05:30:55.119968 | orchestrator | Wednesday 25 March 2026 05:30:36 +0000 (0:00:01.178) 0:22:53.058 ******* 2026-03-25 05:30:55.119974 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': 'f2f4f0f2e000', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-25 05:30:29.446646', 'end': '2026-03-25 05:30:29.498773', 'delta': '0:00:00.052127', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['f2f4f0f2e000'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-03-25 05:30:55.119996 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '04618a84c691', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 
'name=ceph-mon-testbed-node-1'], 'start': '2026-03-25 05:30:29.980070', 'end': '2026-03-25 05:30:30.025989', 'delta': '0:00:00.045919', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['04618a84c691'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-03-25 05:30:55.120002 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': 'da72f46e99c2', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-25 05:30:30.564196', 'end': '2026-03-25 05:30:30.612151', 'delta': '0:00:00.047955', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['da72f46e99c2'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-03-25 05:30:55.120007 | orchestrator | 2026-03-25 05:30:55.120012 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-03-25 05:30:55.120017 | orchestrator | Wednesday 25 March 2026 05:30:37 +0000 (0:00:01.212) 0:22:54.270 ******* 2026-03-25 05:30:55.120021 | orchestrator | ok: [testbed-node-0] 2026-03-25 05:30:55.120026 | orchestrator | 2026-03-25 05:30:55.120031 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-03-25 05:30:55.120037 | orchestrator | Wednesday 25 March 2026 05:30:38 
+0000 (0:00:01.263) 0:22:55.534 ******* 2026-03-25 05:30:55.120042 | orchestrator | skipping: [testbed-node-0] 2026-03-25 05:30:55.120047 | orchestrator | 2026-03-25 05:30:55.120051 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-03-25 05:30:55.120056 | orchestrator | Wednesday 25 March 2026 05:30:39 +0000 (0:00:01.322) 0:22:56.856 ******* 2026-03-25 05:30:55.120060 | orchestrator | ok: [testbed-node-0] 2026-03-25 05:30:55.120065 | orchestrator | 2026-03-25 05:30:55.120070 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-03-25 05:30:55.120074 | orchestrator | Wednesday 25 March 2026 05:30:41 +0000 (0:00:01.181) 0:22:58.038 ******* 2026-03-25 05:30:55.120079 | orchestrator | ok: [testbed-node-0] 2026-03-25 05:30:55.120083 | orchestrator | 2026-03-25 05:30:55.120088 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-25 05:30:55.120092 | orchestrator | Wednesday 25 March 2026 05:30:43 +0000 (0:00:02.056) 0:23:00.095 ******* 2026-03-25 05:30:55.120097 | orchestrator | ok: [testbed-node-0] 2026-03-25 05:30:55.120101 | orchestrator | 2026-03-25 05:30:55.120106 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-03-25 05:30:55.120111 | orchestrator | Wednesday 25 March 2026 05:30:44 +0000 (0:00:01.150) 0:23:01.245 ******* 2026-03-25 05:30:55.120115 | orchestrator | skipping: [testbed-node-0] 2026-03-25 05:30:55.120120 | orchestrator | 2026-03-25 05:30:55.120124 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-03-25 05:30:55.120129 | orchestrator | Wednesday 25 March 2026 05:30:45 +0000 (0:00:01.111) 0:23:02.356 ******* 2026-03-25 05:30:55.120133 | orchestrator | skipping: [testbed-node-0] 2026-03-25 05:30:55.120138 | orchestrator | 2026-03-25 05:30:55.120142 | orchestrator | TASK [ceph-facts : Set_fact fsid] 
**********************************************
2026-03-25 05:30:55.120147 | orchestrator | Wednesday 25 March 2026 05:30:47 +0000 (0:00:01.695) 0:23:04.052 *******
2026-03-25 05:30:55.120152 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:30:55.120160 | orchestrator |
2026-03-25 05:30:55.120164 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-03-25 05:30:55.120169 | orchestrator | Wednesday 25 March 2026 05:30:48 +0000 (0:00:01.164) 0:23:05.217 *******
2026-03-25 05:30:55.120173 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:30:55.120178 | orchestrator |
2026-03-25 05:30:55.120182 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-03-25 05:30:55.120187 | orchestrator | Wednesday 25 March 2026 05:30:49 +0000 (0:00:01.198) 0:23:06.416 *******
2026-03-25 05:30:55.120191 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:30:55.120196 | orchestrator |
2026-03-25 05:30:55.120200 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-03-25 05:30:55.120205 | orchestrator | Wednesday 25 March 2026 05:30:50 +0000 (0:00:01.168) 0:23:07.584 *******
2026-03-25 05:30:55.120209 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:30:55.120214 | orchestrator |
2026-03-25 05:30:55.120218 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-03-25 05:30:55.120223 | orchestrator | Wednesday 25 March 2026 05:30:51 +0000 (0:00:01.143) 0:23:08.727 *******
2026-03-25 05:30:55.120227 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:30:55.120232 | orchestrator |
2026-03-25 05:30:55.120236 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-03-25 05:30:55.120241 | orchestrator | Wednesday 25 March 2026 05:30:52 +0000 (0:00:01.147) 0:23:09.874 *******
2026-03-25 05:30:55.120245 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:30:55.120250 | orchestrator |
2026-03-25 05:30:55.120254 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-03-25 05:30:55.120259 | orchestrator | Wednesday 25 March 2026 05:30:53 +0000 (0:00:01.126) 0:23:11.001 *******
2026-03-25 05:30:55.120263 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:30:55.120268 | orchestrator |
2026-03-25 05:30:55.120275 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-03-25 05:30:57.606454 | orchestrator | Wednesday 25 March 2026 05:30:55 +0000 (0:00:01.120) 0:23:12.122 *******
2026-03-25 05:30:57.606580 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-25 05:30:57.606610 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-25 05:30:57.606629 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-25 05:30:57.606671 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-25-01-43-00-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})
2026-03-25 05:30:57.606710 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-25 05:30:57.606723 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-25 05:30:57.606734 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-25 05:30:57.606770 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_225bc811-b117-4ab1-9890-e393d3b780be', 'scsi-SQEMU_QEMU_HARDDISK_225bc811-b117-4ab1-9890-e393d3b780be'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '225bc811', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_225bc811-b117-4ab1-9890-e393d3b780be-part16', 'scsi-SQEMU_QEMU_HARDDISK_225bc811-b117-4ab1-9890-e393d3b780be-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_225bc811-b117-4ab1-9890-e393d3b780be-part14', 'scsi-SQEMU_QEMU_HARDDISK_225bc811-b117-4ab1-9890-e393d3b780be-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_225bc811-b117-4ab1-9890-e393d3b780be-part15', 'scsi-SQEMU_QEMU_HARDDISK_225bc811-b117-4ab1-9890-e393d3b780be-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_225bc811-b117-4ab1-9890-e393d3b780be-part1', 'scsi-SQEMU_QEMU_HARDDISK_225bc811-b117-4ab1-9890-e393d3b780be-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})
2026-03-25 05:30:57.606790 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-25 05:30:57.606802 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-25 05:30:57.606821 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:30:57.606834 | orchestrator |
2026-03-25 05:30:57.606846 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] ***
2026-03-25 05:30:57.606858 | orchestrator | Wednesday 25 March 2026 05:30:56 +0000 (0:00:01.280) 0:23:13.402 *******
2026-03-25 05:30:57.606871 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-25 05:30:57.606884 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-25 05:30:57.606904 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-25 05:31:08.410902 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-25-01-43-00-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-25 05:31:08.411023 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-25 05:31:08.411076 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-25 05:31:08.411091 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-25 05:31:08.411125 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_225bc811-b117-4ab1-9890-e393d3b780be', 'scsi-SQEMU_QEMU_HARDDISK_225bc811-b117-4ab1-9890-e393d3b780be'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '225bc811', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_225bc811-b117-4ab1-9890-e393d3b780be-part16', 'scsi-SQEMU_QEMU_HARDDISK_225bc811-b117-4ab1-9890-e393d3b780be-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_225bc811-b117-4ab1-9890-e393d3b780be-part14', 'scsi-SQEMU_QEMU_HARDDISK_225bc811-b117-4ab1-9890-e393d3b780be-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_225bc811-b117-4ab1-9890-e393d3b780be-part15', 'scsi-SQEMU_QEMU_HARDDISK_225bc811-b117-4ab1-9890-e393d3b780be-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_225bc811-b117-4ab1-9890-e393d3b780be-part1', 'scsi-SQEMU_QEMU_HARDDISK_225bc811-b117-4ab1-9890-e393d3b780be-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-25 05:31:08.411141 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-25 05:31:08.411175 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-25 05:31:08.411197 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:31:08.411220 | orchestrator |
2026-03-25 05:31:08.411239 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-03-25 05:31:08.411253 | orchestrator | Wednesday 25 March 2026 05:30:57 +0000 (0:00:01.206) 0:23:14.609 *******
2026-03-25 05:31:08.411264 | orchestrator | ok: [testbed-node-0]
2026-03-25 05:31:08.411275 | orchestrator |
2026-03-25 05:31:08.411286 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-03-25 05:31:08.411297 | orchestrator | Wednesday 25 March 2026 05:30:59 +0000 (0:00:01.578) 0:23:16.188 *******
2026-03-25 05:31:08.411307 | orchestrator | ok: [testbed-node-0]
2026-03-25 05:31:08.411318 | orchestrator |
2026-03-25 05:31:08.411329 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-03-25 05:31:08.411340 | orchestrator | Wednesday 25 March 2026 05:31:00 +0000 (0:00:01.185) 0:23:17.374 *******
2026-03-25 05:31:08.411351 | orchestrator | ok: [testbed-node-0]
2026-03-25 05:31:08.411361 | orchestrator |
2026-03-25 05:31:08.411372 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-03-25 05:31:08.411383 | orchestrator | Wednesday 25 March 2026 05:31:01 +0000 (0:00:01.494) 0:23:18.868 *******
2026-03-25 05:31:08.411393 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:31:08.411440 | orchestrator |
2026-03-25 05:31:08.411454 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-03-25 05:31:08.411466 | orchestrator | Wednesday 25 March 2026 05:31:03 +0000 (0:00:01.187) 0:23:20.055 *******
2026-03-25 05:31:08.411478 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:31:08.411491 | orchestrator |
2026-03-25 05:31:08.411503 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-03-25 05:31:08.411516 | orchestrator | Wednesday 25 March 2026 05:31:04 +0000 (0:00:01.272) 0:23:21.328 *******
2026-03-25 05:31:08.411528 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:31:08.411541 | orchestrator |
2026-03-25 05:31:08.411553 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-03-25 05:31:08.411566 | orchestrator | Wednesday 25 March 2026 05:31:05 +0000 (0:00:01.198) 0:23:22.527 *******
2026-03-25 05:31:08.411579 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-03-25 05:31:08.411592 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-03-25 05:31:08.411604 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-03-25 05:31:08.411616 | orchestrator |
2026-03-25 05:31:08.411629 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-03-25 05:31:08.411642 | orchestrator | Wednesday 25 March 2026 05:31:07 +0000 (0:00:01.687) 0:23:24.214 *******
2026-03-25 05:31:08.411655 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-03-25 05:31:08.411668 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-03-25 05:31:08.411681 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-03-25 05:31:08.411693 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:31:08.411705 | orchestrator |
2026-03-25 05:31:08.411727 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-03-25 05:31:53.542430 | orchestrator | Wednesday 25 March 2026 05:31:08 +0000 (0:00:01.199) 0:23:25.414 *******
2026-03-25 05:31:53.542638 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:31:53.542657 | orchestrator |
2026-03-25 05:31:53.542669 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-03-25 05:31:53.542680 | orchestrator | Wednesday 25 March 2026 05:31:09 +0000 (0:00:01.137) 0:23:26.551 *******
2026-03-25 05:31:53.542691 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-03-25 05:31:53.542703 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-25 05:31:53.542714 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-25 05:31:53.542725 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-03-25 05:31:53.542735 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-03-25 05:31:53.542746 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-03-25 05:31:53.542756 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-03-25 05:31:53.542767 | orchestrator |
2026-03-25 05:31:53.542778 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-03-25 05:31:53.542788 | orchestrator | Wednesday 25 March 2026 05:31:11 +0000 (0:00:01.918) 0:23:28.470 *******
2026-03-25 05:31:53.542799 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-03-25 05:31:53.542810 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-25 05:31:53.542820 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-25 05:31:53.542830 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-03-25 05:31:53.542856 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-03-25 05:31:53.542867 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-03-25 05:31:53.542878 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-03-25 05:31:53.542888 | orchestrator |
2026-03-25 05:31:53.542899 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-03-25 05:31:53.542910 | orchestrator | Wednesday 25 March 2026 05:31:14 +0000 (0:00:02.859) 0:23:31.330 *******
2026-03-25 05:31:53.542920 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0
2026-03-25 05:31:53.542931 | orchestrator |
2026-03-25 05:31:53.542941 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-03-25 05:31:53.542952 | orchestrator | Wednesday 25 March 2026 05:31:15 +0000 (0:00:01.172) 0:23:32.502 *******
2026-03-25 05:31:53.542963 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0
2026-03-25 05:31:53.542976 | orchestrator |
2026-03-25 05:31:53.542988 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-03-25 05:31:53.543000 | orchestrator | Wednesday 25 March 2026 05:31:16 +0000 (0:00:01.173) 0:23:33.676 *******
2026-03-25 05:31:53.543012 | orchestrator | ok: [testbed-node-0]
2026-03-25 05:31:53.543024 | orchestrator |
2026-03-25 05:31:53.543036 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-03-25 05:31:53.543048 | orchestrator | Wednesday 25 March 2026 05:31:18 +0000 (0:00:01.642) 0:23:35.318 *******
2026-03-25 05:31:53.543061 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:31:53.543073 | orchestrator |
2026-03-25 05:31:53.543085 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-03-25 05:31:53.543097 | orchestrator | Wednesday 25 March 2026 05:31:19 +0000 (0:00:01.193) 0:23:36.512 *******
2026-03-25 05:31:53.543109 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:31:53.543121 | orchestrator |
2026-03-25 05:31:53.543133 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-03-25 05:31:53.543153 | orchestrator | Wednesday 25 March 2026 05:31:20 +0000 (0:00:01.312) 0:23:37.824 *******
2026-03-25 05:31:53.543167 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:31:53.543178 | orchestrator |
2026-03-25 05:31:53.543190 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-03-25 05:31:53.543202 | orchestrator | Wednesday 25 March 2026 05:31:21 +0000 (0:00:01.190) 0:23:39.014 *******
2026-03-25 05:31:53.543215 | orchestrator | ok: [testbed-node-0]
2026-03-25 05:31:53.543227 | orchestrator |
2026-03-25 05:31:53.543239 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-03-25 05:31:53.543251 | orchestrator | Wednesday 25 March 2026 05:31:23 +0000 (0:00:01.512) 0:23:40.527 *******
2026-03-25 05:31:53.543263 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:31:53.543275 | orchestrator |
2026-03-25 05:31:53.543286 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-03-25 05:31:53.543298 | orchestrator | Wednesday 25 March 2026 05:31:24 +0000 (0:00:01.201) 0:23:41.728 *******
2026-03-25 05:31:53.543312 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:31:53.543325 | orchestrator |
2026-03-25 05:31:53.543335 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-03-25 05:31:53.543346 | orchestrator | Wednesday 25 March 2026 05:31:25 +0000 (0:00:01.151) 0:23:42.880 *******
2026-03-25 05:31:53.543356 | orchestrator | ok: [testbed-node-0]
2026-03-25 05:31:53.543367 | orchestrator |
2026-03-25 05:31:53.543377 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-03-25 05:31:53.543387 | orchestrator | Wednesday 25 March 2026 05:31:27 +0000 (0:00:01.706) 0:23:44.586 *******
2026-03-25 05:31:53.543398 | orchestrator | ok: [testbed-node-0]
2026-03-25 05:31:53.543408 | orchestrator |
2026-03-25 05:31:53.543419 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-03-25 05:31:53.543464 | orchestrator | Wednesday 25 March 2026 05:31:29 +0000 (0:00:01.584) 0:23:46.171 *******
2026-03-25 05:31:53.543477 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:31:53.543488 | orchestrator |
2026-03-25 05:31:53.543498 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-03-25 05:31:53.543509 | orchestrator | Wednesday 25 March 2026 05:31:30 +0000 (0:00:01.107) 0:23:47.279 *******
2026-03-25 05:31:53.543519 | orchestrator | ok: [testbed-node-0]
2026-03-25 05:31:53.543530 | orchestrator |
2026-03-25 05:31:53.543541 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-03-25 05:31:53.543551 | orchestrator | Wednesday 25 March 2026 05:31:31 +0000 (0:00:01.307) 0:23:48.587 *******
2026-03-25 05:31:53.543563 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:31:53.543573 | orchestrator |
2026-03-25 05:31:53.543584 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-03-25 05:31:53.543595 | orchestrator | Wednesday 25 March 2026 05:31:32 +0000 (0:00:01.173) 0:23:49.760 *******
2026-03-25 05:31:53.543605 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:31:53.543616 | orchestrator |
2026-03-25 05:31:53.543626 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-03-25 05:31:53.543637 | orchestrator | Wednesday 25 March 2026 05:31:33 +0000 (0:00:01.159) 0:23:50.920 *******
2026-03-25 05:31:53.543647 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:31:53.543658 | orchestrator |
2026-03-25 05:31:53.543669 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-03-25 05:31:53.543679 | orchestrator | Wednesday 25 March 2026 05:31:35 +0000 (0:00:01.223) 0:23:52.144 *******
2026-03-25 05:31:53.543689 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:31:53.543700 | orchestrator |
2026-03-25 05:31:53.543711 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-03-25 05:31:53.543721 | orchestrator | Wednesday 25 March 2026 05:31:36 +0000 (0:00:01.159) 0:23:53.303 *******
2026-03-25 05:31:53.543732 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:31:53.543742 | orchestrator |
2026-03-25 05:31:53.543753 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-03-25 05:31:53.543771 | orchestrator | Wednesday 25 March 2026 05:31:37 +0000 (0:00:01.171) 0:23:54.475 *******
2026-03-25 05:31:53.543782 | orchestrator | ok: [testbed-node-0]
2026-03-25 05:31:53.543792 | orchestrator |
2026-03-25 05:31:53.543809 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-03-25 05:31:53.543820 | orchestrator | Wednesday 25 March 2026 05:31:38 +0000 (0:00:01.135) 0:23:55.611 *******
2026-03-25 05:31:53.543830 | orchestrator | ok: [testbed-node-0]
2026-03-25 05:31:53.543841 | orchestrator |
2026-03-25 05:31:53.543851 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-03-25 05:31:53.543862 | orchestrator | Wednesday 25 March 2026 05:31:39 +0000 (0:00:01.178) 0:23:56.790 *******
2026-03-25 05:31:53.543872 | orchestrator | ok: [testbed-node-0]
2026-03-25 05:31:53.543883 | orchestrator |
2026-03-25 05:31:53.543893 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-03-25 05:31:53.543904 | orchestrator | Wednesday 25 March 2026 05:31:40 +0000 (0:00:01.188) 0:23:57.978 *******
2026-03-25 05:31:53.543915 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:31:53.543925 | orchestrator |
2026-03-25 05:31:53.543936 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-03-25 05:31:53.543946 | orchestrator | Wednesday 25 March 2026 05:31:42 +0000 (0:00:01.157) 0:23:59.135 *******
2026-03-25 05:31:53.543957 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:31:53.543967 | orchestrator |
2026-03-25 05:31:53.543978 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-03-25 05:31:53.543989 | orchestrator | Wednesday 25 March 2026 05:31:43 +0000 (0:00:01.113) 0:24:00.249 *******
2026-03-25 05:31:53.543999 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:31:53.544010 | orchestrator |
2026-03-25 05:31:53.544020 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-03-25 05:31:53.544031 | orchestrator | Wednesday 25 March 2026 05:31:44 +0000 (0:00:01.129) 0:24:01.378 *******
2026-03-25 05:31:53.544041 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:31:53.544052 | orchestrator |
2026-03-25 05:31:53.544062 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-03-25 05:31:53.544073 | orchestrator | Wednesday 25 March 2026 05:31:45 +0000 (0:00:01.115) 0:24:02.494 *******
2026-03-25 05:31:53.544084 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:31:53.544094 | orchestrator |
2026-03-25 05:31:53.544105 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-03-25 05:31:53.544115 | orchestrator | Wednesday 25 March 2026 05:31:46 +0000 (0:00:01.144) 0:24:03.638 *******
2026-03-25 05:31:53.544126 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:31:53.544137 | orchestrator |
2026-03-25 05:31:53.544147 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-03-25 05:31:53.544158 | orchestrator | Wednesday 25 March 2026 05:31:47 +0000 (0:00:01.148) 0:24:04.787 *******
2026-03-25 05:31:53.544168 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:31:53.544179 | orchestrator |
2026-03-25 05:31:53.544189 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-03-25 05:31:53.544199 | orchestrator | Wednesday 25 March 2026 05:31:48 +0000 (0:00:01.122) 0:24:05.910 *******
2026-03-25 05:31:53.544210 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:31:53.544221 | orchestrator |
2026-03-25 05:31:53.544231 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-03-25 05:31:53.544242 | orchestrator | Wednesday 25 March 2026 05:31:50 +0000 (0:00:01.184) 0:24:07.094 *******
2026-03-25 05:31:53.544253 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:31:53.544263 | orchestrator |
2026-03-25 05:31:53.544273 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-03-25 05:31:53.544284 | orchestrator | Wednesday 25 March 2026 05:31:51 +0000 (0:00:01.177) 0:24:08.271 *******
2026-03-25 05:31:53.544294 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:31:53.544305 | orchestrator |
2026-03-25 05:31:53.544316 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-03-25 05:31:53.544333 | orchestrator | Wednesday 25 March 2026 05:31:52 +0000 (0:00:01.138) 0:24:09.410 *******
2026-03-25 05:31:53.544344 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:31:53.544354 | orchestrator |
2026-03-25 05:31:53.544371 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-03-25 05:32:43.184031 | orchestrator | Wednesday 25 March 2026 05:31:53 +0000 (0:00:01.132) 0:24:10.542 *******
2026-03-25 05:32:43.184150 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:32:43.184167 | orchestrator |
2026-03-25 05:32:43.184180 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-03-25 05:32:43.184191 | orchestrator | Wednesday 25 March 2026 05:31:54 +0000 (0:00:02.018) 0:24:11.722 *******
2026-03-25 05:32:43.184202 | orchestrator | ok: [testbed-node-0]
2026-03-25 05:32:43.184214 | orchestrator |
2026-03-25 05:32:43.184225 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-03-25 05:32:43.184236 | orchestrator | Wednesday 25 March 2026 05:31:56 +0000 (0:00:02.488) 0:24:13.741 *******
2026-03-25 05:32:43.184247 | orchestrator | ok: [testbed-node-0]
2026-03-25 05:32:43.184258 | orchestrator |
2026-03-25 05:32:43.184269 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-03-25 05:32:43.184280 | orchestrator | Wednesday 25 March 2026 05:31:59 +0000 (0:00:01.158) 0:24:16.229 *******
2026-03-25 05:32:43.184291 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-0
2026-03-25 05:32:43.184302 | orchestrator |
2026-03-25 05:32:43.184313 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-03-25 05:32:43.184324 | orchestrator | Wednesday 25 March 2026 05:32:00 +0000 (0:00:01.158) 0:24:17.388 *******
2026-03-25 05:32:43.184334 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:32:43.184345 | orchestrator |
2026-03-25 05:32:43.184356 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-03-25 05:32:43.184366 | orchestrator | Wednesday 25 March 2026 05:32:01 +0000 (0:00:01.164) 0:24:18.552 *******
2026-03-25 05:32:43.184377 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:32:43.184388 | orchestrator |
2026-03-25 05:32:43.184398 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-03-25 05:32:43.184409 | orchestrator | Wednesday 25 March 2026 05:32:02 +0000 (0:00:01.151) 0:24:19.704 *******
2026-03-25 05:32:43.184420 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-25 05:32:43.184431 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-25 05:32:43.184442 | orchestrator |
2026-03-25 05:32:43.184453 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-03-25 05:32:43.184463 | orchestrator | Wednesday 25 March 2026 05:32:04 +0000 (0:00:02.086) 0:24:21.790 *******
2026-03-25 05:32:43.184474 | orchestrator | ok: [testbed-node-0]
2026-03-25 05:32:43.184513 | orchestrator |
2026-03-25 05:32:43.184525 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-03-25 05:32:43.184536 | orchestrator | Wednesday 25 March 2026 05:32:06 +0000 (0:00:01.553) 0:24:23.344 *******
2026-03-25 05:32:43.184546 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:32:43.184557 | orchestrator |
2026-03-25 05:32:43.184568 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-03-25 05:32:43.184581 | orchestrator | Wednesday 25 March 2026 05:32:07 +0000 (0:00:01.128) 0:24:24.472 *******
2026-03-25 05:32:43.184593 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:32:43.184606 | orchestrator |
2026-03-25 05:32:43.184617 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-03-25 05:32:43.184631 | orchestrator | Wednesday 25 March 2026 05:32:08 +0000 (0:00:01.148) 0:24:25.621 *******
2026-03-25 05:32:43.184643 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:32:43.184655 | orchestrator |
2026-03-25 05:32:43.184666 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-03-25 05:32:43.184679 | orchestrator | Wednesday 25 March 2026 05:32:09 +0000 (0:00:01.154) 0:24:26.776 *******
2026-03-25 05:32:43.184718 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-0
2026-03-25 05:32:43.184730 | orchestrator |
2026-03-25 05:32:43.184743 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-03-25 05:32:43.184756 | orchestrator | Wednesday 25 March 2026 05:32:10 +0000 (0:00:01.182) 0:24:27.958 *******
2026-03-25 05:32:43.184768 | orchestrator | ok: [testbed-node-0]
2026-03-25 05:32:43.184780 | orchestrator |
2026-03-25 05:32:43.184793 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-03-25 05:32:43.184806 | orchestrator | Wednesday 25 March 2026 05:32:12 +0000 (0:00:01.718) 0:24:29.677 *******
2026-03-25 05:32:43.184817 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-03-25 05:32:43.184829 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)
2026-03-25 05:32:43.184840 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)
2026-03-25 05:32:43.184852 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:32:43.184864 | orchestrator |
2026-03-25 05:32:43.184876 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-03-25 05:32:43.184888 | orchestrator | Wednesday 25 March 2026 05:32:13 +0000 (0:00:01.127) 0:24:30.805 *******
2026-03-25 05:32:43.184901 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:32:43.184913 | orchestrator |
2026-03-25 05:32:43.184925 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-03-25 05:32:43.184938 | orchestrator | Wednesday 25 March 2026 05:32:14 +0000 (0:00:01.192) 0:24:31.998 *******
2026-03-25 05:32:43.184950 | orchestrator | skipping: [testbed-node-0]
2026-03-25 05:32:43.184962 | orchestrator |
2026-03-25 05:32:43.184974 | orchestrator | TASK [ceph-container-common : Copy ceph dev image
file] ************************ 2026-03-25 05:32:43.184985 | orchestrator | Wednesday 25 March 2026 05:32:16 +0000 (0:00:01.197) 0:24:33.195 ******* 2026-03-25 05:32:43.184995 | orchestrator | skipping: [testbed-node-0] 2026-03-25 05:32:43.185006 | orchestrator | 2026-03-25 05:32:43.185016 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-03-25 05:32:43.185027 | orchestrator | Wednesday 25 March 2026 05:32:17 +0000 (0:00:01.142) 0:24:34.338 ******* 2026-03-25 05:32:43.185037 | orchestrator | skipping: [testbed-node-0] 2026-03-25 05:32:43.185048 | orchestrator | 2026-03-25 05:32:43.185074 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-03-25 05:32:43.185086 | orchestrator | Wednesday 25 March 2026 05:32:18 +0000 (0:00:01.132) 0:24:35.470 ******* 2026-03-25 05:32:43.185097 | orchestrator | skipping: [testbed-node-0] 2026-03-25 05:32:43.185107 | orchestrator | 2026-03-25 05:32:43.185118 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-03-25 05:32:43.185129 | orchestrator | Wednesday 25 March 2026 05:32:19 +0000 (0:00:01.147) 0:24:36.618 ******* 2026-03-25 05:32:43.185139 | orchestrator | ok: [testbed-node-0] 2026-03-25 05:32:43.185150 | orchestrator | 2026-03-25 05:32:43.185160 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-03-25 05:32:43.185171 | orchestrator | Wednesday 25 March 2026 05:32:22 +0000 (0:00:02.616) 0:24:39.235 ******* 2026-03-25 05:32:43.185182 | orchestrator | ok: [testbed-node-0] 2026-03-25 05:32:43.185193 | orchestrator | 2026-03-25 05:32:43.185203 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-03-25 05:32:43.185214 | orchestrator | Wednesday 25 March 2026 05:32:23 +0000 (0:00:01.166) 0:24:40.401 ******* 2026-03-25 05:32:43.185265 | orchestrator | included: 
/ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-0 2026-03-25 05:32:43.185276 | orchestrator | 2026-03-25 05:32:43.185287 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-03-25 05:32:43.185298 | orchestrator | Wednesday 25 March 2026 05:32:24 +0000 (0:00:01.138) 0:24:41.539 ******* 2026-03-25 05:32:43.185308 | orchestrator | skipping: [testbed-node-0] 2026-03-25 05:32:43.185319 | orchestrator | 2026-03-25 05:32:43.185330 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-03-25 05:32:43.185351 | orchestrator | Wednesday 25 March 2026 05:32:25 +0000 (0:00:01.195) 0:24:42.735 ******* 2026-03-25 05:32:43.185362 | orchestrator | skipping: [testbed-node-0] 2026-03-25 05:32:43.185372 | orchestrator | 2026-03-25 05:32:43.185383 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-03-25 05:32:43.185394 | orchestrator | Wednesday 25 March 2026 05:32:26 +0000 (0:00:01.182) 0:24:43.917 ******* 2026-03-25 05:32:43.185404 | orchestrator | skipping: [testbed-node-0] 2026-03-25 05:32:43.185415 | orchestrator | 2026-03-25 05:32:43.185431 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-03-25 05:32:43.185442 | orchestrator | Wednesday 25 March 2026 05:32:28 +0000 (0:00:01.154) 0:24:45.072 ******* 2026-03-25 05:32:43.185452 | orchestrator | skipping: [testbed-node-0] 2026-03-25 05:32:43.185463 | orchestrator | 2026-03-25 05:32:43.185473 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-03-25 05:32:43.185515 | orchestrator | Wednesday 25 March 2026 05:32:29 +0000 (0:00:01.185) 0:24:46.257 ******* 2026-03-25 05:32:43.185526 | orchestrator | skipping: [testbed-node-0] 2026-03-25 05:32:43.185537 | orchestrator | 2026-03-25 05:32:43.185547 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release 
octopus] ******************* 2026-03-25 05:32:43.185558 | orchestrator | Wednesday 25 March 2026 05:32:30 +0000 (0:00:01.145) 0:24:47.403 ******* 2026-03-25 05:32:43.185568 | orchestrator | skipping: [testbed-node-0] 2026-03-25 05:32:43.185579 | orchestrator | 2026-03-25 05:32:43.185589 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-03-25 05:32:43.185600 | orchestrator | Wednesday 25 March 2026 05:32:31 +0000 (0:00:01.131) 0:24:48.535 ******* 2026-03-25 05:32:43.185610 | orchestrator | skipping: [testbed-node-0] 2026-03-25 05:32:43.185621 | orchestrator | 2026-03-25 05:32:43.185632 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-03-25 05:32:43.185642 | orchestrator | Wednesday 25 March 2026 05:32:32 +0000 (0:00:01.147) 0:24:49.682 ******* 2026-03-25 05:32:43.185653 | orchestrator | skipping: [testbed-node-0] 2026-03-25 05:32:43.185663 | orchestrator | 2026-03-25 05:32:43.185674 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-03-25 05:32:43.185684 | orchestrator | Wednesday 25 March 2026 05:32:33 +0000 (0:00:01.154) 0:24:50.836 ******* 2026-03-25 05:32:43.185695 | orchestrator | ok: [testbed-node-0] 2026-03-25 05:32:43.185706 | orchestrator | 2026-03-25 05:32:43.185716 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-03-25 05:32:43.185727 | orchestrator | Wednesday 25 March 2026 05:32:35 +0000 (0:00:01.369) 0:24:52.206 ******* 2026-03-25 05:32:43.185737 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-0 2026-03-25 05:32:43.185748 | orchestrator | 2026-03-25 05:32:43.185759 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-03-25 05:32:43.185769 | orchestrator | Wednesday 25 March 2026 05:32:36 +0000 (0:00:01.136) 0:24:53.343 ******* 2026-03-25 
05:32:43.185780 | orchestrator | ok: [testbed-node-0] => (item=/etc/ceph) 2026-03-25 05:32:43.185791 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/) 2026-03-25 05:32:43.185801 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/mon) 2026-03-25 05:32:43.185812 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/osd) 2026-03-25 05:32:43.185823 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/mds) 2026-03-25 05:32:43.185833 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/tmp) 2026-03-25 05:32:43.185844 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/crash) 2026-03-25 05:32:43.185854 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/radosgw) 2026-03-25 05:32:43.185865 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw) 2026-03-25 05:32:43.185875 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr) 2026-03-25 05:32:43.185886 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds) 2026-03-25 05:32:43.185904 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd) 2026-03-25 05:32:43.185915 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd) 2026-03-25 05:32:43.185926 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-03-25 05:32:43.185936 | orchestrator | ok: [testbed-node-0] => (item=/var/run/ceph) 2026-03-25 05:32:43.185947 | orchestrator | ok: [testbed-node-0] => (item=/var/log/ceph) 2026-03-25 05:32:43.185958 | orchestrator | 2026-03-25 05:32:43.185975 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-03-25 05:33:37.807035 | orchestrator | Wednesday 25 March 2026 05:32:43 +0000 (0:00:06.837) 0:25:00.180 ******* 2026-03-25 05:33:37.807133 | orchestrator | skipping: [testbed-node-0] 2026-03-25 05:33:37.807145 | orchestrator | 2026-03-25 05:33:37.807154 | orchestrator | TASK [ceph-config : 
Reset num_osds] ******************************************** 2026-03-25 05:33:37.807161 | orchestrator | Wednesday 25 March 2026 05:32:44 +0000 (0:00:01.195) 0:25:01.376 ******* 2026-03-25 05:33:37.807169 | orchestrator | skipping: [testbed-node-0] 2026-03-25 05:33:37.807177 | orchestrator | 2026-03-25 05:33:37.807184 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-03-25 05:33:37.807192 | orchestrator | Wednesday 25 March 2026 05:32:45 +0000 (0:00:01.159) 0:25:02.535 ******* 2026-03-25 05:33:37.807199 | orchestrator | skipping: [testbed-node-0] 2026-03-25 05:33:37.807206 | orchestrator | 2026-03-25 05:33:37.807214 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-03-25 05:33:37.807221 | orchestrator | Wednesday 25 March 2026 05:32:46 +0000 (0:00:01.144) 0:25:03.680 ******* 2026-03-25 05:33:37.807228 | orchestrator | skipping: [testbed-node-0] 2026-03-25 05:33:37.807236 | orchestrator | 2026-03-25 05:33:37.807243 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-03-25 05:33:37.807250 | orchestrator | Wednesday 25 March 2026 05:32:47 +0000 (0:00:01.125) 0:25:04.805 ******* 2026-03-25 05:33:37.807257 | orchestrator | skipping: [testbed-node-0] 2026-03-25 05:33:37.807265 | orchestrator | 2026-03-25 05:33:37.807272 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-03-25 05:33:37.807279 | orchestrator | Wednesday 25 March 2026 05:32:48 +0000 (0:00:01.190) 0:25:05.995 ******* 2026-03-25 05:33:37.807286 | orchestrator | skipping: [testbed-node-0] 2026-03-25 05:33:37.807294 | orchestrator | 2026-03-25 05:33:37.807301 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-03-25 05:33:37.807310 | orchestrator | Wednesday 25 March 2026 05:32:50 +0000 (0:00:01.164) 0:25:07.159 ******* 2026-03-25 
05:33:37.807317 | orchestrator | skipping: [testbed-node-0] 2026-03-25 05:33:37.807325 | orchestrator | 2026-03-25 05:33:37.807349 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-03-25 05:33:37.807362 | orchestrator | Wednesday 25 March 2026 05:32:51 +0000 (0:00:01.122) 0:25:08.281 ******* 2026-03-25 05:33:37.807373 | orchestrator | skipping: [testbed-node-0] 2026-03-25 05:33:37.807385 | orchestrator | 2026-03-25 05:33:37.807396 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-03-25 05:33:37.807408 | orchestrator | Wednesday 25 March 2026 05:32:52 +0000 (0:00:01.133) 0:25:09.415 ******* 2026-03-25 05:33:37.807420 | orchestrator | skipping: [testbed-node-0] 2026-03-25 05:33:37.807432 | orchestrator | 2026-03-25 05:33:37.807443 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-03-25 05:33:37.807454 | orchestrator | Wednesday 25 March 2026 05:32:53 +0000 (0:00:01.133) 0:25:10.549 ******* 2026-03-25 05:33:37.807466 | orchestrator | skipping: [testbed-node-0] 2026-03-25 05:33:37.807478 | orchestrator | 2026-03-25 05:33:37.807490 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-03-25 05:33:37.807502 | orchestrator | Wednesday 25 March 2026 05:32:54 +0000 (0:00:01.139) 0:25:11.689 ******* 2026-03-25 05:33:37.807513 | orchestrator | skipping: [testbed-node-0] 2026-03-25 05:33:37.807564 | orchestrator | 2026-03-25 05:33:37.807604 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-03-25 05:33:37.807619 | orchestrator | Wednesday 25 March 2026 05:32:55 +0000 (0:00:01.225) 0:25:12.914 ******* 2026-03-25 05:33:37.807632 | orchestrator | skipping: [testbed-node-0] 2026-03-25 05:33:37.807645 | orchestrator | 2026-03-25 05:33:37.807658 | 
orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-03-25 05:33:37.807669 | orchestrator | Wednesday 25 March 2026 05:32:57 +0000 (0:00:01.138) 0:25:14.053 ******* 2026-03-25 05:33:37.807681 | orchestrator | skipping: [testbed-node-0] 2026-03-25 05:33:37.807693 | orchestrator | 2026-03-25 05:33:37.807706 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-03-25 05:33:37.807718 | orchestrator | Wednesday 25 March 2026 05:32:58 +0000 (0:00:01.244) 0:25:15.297 ******* 2026-03-25 05:33:37.807731 | orchestrator | skipping: [testbed-node-0] 2026-03-25 05:33:37.807742 | orchestrator | 2026-03-25 05:33:37.807754 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-03-25 05:33:37.807766 | orchestrator | Wednesday 25 March 2026 05:32:59 +0000 (0:00:01.162) 0:25:16.459 ******* 2026-03-25 05:33:37.807777 | orchestrator | skipping: [testbed-node-0] 2026-03-25 05:33:37.807789 | orchestrator | 2026-03-25 05:33:37.807802 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-03-25 05:33:37.807813 | orchestrator | Wednesday 25 March 2026 05:33:00 +0000 (0:00:01.228) 0:25:17.688 ******* 2026-03-25 05:33:37.807824 | orchestrator | skipping: [testbed-node-0] 2026-03-25 05:33:37.807837 | orchestrator | 2026-03-25 05:33:37.807848 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-03-25 05:33:37.807859 | orchestrator | Wednesday 25 March 2026 05:33:01 +0000 (0:00:01.139) 0:25:18.827 ******* 2026-03-25 05:33:37.807871 | orchestrator | skipping: [testbed-node-0] 2026-03-25 05:33:37.807883 | orchestrator | 2026-03-25 05:33:37.807895 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-25 05:33:37.807909 | orchestrator | Wednesday 25 March 
2026 05:33:03 +0000 (0:00:01.218) 0:25:20.046 ******* 2026-03-25 05:33:37.807922 | orchestrator | skipping: [testbed-node-0] 2026-03-25 05:33:37.807934 | orchestrator | 2026-03-25 05:33:37.807948 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-03-25 05:33:37.807960 | orchestrator | Wednesday 25 March 2026 05:33:04 +0000 (0:00:01.142) 0:25:21.189 ******* 2026-03-25 05:33:37.807972 | orchestrator | skipping: [testbed-node-0] 2026-03-25 05:33:37.807983 | orchestrator | 2026-03-25 05:33:37.807994 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-03-25 05:33:37.808006 | orchestrator | Wednesday 25 March 2026 05:33:05 +0000 (0:00:01.172) 0:25:22.361 ******* 2026-03-25 05:33:37.808017 | orchestrator | skipping: [testbed-node-0] 2026-03-25 05:33:37.808029 | orchestrator | 2026-03-25 05:33:37.808062 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-25 05:33:37.808076 | orchestrator | Wednesday 25 March 2026 05:33:06 +0000 (0:00:01.127) 0:25:23.489 ******* 2026-03-25 05:33:37.808087 | orchestrator | skipping: [testbed-node-0] 2026-03-25 05:33:37.808098 | orchestrator | 2026-03-25 05:33:37.808110 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-25 05:33:37.808121 | orchestrator | Wednesday 25 March 2026 05:33:07 +0000 (0:00:01.108) 0:25:24.597 ******* 2026-03-25 05:33:37.808132 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2026-03-25 05:33:37.808144 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2026-03-25 05:33:37.808156 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2026-03-25 05:33:37.808167 | orchestrator | skipping: [testbed-node-0] 2026-03-25 05:33:37.808178 | orchestrator | 2026-03-25 05:33:37.808190 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - 
ipv4] ****** 2026-03-25 05:33:37.808201 | orchestrator | Wednesday 25 March 2026 05:33:09 +0000 (0:00:01.796) 0:25:26.394 ******* 2026-03-25 05:33:37.808213 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2026-03-25 05:33:37.808240 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2026-03-25 05:33:37.808251 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2026-03-25 05:33:37.808262 | orchestrator | skipping: [testbed-node-0] 2026-03-25 05:33:37.808273 | orchestrator | 2026-03-25 05:33:37.808284 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-25 05:33:37.808296 | orchestrator | Wednesday 25 March 2026 05:33:11 +0000 (0:00:01.795) 0:25:28.190 ******* 2026-03-25 05:33:37.808308 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2026-03-25 05:33:37.808320 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2026-03-25 05:33:37.808333 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2026-03-25 05:33:37.808345 | orchestrator | skipping: [testbed-node-0] 2026-03-25 05:33:37.808357 | orchestrator | 2026-03-25 05:33:37.808378 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-25 05:33:37.808390 | orchestrator | Wednesday 25 March 2026 05:33:13 +0000 (0:00:01.869) 0:25:30.059 ******* 2026-03-25 05:33:37.808403 | orchestrator | skipping: [testbed-node-0] 2026-03-25 05:33:37.808416 | orchestrator | 2026-03-25 05:33:37.808427 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-25 05:33:37.808439 | orchestrator | Wednesday 25 March 2026 05:33:14 +0000 (0:00:01.164) 0:25:31.224 ******* 2026-03-25 05:33:37.808452 | orchestrator | skipping: [testbed-node-0] => (item=0)  2026-03-25 05:33:37.808467 | orchestrator | skipping: [testbed-node-0] 2026-03-25 05:33:37.808479 | orchestrator | 2026-03-25 
05:33:37.808491 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-03-25 05:33:37.808503 | orchestrator | Wednesday 25 March 2026 05:33:15 +0000 (0:00:01.332) 0:25:32.557 ******* 2026-03-25 05:33:37.808516 | orchestrator | ok: [testbed-node-0] 2026-03-25 05:33:37.808556 | orchestrator | 2026-03-25 05:33:37.808564 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2026-03-25 05:33:37.808572 | orchestrator | Wednesday 25 March 2026 05:33:17 +0000 (0:00:01.781) 0:25:34.339 ******* 2026-03-25 05:33:37.808579 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-25 05:33:37.808586 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-25 05:33:37.808595 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-25 05:33:37.808602 | orchestrator | 2026-03-25 05:33:37.808609 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2026-03-25 05:33:37.808616 | orchestrator | Wednesday 25 March 2026 05:33:19 +0000 (0:00:01.764) 0:25:36.103 ******* 2026-03-25 05:33:37.808623 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0 2026-03-25 05:33:37.808631 | orchestrator | 2026-03-25 05:33:37.808638 | orchestrator | TASK [ceph-mgr : Create mgr directory] ***************************************** 2026-03-25 05:33:37.808645 | orchestrator | Wednesday 25 March 2026 05:33:20 +0000 (0:00:01.482) 0:25:37.585 ******* 2026-03-25 05:33:37.808652 | orchestrator | ok: [testbed-node-0] 2026-03-25 05:33:37.808659 | orchestrator | 2026-03-25 05:33:37.808666 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] *************************************** 2026-03-25 05:33:37.808673 | orchestrator | Wednesday 25 March 2026 05:33:22 +0000 (0:00:01.500) 0:25:39.086 ******* 2026-03-25 05:33:37.808680 | orchestrator | 
skipping: [testbed-node-0] 2026-03-25 05:33:37.808687 | orchestrator | 2026-03-25 05:33:37.808694 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] ********************* 2026-03-25 05:33:37.808702 | orchestrator | Wednesday 25 March 2026 05:33:23 +0000 (0:00:01.128) 0:25:40.214 ******* 2026-03-25 05:33:37.808709 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-03-25 05:33:37.808716 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-03-25 05:33:37.808723 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-03-25 05:33:37.808730 | orchestrator | ok: [testbed-node-0 -> {{ groups[mon_group_name][0] }}] 2026-03-25 05:33:37.808737 | orchestrator | 2026-03-25 05:33:37.808753 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] ******************************************* 2026-03-25 05:33:37.808760 | orchestrator | Wednesday 25 March 2026 05:33:30 +0000 (0:00:07.784) 0:25:47.999 ******* 2026-03-25 05:33:37.808767 | orchestrator | ok: [testbed-node-0] 2026-03-25 05:33:37.808774 | orchestrator | 2026-03-25 05:33:37.808781 | orchestrator | TASK [ceph-mgr : Get keys from monitors] *************************************** 2026-03-25 05:33:37.808788 | orchestrator | Wednesday 25 March 2026 05:33:32 +0000 (0:00:01.202) 0:25:49.202 ******* 2026-03-25 05:33:37.808796 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-03-25 05:33:37.808803 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-25 05:33:37.808810 | orchestrator | 2026-03-25 05:33:37.808817 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] *********************************** 2026-03-25 05:33:37.808824 | orchestrator | Wednesday 25 March 2026 05:33:35 +0000 (0:00:03.634) 0:25:52.836 ******* 2026-03-25 05:33:37.808842 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-03-25 05:34:34.611865 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-03-25 05:34:34.611980 | orchestrator | 2026-03-25 
05:34:34.611995 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] ************************************** 2026-03-25 05:34:34.612009 | orchestrator | Wednesday 25 March 2026 05:33:37 +0000 (0:00:01.972) 0:25:54.809 ******* 2026-03-25 05:34:34.612020 | orchestrator | ok: [testbed-node-0] 2026-03-25 05:34:34.612031 | orchestrator | 2026-03-25 05:34:34.612042 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] ***************** 2026-03-25 05:34:34.612053 | orchestrator | Wednesday 25 March 2026 05:33:39 +0000 (0:00:01.589) 0:25:56.399 ******* 2026-03-25 05:34:34.612065 | orchestrator | skipping: [testbed-node-0] 2026-03-25 05:34:34.612076 | orchestrator | 2026-03-25 05:34:34.612087 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2026-03-25 05:34:34.612097 | orchestrator | Wednesday 25 March 2026 05:33:40 +0000 (0:00:01.200) 0:25:57.599 ******* 2026-03-25 05:34:34.612108 | orchestrator | skipping: [testbed-node-0] 2026-03-25 05:34:34.612118 | orchestrator | 2026-03-25 05:34:34.612129 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] **************************************** 2026-03-25 05:34:34.612140 | orchestrator | Wednesday 25 March 2026 05:33:41 +0000 (0:00:01.121) 0:25:58.721 ******* 2026-03-25 05:34:34.612151 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0 2026-03-25 05:34:34.612162 | orchestrator | 2026-03-25 05:34:34.612173 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] ************* 2026-03-25 05:34:34.612183 | orchestrator | Wednesday 25 March 2026 05:33:43 +0000 (0:00:01.479) 0:26:00.200 ******* 2026-03-25 05:34:34.612194 | orchestrator | skipping: [testbed-node-0] 2026-03-25 05:34:34.612205 | orchestrator | 2026-03-25 05:34:34.612215 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] *********************** 2026-03-25 05:34:34.612226 | orchestrator | Wednesday 25 
March 2026 05:33:44 +0000 (0:00:01.177) 0:26:01.378 ******* 2026-03-25 05:34:34.612236 | orchestrator | skipping: [testbed-node-0] 2026-03-25 05:34:34.612247 | orchestrator | 2026-03-25 05:34:34.612273 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************ 2026-03-25 05:34:34.612284 | orchestrator | Wednesday 25 March 2026 05:33:45 +0000 (0:00:01.221) 0:26:02.600 ******* 2026-03-25 05:34:34.612294 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0 2026-03-25 05:34:34.612305 | orchestrator | 2026-03-25 05:34:34.612315 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] *********************************** 2026-03-25 05:34:34.612326 | orchestrator | Wednesday 25 March 2026 05:33:47 +0000 (0:00:01.474) 0:26:04.074 ******* 2026-03-25 05:34:34.612337 | orchestrator | ok: [testbed-node-0] 2026-03-25 05:34:34.612347 | orchestrator | 2026-03-25 05:34:34.612358 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************ 2026-03-25 05:34:34.612368 | orchestrator | Wednesday 25 March 2026 05:33:49 +0000 (0:00:02.052) 0:26:06.127 ******* 2026-03-25 05:34:34.612379 | orchestrator | ok: [testbed-node-0] 2026-03-25 05:34:34.612390 | orchestrator | 2026-03-25 05:34:34.612400 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] *************************************** 2026-03-25 05:34:34.612434 | orchestrator | Wednesday 25 March 2026 05:33:51 +0000 (0:00:01.984) 0:26:08.112 ******* 2026-03-25 05:34:34.612447 | orchestrator | ok: [testbed-node-0] 2026-03-25 05:34:34.612459 | orchestrator | 2026-03-25 05:34:34.612471 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ******************************************** 2026-03-25 05:34:34.612483 | orchestrator | Wednesday 25 March 2026 05:33:53 +0000 (0:00:02.337) 0:26:10.450 ******* 2026-03-25 05:34:34.612495 | orchestrator | changed: [testbed-node-0] 2026-03-25 05:34:34.612506 | orchestrator | 2026-03-25 
05:34:34.612518 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] ************************************** 2026-03-25 05:34:34.612530 | orchestrator | Wednesday 25 March 2026 05:33:57 +0000 (0:00:03.597) 0:26:14.047 ******* 2026-03-25 05:34:34.612542 | orchestrator | skipping: [testbed-node-0] 2026-03-25 05:34:34.612560 | orchestrator | 2026-03-25 05:34:34.612599 | orchestrator | PLAY [Upgrade ceph mgr nodes] ************************************************** 2026-03-25 05:34:34.612621 | orchestrator | 2026-03-25 05:34:34.612641 | orchestrator | TASK [Stop ceph mgr] *********************************************************** 2026-03-25 05:34:34.612660 | orchestrator | Wednesday 25 March 2026 05:33:58 +0000 (0:00:01.015) 0:26:15.063 ******* 2026-03-25 05:34:34.612679 | orchestrator | changed: [testbed-node-1] 2026-03-25 05:34:34.612692 | orchestrator | 2026-03-25 05:34:34.612704 | orchestrator | TASK [Mask ceph mgr systemd unit] ********************************************** 2026-03-25 05:34:34.612716 | orchestrator | Wednesday 25 March 2026 05:34:10 +0000 (0:00:12.647) 0:26:27.710 ******* 2026-03-25 05:34:34.612729 | orchestrator | changed: [testbed-node-1] 2026-03-25 05:34:34.612741 | orchestrator | 2026-03-25 05:34:34.612753 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-03-25 05:34:34.612763 | orchestrator | Wednesday 25 March 2026 05:34:12 +0000 (0:00:02.161) 0:26:29.872 ******* 2026-03-25 05:34:34.612774 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-1 2026-03-25 05:34:34.612785 | orchestrator | 2026-03-25 05:34:34.612795 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-03-25 05:34:34.612806 | orchestrator | Wednesday 25 March 2026 05:34:14 +0000 (0:00:01.149) 0:26:31.021 ******* 2026-03-25 05:34:34.612817 | orchestrator | ok: [testbed-node-1] 2026-03-25 05:34:34.612827 | orchestrator | 2026-03-25 
05:34:34.612838 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-03-25 05:34:34.612848 | orchestrator | Wednesday 25 March 2026 05:34:15 +0000 (0:00:01.508) 0:26:32.530 ******* 2026-03-25 05:34:34.612859 | orchestrator | ok: [testbed-node-1] 2026-03-25 05:34:34.612869 | orchestrator | 2026-03-25 05:34:34.612880 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-03-25 05:34:34.612891 | orchestrator | Wednesday 25 March 2026 05:34:16 +0000 (0:00:01.152) 0:26:33.682 ******* 2026-03-25 05:34:34.612901 | orchestrator | ok: [testbed-node-1] 2026-03-25 05:34:34.612912 | orchestrator | 2026-03-25 05:34:34.612922 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-03-25 05:34:34.612933 | orchestrator | Wednesday 25 March 2026 05:34:18 +0000 (0:00:01.454) 0:26:35.137 ******* 2026-03-25 05:34:34.612943 | orchestrator | ok: [testbed-node-1] 2026-03-25 05:34:34.612954 | orchestrator | 2026-03-25 05:34:34.612981 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-03-25 05:34:34.612992 | orchestrator | Wednesday 25 March 2026 05:34:19 +0000 (0:00:01.161) 0:26:36.298 ******* 2026-03-25 05:34:34.613003 | orchestrator | ok: [testbed-node-1] 2026-03-25 05:34:34.613013 | orchestrator | 2026-03-25 05:34:34.613024 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-03-25 05:34:34.613035 | orchestrator | Wednesday 25 March 2026 05:34:20 +0000 (0:00:01.160) 0:26:37.459 ******* 2026-03-25 05:34:34.613045 | orchestrator | ok: [testbed-node-1] 2026-03-25 05:34:34.613056 | orchestrator | 2026-03-25 05:34:34.613066 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-03-25 05:34:34.613077 | orchestrator | Wednesday 25 March 2026 05:34:21 +0000 (0:00:01.157) 0:26:38.616 ******* 
2026-03-25 05:34:34.613096 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:34:34.613107 | orchestrator | 2026-03-25 05:34:34.613118 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-03-25 05:34:34.613128 | orchestrator | Wednesday 25 March 2026 05:34:22 +0000 (0:00:01.211) 0:26:39.828 ******* 2026-03-25 05:34:34.613139 | orchestrator | ok: [testbed-node-1] 2026-03-25 05:34:34.613150 | orchestrator | 2026-03-25 05:34:34.613160 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-03-25 05:34:34.613171 | orchestrator | Wednesday 25 March 2026 05:34:23 +0000 (0:00:01.153) 0:26:40.981 ******* 2026-03-25 05:34:34.613182 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-25 05:34:34.613192 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-03-25 05:34:34.613203 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-25 05:34:34.613214 | orchestrator | 2026-03-25 05:34:34.613224 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-03-25 05:34:34.613235 | orchestrator | Wednesday 25 March 2026 05:34:25 +0000 (0:00:01.803) 0:26:42.785 ******* 2026-03-25 05:34:34.613245 | orchestrator | ok: [testbed-node-1] 2026-03-25 05:34:34.613256 | orchestrator | 2026-03-25 05:34:34.613273 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-03-25 05:34:34.613284 | orchestrator | Wednesday 25 March 2026 05:34:27 +0000 (0:00:01.276) 0:26:44.062 ******* 2026-03-25 05:34:34.613294 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-25 05:34:34.613305 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-03-25 05:34:34.613316 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => 
(item=testbed-node-2) 2026-03-25 05:34:34.613326 | orchestrator | 2026-03-25 05:34:34.613336 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-03-25 05:34:34.613347 | orchestrator | Wednesday 25 March 2026 05:34:29 +0000 (0:00:02.944) 0:26:47.006 ******* 2026-03-25 05:34:34.613358 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-03-25 05:34:34.613368 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-03-25 05:34:34.613379 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-03-25 05:34:34.613389 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:34:34.613400 | orchestrator | 2026-03-25 05:34:34.613411 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-03-25 05:34:34.613421 | orchestrator | Wednesday 25 March 2026 05:34:31 +0000 (0:00:01.455) 0:26:48.462 ******* 2026-03-25 05:34:34.613433 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-03-25 05:34:34.613447 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-03-25 05:34:34.613458 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-03-25 05:34:34.613469 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:34:34.613480 | orchestrator | 2026-03-25 05:34:34.613491 | orchestrator | TASK [ceph-facts : Set_fact running_mon - 
non_container] *********************** 2026-03-25 05:34:34.613502 | orchestrator | Wednesday 25 March 2026 05:34:33 +0000 (0:00:01.982) 0:26:50.445 ******* 2026-03-25 05:34:34.613515 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-25 05:34:34.613535 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-25 05:34:34.613554 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-25 05:34:54.708135 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:34:54.708253 | orchestrator | 2026-03-25 05:34:54.708271 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-03-25 05:34:54.708284 | orchestrator | Wednesday 25 March 2026 05:34:34 +0000 (0:00:01.166) 0:26:51.611 ******* 2026-03-25 05:34:54.708388 | orchestrator | ok: [testbed-node-1] => (item={'changed': False, 'stdout': 'f2f4f0f2e000', 
'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-25 05:34:27.555133', 'end': '2026-03-25 05:34:27.602727', 'delta': '0:00:00.047594', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['f2f4f0f2e000'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-03-25 05:34:54.708410 | orchestrator | ok: [testbed-node-1] => (item={'changed': False, 'stdout': '04618a84c691', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-25 05:34:28.098112', 'end': '2026-03-25 05:34:28.149655', 'delta': '0:00:00.051543', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['04618a84c691'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-03-25 05:34:54.708422 | orchestrator | ok: [testbed-node-1] => (item={'changed': False, 'stdout': 'da72f46e99c2', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-25 05:34:28.674541', 'end': '2026-03-25 05:34:28.720853', 'delta': '0:00:00.046312', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 
'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['da72f46e99c2'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-03-25 05:34:54.708433 | orchestrator | 2026-03-25 05:34:54.708445 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-03-25 05:34:54.708456 | orchestrator | Wednesday 25 March 2026 05:34:35 +0000 (0:00:01.208) 0:26:52.820 ******* 2026-03-25 05:34:54.708466 | orchestrator | ok: [testbed-node-1] 2026-03-25 05:34:54.708501 | orchestrator | 2026-03-25 05:34:54.708512 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-03-25 05:34:54.708523 | orchestrator | Wednesday 25 March 2026 05:34:37 +0000 (0:00:01.348) 0:26:54.169 ******* 2026-03-25 05:34:54.708534 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:34:54.708544 | orchestrator | 2026-03-25 05:34:54.708555 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-03-25 05:34:54.708566 | orchestrator | Wednesday 25 March 2026 05:34:38 +0000 (0:00:01.242) 0:26:55.411 ******* 2026-03-25 05:34:54.708576 | orchestrator | ok: [testbed-node-1] 2026-03-25 05:34:54.708587 | orchestrator | 2026-03-25 05:34:54.708633 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-03-25 05:34:54.708653 | orchestrator | Wednesday 25 March 2026 05:34:39 +0000 (0:00:01.239) 0:26:56.650 ******* 2026-03-25 05:34:54.708670 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] 2026-03-25 05:34:54.708683 | orchestrator | 2026-03-25 05:34:54.708695 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-25 05:34:54.708707 | orchestrator | Wednesday 25 March 2026 05:34:41 +0000 (0:00:01.946) 0:26:58.597 ******* 2026-03-25 
05:34:54.708720 | orchestrator | ok: [testbed-node-1] 2026-03-25 05:34:54.708731 | orchestrator | 2026-03-25 05:34:54.708744 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-03-25 05:34:54.708756 | orchestrator | Wednesday 25 March 2026 05:34:42 +0000 (0:00:01.164) 0:26:59.761 ******* 2026-03-25 05:34:54.708768 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:34:54.708780 | orchestrator | 2026-03-25 05:34:54.708792 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-03-25 05:34:54.708804 | orchestrator | Wednesday 25 March 2026 05:34:43 +0000 (0:00:01.227) 0:27:00.989 ******* 2026-03-25 05:34:54.708816 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:34:54.708828 | orchestrator | 2026-03-25 05:34:54.708840 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-25 05:34:54.708853 | orchestrator | Wednesday 25 March 2026 05:34:45 +0000 (0:00:01.279) 0:27:02.268 ******* 2026-03-25 05:34:54.708865 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:34:54.708878 | orchestrator | 2026-03-25 05:34:54.708910 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-03-25 05:34:54.708922 | orchestrator | Wednesday 25 March 2026 05:34:46 +0000 (0:00:01.176) 0:27:03.445 ******* 2026-03-25 05:34:54.708933 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:34:54.708943 | orchestrator | 2026-03-25 05:34:54.708954 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-03-25 05:34:54.708965 | orchestrator | Wednesday 25 March 2026 05:34:47 +0000 (0:00:01.143) 0:27:04.589 ******* 2026-03-25 05:34:54.708976 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:34:54.708987 | orchestrator | 2026-03-25 05:34:54.708997 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] 
*************************** 2026-03-25 05:34:54.709008 | orchestrator | Wednesday 25 March 2026 05:34:48 +0000 (0:00:01.133) 0:27:05.722 ******* 2026-03-25 05:34:54.709018 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:34:54.709029 | orchestrator | 2026-03-25 05:34:54.709039 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-03-25 05:34:54.709050 | orchestrator | Wednesday 25 March 2026 05:34:49 +0000 (0:00:01.164) 0:27:06.887 ******* 2026-03-25 05:34:54.709060 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:34:54.709071 | orchestrator | 2026-03-25 05:34:54.709082 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-03-25 05:34:54.709101 | orchestrator | Wednesday 25 March 2026 05:34:51 +0000 (0:00:01.194) 0:27:08.082 ******* 2026-03-25 05:34:54.709112 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:34:54.709122 | orchestrator | 2026-03-25 05:34:54.709133 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-03-25 05:34:54.709144 | orchestrator | Wednesday 25 March 2026 05:34:52 +0000 (0:00:01.199) 0:27:09.282 ******* 2026-03-25 05:34:54.709164 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:34:54.709175 | orchestrator | 2026-03-25 05:34:54.709186 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-03-25 05:34:54.709196 | orchestrator | Wednesday 25 March 2026 05:34:53 +0000 (0:00:01.174) 0:27:10.456 ******* 2026-03-25 05:34:54.709208 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 
Bytes', 'host': '', 'holders': []}})  2026-03-25 05:34:54.709223 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-25 05:34:54.709234 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-25 05:34:54.709247 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-25-01-43-05-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-03-25 05:34:54.709260 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': 
'', 'holders': []}})  2026-03-25 05:34:54.709271 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-25 05:34:54.709290 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-25 05:34:56.027019 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2a85f599-c628-4cff-bf05-087f83983aef', 'scsi-SQEMU_QEMU_HARDDISK_2a85f599-c628-4cff-bf05-087f83983aef'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '2a85f599', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2a85f599-c628-4cff-bf05-087f83983aef-part16', 'scsi-SQEMU_QEMU_HARDDISK_2a85f599-c628-4cff-bf05-087f83983aef-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2a85f599-c628-4cff-bf05-087f83983aef-part14', 
'scsi-SQEMU_QEMU_HARDDISK_2a85f599-c628-4cff-bf05-087f83983aef-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2a85f599-c628-4cff-bf05-087f83983aef-part15', 'scsi-SQEMU_QEMU_HARDDISK_2a85f599-c628-4cff-bf05-087f83983aef-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2a85f599-c628-4cff-bf05-087f83983aef-part1', 'scsi-SQEMU_QEMU_HARDDISK_2a85f599-c628-4cff-bf05-087f83983aef-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-03-25 05:34:56.027148 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-25 05:34:56.027167 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-25 05:34:56.027180 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:34:56.027194 | orchestrator | 2026-03-25 05:34:56.027206 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-03-25 05:34:56.027218 | orchestrator | Wednesday 25 March 2026 05:34:54 +0000 (0:00:01.248) 0:27:11.704 ******* 2026-03-25 05:34:56.027231 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 05:34:56.027263 | 
orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 05:34:56.027290 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 05:34:56.027303 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-25-01-43-05-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 
82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 05:34:56.027316 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 05:34:56.027327 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 05:34:56.027338 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 
'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 05:34:56.027367 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2a85f599-c628-4cff-bf05-087f83983aef', 'scsi-SQEMU_QEMU_HARDDISK_2a85f599-c628-4cff-bf05-087f83983aef'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '2a85f599', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2a85f599-c628-4cff-bf05-087f83983aef-part16', 'scsi-SQEMU_QEMU_HARDDISK_2a85f599-c628-4cff-bf05-087f83983aef-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2a85f599-c628-4cff-bf05-087f83983aef-part14', 'scsi-SQEMU_QEMU_HARDDISK_2a85f599-c628-4cff-bf05-087f83983aef-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2a85f599-c628-4cff-bf05-087f83983aef-part15', 'scsi-SQEMU_QEMU_HARDDISK_2a85f599-c628-4cff-bf05-087f83983aef-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2a85f599-c628-4cff-bf05-087f83983aef-part1', 'scsi-SQEMU_QEMU_HARDDISK_2a85f599-c628-4cff-bf05-087f83983aef-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 
'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 05:35:30.244214 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 05:35:30.244330 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 05:35:30.244346 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:35:30.244361 | orchestrator | 2026-03-25 05:35:30.244373 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-03-25 05:35:30.244385 | 
orchestrator | Wednesday 25 March 2026 05:34:56 +0000 (0:00:01.326) 0:27:13.030 *******
2026-03-25 05:35:30.244396 | orchestrator | ok: [testbed-node-1]
2026-03-25 05:35:30.244408 | orchestrator |
2026-03-25 05:35:30.244419 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-03-25 05:35:30.244429 | orchestrator | Wednesday 25 March 2026 05:34:57 +0000 (0:00:01.498) 0:27:14.529 *******
2026-03-25 05:35:30.244440 | orchestrator | ok: [testbed-node-1]
2026-03-25 05:35:30.244451 | orchestrator |
2026-03-25 05:35:30.244461 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-03-25 05:35:30.244472 | orchestrator | Wednesday 25 March 2026 05:34:58 +0000 (0:00:01.123) 0:27:15.653 *******
2026-03-25 05:35:30.244483 | orchestrator | ok: [testbed-node-1]
2026-03-25 05:35:30.244517 | orchestrator |
2026-03-25 05:35:30.244528 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-03-25 05:35:30.244539 | orchestrator | Wednesday 25 March 2026 05:35:00 +0000 (0:00:01.524) 0:27:17.177 *******
2026-03-25 05:35:30.244556 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:35:30.244575 | orchestrator |
2026-03-25 05:35:30.244594 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-03-25 05:35:30.244613 | orchestrator | Wednesday 25 March 2026 05:35:01 +0000 (0:00:01.126) 0:27:18.304 *******
2026-03-25 05:35:30.244657 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:35:30.244675 | orchestrator |
2026-03-25 05:35:30.244692 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-03-25 05:35:30.244709 | orchestrator | Wednesday 25 March 2026 05:35:02 +0000 (0:00:01.248) 0:27:19.553 *******
2026-03-25 05:35:30.244725 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:35:30.244741 | orchestrator |
2026-03-25 05:35:30.244758 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-03-25 05:35:30.244775 | orchestrator | Wednesday 25 March 2026 05:35:03 +0000 (0:00:01.142) 0:27:20.696 *******
2026-03-25 05:35:30.244792 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0)
2026-03-25 05:35:30.244809 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-03-25 05:35:30.244827 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2)
2026-03-25 05:35:30.244844 | orchestrator |
2026-03-25 05:35:30.244882 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-03-25 05:35:30.244902 | orchestrator | Wednesday 25 March 2026 05:35:05 +0000 (0:00:01.705) 0:27:22.402 *******
2026-03-25 05:35:30.244920 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-03-25 05:35:30.244939 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-03-25 05:35:30.244956 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-03-25 05:35:30.244975 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:35:30.244994 | orchestrator |
2026-03-25 05:35:30.245014 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-03-25 05:35:30.245034 | orchestrator | Wednesday 25 March 2026 05:35:06 +0000 (0:00:01.162) 0:27:23.564 *******
2026-03-25 05:35:30.245049 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:35:30.245061 | orchestrator |
2026-03-25 05:35:30.245074 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-03-25 05:35:30.245086 | orchestrator | Wednesday 25 March 2026 05:35:07 +0000 (0:00:01.138) 0:27:24.703 *******
2026-03-25 05:35:30.245096 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-25 05:35:30.245107 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-03-25 05:35:30.245118 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-25 05:35:30.245129 | orchestrator | ok: [testbed-node-1 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-03-25 05:35:30.245139 | orchestrator | ok: [testbed-node-1 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-03-25 05:35:30.245150 | orchestrator | ok: [testbed-node-1 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-03-25 05:35:30.245180 | orchestrator | ok: [testbed-node-1 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-03-25 05:35:30.245192 | orchestrator |
2026-03-25 05:35:30.245203 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-03-25 05:35:30.245213 | orchestrator | Wednesday 25 March 2026 05:35:09 +0000 (0:00:02.067) 0:27:26.771 *******
2026-03-25 05:35:30.245224 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-25 05:35:30.245234 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-03-25 05:35:30.245245 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-25 05:35:30.245256 | orchestrator | ok: [testbed-node-1 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-03-25 05:35:30.245278 | orchestrator | ok: [testbed-node-1 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-03-25 05:35:30.245289 | orchestrator | ok: [testbed-node-1 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-03-25 05:35:30.245299 | orchestrator | ok: [testbed-node-1 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-03-25 05:35:30.245310 | orchestrator |
2026-03-25 05:35:30.245320 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-03-25 05:35:30.245331 | orchestrator | Wednesday 25 March 2026 05:35:11 +0000 (0:00:02.124) 0:27:28.896 *******
2026-03-25 05:35:30.245341 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-1
2026-03-25 05:35:30.245379 | orchestrator |
2026-03-25 05:35:30.245391 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-03-25 05:35:30.245401 | orchestrator | Wednesday 25 March 2026 05:35:13 +0000 (0:00:01.146) 0:27:30.043 *******
2026-03-25 05:35:30.245412 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-1
2026-03-25 05:35:30.245423 | orchestrator |
2026-03-25 05:35:30.245434 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-03-25 05:35:30.245445 | orchestrator | Wednesday 25 March 2026 05:35:14 +0000 (0:00:01.091) 0:27:31.134 *******
2026-03-25 05:35:30.245456 | orchestrator | ok: [testbed-node-1]
2026-03-25 05:35:30.245466 | orchestrator |
2026-03-25 05:35:30.245477 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-03-25 05:35:30.245488 | orchestrator | Wednesday 25 March 2026 05:35:15 +0000 (0:00:01.556) 0:27:32.691 *******
2026-03-25 05:35:30.245498 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:35:30.245509 | orchestrator |
2026-03-25 05:35:30.245520 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-03-25 05:35:30.245531 | orchestrator | Wednesday 25 March 2026 05:35:16 +0000 (0:00:01.159) 0:27:33.850 *******
2026-03-25 05:35:30.245541 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:35:30.245552 | orchestrator |
2026-03-25 05:35:30.245563 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-03-25 05:35:30.245573 | orchestrator | Wednesday 25 March 2026 05:35:17 +0000 (0:00:01.144) 0:27:34.995 *******
2026-03-25 05:35:30.245584 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:35:30.245595 | orchestrator |
2026-03-25 05:35:30.245605 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-03-25 05:35:30.245616 | orchestrator | Wednesday 25 March 2026 05:35:19 +0000 (0:00:01.142) 0:27:36.137 *******
2026-03-25 05:35:30.245673 | orchestrator | ok: [testbed-node-1]
2026-03-25 05:35:30.245685 | orchestrator |
2026-03-25 05:35:30.245695 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-03-25 05:35:30.245705 | orchestrator | Wednesday 25 March 2026 05:35:20 +0000 (0:00:01.554) 0:27:37.691 *******
2026-03-25 05:35:30.245716 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:35:30.245727 | orchestrator |
2026-03-25 05:35:30.245737 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-03-25 05:35:30.245748 | orchestrator | Wednesday 25 March 2026 05:35:21 +0000 (0:00:01.117) 0:27:38.809 *******
2026-03-25 05:35:30.245758 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:35:30.245769 | orchestrator |
2026-03-25 05:35:30.245779 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-03-25 05:35:30.245797 | orchestrator | Wednesday 25 March 2026 05:35:22 +0000 (0:00:01.172) 0:27:39.981 *******
2026-03-25 05:35:30.245808 | orchestrator | ok: [testbed-node-1]
2026-03-25 05:35:30.245819 | orchestrator |
2026-03-25 05:35:30.245829 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-03-25 05:35:30.245840 | orchestrator | Wednesday 25 March 2026 05:35:24 +0000 (0:00:01.631) 0:27:41.613 *******
2026-03-25 05:35:30.245850 | orchestrator | ok: [testbed-node-1]
2026-03-25 05:35:30.245861 | orchestrator |
2026-03-25 05:35:30.245872 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-03-25 05:35:30.245890 | orchestrator | Wednesday 25 March 2026 05:35:26 +0000 (0:00:01.637) 0:27:43.251 *******
2026-03-25 05:35:30.245901 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:35:30.245911 | orchestrator |
2026-03-25 05:35:30.245922 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-03-25 05:35:30.245932 | orchestrator | Wednesday 25 March 2026 05:35:27 +0000 (0:00:00.827) 0:27:44.079 *******
2026-03-25 05:35:30.245943 | orchestrator | ok: [testbed-node-1]
2026-03-25 05:35:30.245953 | orchestrator |
2026-03-25 05:35:30.245964 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-03-25 05:35:30.245974 | orchestrator | Wednesday 25 March 2026 05:35:27 +0000 (0:00:00.808) 0:27:44.888 *******
2026-03-25 05:35:30.245985 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:35:30.245995 | orchestrator |
2026-03-25 05:35:30.246006 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-03-25 05:35:30.246084 | orchestrator | Wednesday 25 March 2026 05:35:28 +0000 (0:00:00.781) 0:27:45.669 *******
2026-03-25 05:35:30.246098 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:35:30.246109 | orchestrator |
2026-03-25 05:35:30.246120 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-03-25 05:35:30.246130 | orchestrator | Wednesday 25 March 2026 05:35:29 +0000 (0:00:00.799) 0:27:46.469 *******
2026-03-25 05:35:30.246151 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:36:11.294479 | orchestrator |
2026-03-25 05:36:11.294598 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-03-25 05:36:11.294616 | orchestrator | Wednesday 25 March 2026 05:35:30 +0000 (0:00:00.779) 0:27:47.248 *******
2026-03-25 05:36:11.294629 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:36:11.294642 | orchestrator |
2026-03-25 05:36:11.294653 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-03-25 05:36:11.294717 | orchestrator | Wednesday 25 March 2026 05:35:31 +0000 (0:00:00.807) 0:27:48.056 *******
2026-03-25 05:36:11.294728 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:36:11.294739 | orchestrator |
2026-03-25 05:36:11.294750 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-03-25 05:36:11.294761 | orchestrator | Wednesday 25 March 2026 05:35:31 +0000 (0:00:00.809) 0:27:48.865 *******
2026-03-25 05:36:11.294772 | orchestrator | ok: [testbed-node-1]
2026-03-25 05:36:11.294784 | orchestrator |
2026-03-25 05:36:11.294795 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-03-25 05:36:11.294806 | orchestrator | Wednesday 25 March 2026 05:35:32 +0000 (0:00:00.835) 0:27:49.701 *******
2026-03-25 05:36:11.294817 | orchestrator | ok: [testbed-node-1]
2026-03-25 05:36:11.294828 | orchestrator |
2026-03-25 05:36:11.294839 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-03-25 05:36:11.294850 | orchestrator | Wednesday 25 March 2026 05:35:33 +0000 (0:00:00.859) 0:27:50.561 *******
2026-03-25 05:36:11.294860 | orchestrator | ok: [testbed-node-1]
2026-03-25 05:36:11.294871 | orchestrator |
2026-03-25 05:36:11.294882 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-03-25 05:36:11.294893 | orchestrator | Wednesday 25 March 2026 05:35:34 +0000 (0:00:00.812) 0:27:51.373 *******
2026-03-25 05:36:11.294904 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:36:11.294915 | orchestrator |
2026-03-25 05:36:11.294926 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-03-25 05:36:11.294937 | orchestrator | Wednesday 25 March 2026 05:35:35 +0000 (0:00:00.801) 0:27:52.175 *******
2026-03-25 05:36:11.294948 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:36:11.294958 | orchestrator |
2026-03-25 05:36:11.294970 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-03-25 05:36:11.294982 | orchestrator | Wednesday 25 March 2026 05:35:35 +0000 (0:00:00.805) 0:27:52.981 *******
2026-03-25 05:36:11.294993 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:36:11.295004 | orchestrator |
2026-03-25 05:36:11.295016 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-03-25 05:36:11.295054 | orchestrator | Wednesday 25 March 2026 05:35:36 +0000 (0:00:00.895) 0:27:53.877 *******
2026-03-25 05:36:11.295067 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:36:11.295080 | orchestrator |
2026-03-25 05:36:11.295092 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-03-25 05:36:11.295105 | orchestrator | Wednesday 25 March 2026 05:35:37 +0000 (0:00:00.814) 0:27:54.691 *******
2026-03-25 05:36:11.295117 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:36:11.295129 | orchestrator |
2026-03-25 05:36:11.295142 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-03-25 05:36:11.295154 | orchestrator | Wednesday 25 March 2026 05:35:38 +0000 (0:00:00.780) 0:27:55.472 *******
2026-03-25 05:36:11.295166 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:36:11.295179 | orchestrator |
2026-03-25 05:36:11.295191 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-03-25 05:36:11.295204 | orchestrator | Wednesday 25 March 2026 05:35:39 +0000 (0:00:00.766) 0:27:56.238 *******
2026-03-25 05:36:11.295216 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:36:11.295228 | orchestrator |
2026-03-25 05:36:11.295240 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-03-25 05:36:11.295252 | orchestrator | Wednesday 25 March 2026 05:35:40 +0000 (0:00:00.784) 0:27:57.023 *******
2026-03-25 05:36:11.295264 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:36:11.295276 | orchestrator |
2026-03-25 05:36:11.295289 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-03-25 05:36:11.295301 | orchestrator | Wednesday 25 March 2026 05:35:40 +0000 (0:00:00.796) 0:27:57.819 *******
2026-03-25 05:36:11.295330 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:36:11.295343 | orchestrator |
2026-03-25 05:36:11.295355 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-03-25 05:36:11.295368 | orchestrator | Wednesday 25 March 2026 05:35:41 +0000 (0:00:00.773) 0:27:58.592 *******
2026-03-25 05:36:11.295380 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:36:11.295391 | orchestrator |
2026-03-25 05:36:11.295401 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-03-25 05:36:11.295412 | orchestrator | Wednesday 25 March 2026 05:35:42 +0000 (0:00:00.822) 0:27:59.415 *******
2026-03-25 05:36:11.295423 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:36:11.295434 | orchestrator |
2026-03-25 05:36:11.295445 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-03-25 05:36:11.295456 | orchestrator | Wednesday 25 March 2026 05:35:43 +0000 (0:00:00.786) 0:28:00.201 *******
2026-03-25 05:36:11.295467 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:36:11.295478 | orchestrator |
2026-03-25 05:36:11.295489 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-03-25 05:36:11.295499 | orchestrator | Wednesday 25 March 2026 05:35:43 +0000 (0:00:00.763) 0:28:00.965 *******
2026-03-25 05:36:11.295510 | orchestrator | ok: [testbed-node-1]
2026-03-25 05:36:11.295521 | orchestrator |
2026-03-25 05:36:11.295532 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-03-25 05:36:11.295543 | orchestrator | Wednesday 25 March 2026 05:35:45 +0000 (0:00:02.183) 0:28:02.616 *******
2026-03-25 05:36:11.295554 | orchestrator | ok: [testbed-node-1]
2026-03-25 05:36:11.295564 | orchestrator |
2026-03-25 05:36:11.295575 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-03-25 05:36:11.295586 | orchestrator | Wednesday 25 March 2026 05:35:47 +0000 (0:00:02.183) 0:28:04.799 *******
2026-03-25 05:36:11.295597 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-1
2026-03-25 05:36:11.295609 | orchestrator |
2026-03-25 05:36:11.295639 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-03-25 05:36:11.295651 | orchestrator | Wednesday 25 March 2026 05:35:49 +0000 (0:00:01.237) 0:28:06.037 *******
2026-03-25 05:36:11.295694 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:36:11.295705 | orchestrator |
2026-03-25 05:36:11.295725 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-03-25 05:36:11.295736 | orchestrator | Wednesday 25 March 2026 05:35:50 +0000 (0:00:01.147) 0:28:07.184 *******
2026-03-25 05:36:11.295747 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:36:11.295758 | orchestrator |
2026-03-25 05:36:11.295768 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-03-25 05:36:11.295779 | orchestrator | Wednesday 25 March 2026 05:35:51 +0000 (0:00:01.142) 0:28:08.328 *******
2026-03-25 05:36:11.295790 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-25 05:36:11.295801 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-25 05:36:11.295812 | orchestrator |
2026-03-25 05:36:11.295823 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-03-25 05:36:11.295833 | orchestrator | Wednesday 25 March 2026 05:35:53 +0000 (0:00:01.814) 0:28:10.142 *******
2026-03-25 05:36:11.295844 | orchestrator | ok: [testbed-node-1]
2026-03-25 05:36:11.295855 | orchestrator |
2026-03-25 05:36:11.295866 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-03-25 05:36:11.295877 | orchestrator | Wednesday 25 March 2026 05:35:54 +0000 (0:00:01.527) 0:28:11.670 *******
2026-03-25 05:36:11.295888 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:36:11.295899 | orchestrator |
2026-03-25 05:36:11.295910 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-03-25 05:36:11.295920 | orchestrator | Wednesday 25 March 2026 05:35:55 +0000 (0:00:01.239) 0:28:12.910 *******
2026-03-25 05:36:11.295931 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:36:11.295942 | orchestrator |
2026-03-25 05:36:11.295953 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-03-25 05:36:11.295963 | orchestrator | Wednesday 25 March 2026 05:35:56 +0000 (0:00:00.782) 0:28:13.692 *******
2026-03-25 05:36:11.295974 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:36:11.295985 | orchestrator |
2026-03-25 05:36:11.295996 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-03-25 05:36:11.296007 | orchestrator | Wednesday 25 March 2026 05:35:57 +0000 (0:00:00.789) 0:28:14.482 *******
2026-03-25 05:36:11.296018 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-1
2026-03-25 05:36:11.296028 | orchestrator |
2026-03-25 05:36:11.296039 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-03-25 05:36:11.296050 | orchestrator | Wednesday 25 March 2026 05:35:58 +0000 (0:00:01.145) 0:28:15.628 *******
2026-03-25 05:36:11.296062 | orchestrator | ok: [testbed-node-1]
2026-03-25 05:36:11.296072 | orchestrator |
2026-03-25 05:36:11.296083 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-03-25 05:36:11.296094 | orchestrator | Wednesday 25 March 2026 05:36:00 +0000 (0:00:01.738) 0:28:17.366 *******
2026-03-25 05:36:11.296105 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-03-25 05:36:11.296116 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)
2026-03-25 05:36:11.296127 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)
2026-03-25 05:36:11.296138 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:36:11.296149 | orchestrator |
2026-03-25 05:36:11.296160 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-03-25 05:36:11.296170 | orchestrator | Wednesday 25 March 2026 05:36:01 +0000 (0:00:01.158) 0:28:18.525 *******
2026-03-25 05:36:11.296181 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:36:11.296192 | orchestrator |
2026-03-25 05:36:11.296203 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-03-25 05:36:11.296214 | orchestrator | Wednesday 25 March 2026 05:36:02 +0000 (0:00:01.117) 0:28:19.642 *******
2026-03-25 05:36:11.296230 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:36:11.296241 | orchestrator |
2026-03-25 05:36:11.296252 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-03-25 05:36:11.296270 | orchestrator | Wednesday 25 March 2026 05:36:03 +0000 (0:00:01.202) 0:28:20.845 *******
2026-03-25 05:36:11.296281 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:36:11.296292 | orchestrator |
2026-03-25 05:36:11.296303 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-03-25 05:36:11.296314 | orchestrator | Wednesday 25 March 2026 05:36:05 +0000 (0:00:01.197) 0:28:22.043 *******
2026-03-25 05:36:11.296324 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:36:11.296335 | orchestrator |
2026-03-25 05:36:11.296346 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-03-25 05:36:11.296357 | orchestrator | Wednesday 25 March 2026 05:36:06 +0000 (0:00:01.204) 0:28:23.248 *******
2026-03-25 05:36:11.296368 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:36:11.296379 | orchestrator |
2026-03-25 05:36:11.296390 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-03-25 05:36:11.296400 | orchestrator | Wednesday 25 March 2026 05:36:07 +0000 (0:00:00.803) 0:28:24.052 *******
2026-03-25 05:36:11.296411 | orchestrator | ok: [testbed-node-1]
2026-03-25 05:36:11.296422 | orchestrator |
2026-03-25 05:36:11.296433 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-03-25 05:36:11.296444 | orchestrator | Wednesday 25 March 2026 05:36:09 +0000 (0:00:02.222) 0:28:26.274 *******
2026-03-25 05:36:11.296454 | orchestrator | ok: [testbed-node-1]
2026-03-25 05:36:11.296465 | orchestrator |
2026-03-25 05:36:11.296476 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-03-25 05:36:11.296487 | orchestrator | Wednesday 25 March 2026 05:36:10 +0000 (0:00:00.779) 0:28:27.054 *******
2026-03-25 05:36:11.296498 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-1
2026-03-25 05:36:11.296509 | orchestrator |
2026-03-25 05:36:11.296527 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-03-25 05:36:48.185286 | orchestrator | Wednesday 25 March 2026 05:36:11 +0000 (0:00:01.231) 0:28:28.286 *******
2026-03-25 05:36:48.185419 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:36:48.185436 | orchestrator |
2026-03-25 05:36:48.185448 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-03-25 05:36:48.185458 | orchestrator | Wednesday 25 March 2026 05:36:12 +0000 (0:00:01.160) 0:28:29.447 *******
2026-03-25 05:36:48.185468 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:36:48.185478 | orchestrator |
2026-03-25 05:36:48.185489 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-03-25 05:36:48.185499 | orchestrator | Wednesday 25 March 2026 05:36:13 +0000 (0:00:01.134) 0:28:30.581 *******
2026-03-25 05:36:48.185509 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:36:48.185519 | orchestrator |
2026-03-25 05:36:48.185529 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-03-25 05:36:48.185539 | orchestrator | Wednesday 25 March 2026 05:36:14 +0000 (0:00:01.197) 0:28:31.779 *******
2026-03-25 05:36:48.185548 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:36:48.185558 | orchestrator |
2026-03-25 05:36:48.185568 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-03-25 05:36:48.185577 | orchestrator | Wednesday 25 March 2026 05:36:15 +0000 (0:00:01.141) 0:28:32.920 *******
2026-03-25 05:36:48.185587 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:36:48.185597 | orchestrator |
2026-03-25 05:36:48.185606 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-03-25 05:36:48.185616 | orchestrator | Wednesday 25 March 2026 05:36:17 +0000 (0:00:01.176) 0:28:34.096 *******
2026-03-25 05:36:48.185625 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:36:48.185635 | orchestrator |
2026-03-25 05:36:48.185645 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-03-25 05:36:48.185654 | orchestrator | Wednesday 25 March 2026 05:36:18 +0000 (0:00:01.198) 0:28:35.295 *******
2026-03-25 05:36:48.185664 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:36:48.185674 | orchestrator |
2026-03-25 05:36:48.185761 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-03-25 05:36:48.185774 | orchestrator | Wednesday 25 March 2026 05:36:19 +0000 (0:00:01.167) 0:28:36.462 *******
2026-03-25 05:36:48.185783 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:36:48.185793 | orchestrator |
2026-03-25 05:36:48.185804 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-03-25 05:36:48.185817 | orchestrator | Wednesday 25 March 2026 05:36:20 +0000 (0:00:01.138) 0:28:37.600 *******
2026-03-25 05:36:48.185827 | orchestrator | ok: [testbed-node-1]
2026-03-25 05:36:48.185842 | orchestrator |
2026-03-25 05:36:48.185859 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-03-25 05:36:48.185884 | orchestrator | Wednesday 25 March 2026 05:36:21 +0000 (0:00:00.813) 0:28:38.414 *******
2026-03-25 05:36:48.185902 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-1
2026-03-25 05:36:48.185919 | orchestrator |
2026-03-25 05:36:48.185936 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-03-25 05:36:48.185953 | orchestrator | Wednesday 25 March 2026 05:36:22 +0000 (0:00:01.117) 0:28:39.532 *******
2026-03-25 05:36:48.185969 | orchestrator | ok: [testbed-node-1] => (item=/etc/ceph)
2026-03-25 05:36:48.185986 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/)
2026-03-25 05:36:48.186003 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/mon)
2026-03-25 05:36:48.186091 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/osd)
2026-03-25 05:36:48.186113 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/mds)
2026-03-25 05:36:48.186130 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/tmp)
2026-03-25 05:36:48.186147 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/crash)
2026-03-25 05:36:48.186164 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/radosgw)
2026-03-25 05:36:48.186182 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-25 05:36:48.186217 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-25 05:36:48.186237 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-25 05:36:48.186253 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-25 05:36:48.186271 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-25 05:36:48.186282 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-25 05:36:48.186292 | orchestrator | ok: [testbed-node-1] => (item=/var/run/ceph)
2026-03-25 05:36:48.186301 | orchestrator | ok: [testbed-node-1] => (item=/var/log/ceph)
2026-03-25 05:36:48.186311 | orchestrator |
2026-03-25 05:36:48.186321 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-03-25 05:36:48.186330 | orchestrator | Wednesday 25 March 2026 05:36:28 +0000 (0:00:06.342) 0:28:45.874 *******
2026-03-25 05:36:48.186340 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:36:48.186349 | orchestrator |
2026-03-25 05:36:48.186359 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-03-25 05:36:48.186369 | orchestrator | Wednesday 25 March 2026 05:36:29 +0000 (0:00:00.791) 0:28:46.667 *******
2026-03-25 05:36:48.186378 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:36:48.186388 | orchestrator |
2026-03-25 05:36:48.186397 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-03-25 05:36:48.186407 | orchestrator | Wednesday 25 March 2026 05:36:30 +0000 (0:00:00.814) 0:28:47.481 *******
2026-03-25 05:36:48.186416 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:36:48.186426 | orchestrator |
2026-03-25 05:36:48.186436 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-03-25 05:36:48.186445 | orchestrator | Wednesday 25 March 2026 05:36:31 +0000 (0:00:00.802) 0:28:48.283 *******
2026-03-25 05:36:48.186455 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:36:48.186465 | orchestrator |
2026-03-25 05:36:48.186474 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-03-25 05:36:48.186517 | orchestrator | Wednesday 25 March 2026 05:36:32 +0000 (0:00:00.776) 0:28:49.059 *******
2026-03-25 05:36:48.186528 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:36:48.186538 | orchestrator |
2026-03-25 05:36:48.186547 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-03-25 05:36:48.186557 | orchestrator | Wednesday 25 March 2026 05:36:32 +0000 (0:00:00.797) 0:28:49.856 *******
2026-03-25 05:36:48.186566 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:36:48.186576 | orchestrator |
2026-03-25 05:36:48.186586 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-03-25 05:36:48.186595 | orchestrator | Wednesday 25 March 2026 05:36:33 +0000 (0:00:00.778) 0:28:50.635 *******
2026-03-25 05:36:48.186605 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:36:48.186614 | orchestrator |
2026-03-25 05:36:48.186624 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-03-25 05:36:48.186634 | orchestrator | Wednesday 25 March 2026 05:36:34 +0000 (0:00:00.793) 0:28:51.429 *******
2026-03-25 05:36:48.186643 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:36:48.186653 | orchestrator |
2026-03-25 05:36:48.186662 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-03-25 05:36:48.186672 | orchestrator | Wednesday 25 March 2026 05:36:35 +0000 (0:00:00.776) 0:28:52.206 *******
2026-03-25 05:36:48.186682 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:36:48.186717 | orchestrator |
2026-03-25 05:36:48.186728 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-03-25 05:36:48.186737 | orchestrator | Wednesday 25 March 2026 05:36:35 +0000 (0:00:00.806) 0:28:53.012 *******
2026-03-25 05:36:48.186747 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:36:48.186756 | orchestrator |
2026-03-25 05:36:48.186766 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-03-25 05:36:48.186776 | orchestrator | Wednesday 25 March 2026 05:36:36 +0000 (0:00:00.779) 0:28:53.792 *******
2026-03-25 05:36:48.186790 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:36:48.186806 | orchestrator |
2026-03-25 05:36:48.186821 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-03-25 05:36:48.186837 | orchestrator | Wednesday 25 March 2026 05:36:37 +0000 (0:00:00.819) 0:28:54.612 *******
2026-03-25 05:36:48.186851 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:36:48.186866 | orchestrator |
2026-03-25 05:36:48.186883 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-03-25 05:36:48.186898 | orchestrator | Wednesday 25 March 2026 05:36:38 +0000 (0:00:00.779) 0:28:55.391 *******
2026-03-25 05:36:48.186914 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:36:48.186931 | orchestrator |
2026-03-25 05:36:48.186948 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-03-25 05:36:48.186965 | orchestrator | Wednesday 25 March 2026 05:36:39 +0000 (0:00:00.986) 0:28:56.378 *******
2026-03-25 05:36:48.186982 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:36:48.186999 | orchestrator |
2026-03-25 05:36:48.187014 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-03-25 05:36:48.187030 | orchestrator | Wednesday 25 March 2026 05:36:40 +0000 (0:00:00.782) 0:28:57.161 *******
2026-03-25 05:36:48.187042 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:36:48.187051 | orchestrator |
2026-03-25 05:36:48.187061 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-03-25 05:36:48.187070 | orchestrator | Wednesday 25 March 2026 05:36:41 +0000 (0:00:00.971) 0:28:58.132 *******
2026-03-25 05:36:48.187079 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:36:48.187089 | orchestrator |
2026-03-25 05:36:48.187098 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-03-25 05:36:48.187108 | orchestrator | Wednesday 25 March 2026 05:36:41 +0000 (0:00:00.757) 0:28:58.890 *******
2026-03-25 05:36:48.187117 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:36:48.187137 | orchestrator |
2026-03-25 05:36:48.187146 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-03-25 05:36:48.187164 | orchestrator | Wednesday 25 March 2026 05:36:42 +0000 (0:00:00.779) 0:28:59.670 *******
2026-03-25 05:36:48.187174 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:36:48.187184 | orchestrator |
2026-03-25 05:36:48.187193 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-03-25 05:36:48.187202 | orchestrator | Wednesday 25 March 2026 05:36:43 +0000 (0:00:00.795) 0:29:00.466 *******
2026-03-25 05:36:48.187212 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:36:48.187221 | orchestrator |
2026-03-25 05:36:48.187231 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-03-25 05:36:48.187240 | orchestrator | Wednesday 25 March 2026 05:36:44 +0000 (0:00:00.811) 0:29:01.278 *******
2026-03-25 05:36:48.187250 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:36:48.187264 | orchestrator |
2026-03-25 05:36:48.187281 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-03-25 05:36:48.187296 | orchestrator | Wednesday 25 March 2026 05:36:45 +0000 (0:00:00.881) 0:29:02.159 *******
2026-03-25 05:36:48.187312 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:36:48.187328 | orchestrator |
2026-03-25 05:36:48.187342 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-03-25 05:36:48.187359 | orchestrator | Wednesday 25 March 2026 05:36:45 +0000 (0:00:00.811) 0:29:02.971 *******
2026-03-25 05:36:48.187375 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-03-25 05:36:48.187391 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-03-25 05:36:48.187408 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-03-25 05:36:48.187425 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:36:48.187441 | orchestrator |
2026-03-25 05:36:48.187458 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-03-25 05:36:48.187475 | orchestrator | Wednesday 25 March 2026 05:36:47 +0000 (0:00:01.098) 0:29:04.069 *******
2026-03-25 05:36:48.187491 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-03-25 05:36:48.187516 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-03-25 05:37:46.412018 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-03-25 05:37:46.412122 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:37:46.412133 | orchestrator |
2026-03-25 05:37:46.412141 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-03-25 05:37:46.412149 | orchestrator | Wednesday 25 March 2026 05:36:48 +0000 (0:00:01.120) 0:29:05.190 *******
2026-03-25 05:37:46.412156 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-03-25 05:37:46.412162 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-03-25 05:37:46.412169 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-03-25 05:37:46.412202 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:37:46.412210 | orchestrator |
2026-03-25 05:37:46.412217 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-03-25 05:37:46.412223 | orchestrator | Wednesday 25 March 2026 05:36:49 +0000 (0:00:01.106) 0:29:06.297 *******
2026-03-25 05:37:46.412230 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:37:46.412236 | orchestrator |
2026-03-25 05:37:46.412243 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-03-25 05:37:46.412249 | orchestrator | Wednesday 25 March 2026 05:36:50 +0000 (0:00:00.777) 0:29:07.074 *******
2026-03-25 05:37:46.412256 | orchestrator | skipping: [testbed-node-1] => (item=0)
2026-03-25 05:37:46.412263 | orchestrator | skipping: [testbed-node-1]
2026-03-25 05:37:46.412269 | orchestrator |
2026-03-25 05:37:46.412275 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-03-25 05:37:46.412281 | orchestrator |
Wednesday 25 March 2026 05:36:50 +0000 (0:00:00.935) 0:29:08.010 ******* 2026-03-25 05:37:46.412308 | orchestrator | ok: [testbed-node-1] 2026-03-25 05:37:46.412315 | orchestrator | 2026-03-25 05:37:46.412322 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2026-03-25 05:37:46.412328 | orchestrator | Wednesday 25 March 2026 05:36:52 +0000 (0:00:01.430) 0:29:09.441 ******* 2026-03-25 05:37:46.412334 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-25 05:37:46.412341 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-03-25 05:37:46.412347 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-25 05:37:46.412353 | orchestrator | 2026-03-25 05:37:46.412360 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2026-03-25 05:37:46.412366 | orchestrator | Wednesday 25 March 2026 05:36:54 +0000 (0:00:01.673) 0:29:11.114 ******* 2026-03-25 05:37:46.412372 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-1 2026-03-25 05:37:46.412378 | orchestrator | 2026-03-25 05:37:46.412384 | orchestrator | TASK [ceph-mgr : Create mgr directory] ***************************************** 2026-03-25 05:37:46.412390 | orchestrator | Wednesday 25 March 2026 05:36:55 +0000 (0:00:01.128) 0:29:12.243 ******* 2026-03-25 05:37:46.412396 | orchestrator | ok: [testbed-node-1] 2026-03-25 05:37:46.412403 | orchestrator | 2026-03-25 05:37:46.412409 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] *************************************** 2026-03-25 05:37:46.412415 | orchestrator | Wednesday 25 March 2026 05:36:56 +0000 (0:00:01.556) 0:29:13.800 ******* 2026-03-25 05:37:46.412421 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:37:46.412427 | orchestrator | 2026-03-25 05:37:46.412433 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) 
on a mon node] ********************* 2026-03-25 05:37:46.412439 | orchestrator | Wednesday 25 March 2026 05:36:57 +0000 (0:00:01.149) 0:29:14.950 ******* 2026-03-25 05:37:46.412446 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-25 05:37:46.412452 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-25 05:37:46.412458 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-25 05:37:46.412464 | orchestrator | ok: [testbed-node-1 -> {{ groups[mon_group_name][0] }}] 2026-03-25 05:37:46.412470 | orchestrator | 2026-03-25 05:37:46.412538 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] ******************************************* 2026-03-25 05:37:46.412550 | orchestrator | Wednesday 25 March 2026 05:37:04 +0000 (0:00:07.032) 0:29:21.982 ******* 2026-03-25 05:37:46.412559 | orchestrator | ok: [testbed-node-1] 2026-03-25 05:37:46.412569 | orchestrator | 2026-03-25 05:37:46.412580 | orchestrator | TASK [ceph-mgr : Get keys from monitors] *************************************** 2026-03-25 05:37:46.412590 | orchestrator | Wednesday 25 March 2026 05:37:06 +0000 (0:00:01.210) 0:29:23.193 ******* 2026-03-25 05:37:46.412602 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-03-25 05:37:46.412613 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-25 05:37:46.412624 | orchestrator | 2026-03-25 05:37:46.412632 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] *********************************** 2026-03-25 05:37:46.412639 | orchestrator | Wednesday 25 March 2026 05:37:09 +0000 (0:00:03.214) 0:29:26.408 ******* 2026-03-25 05:37:46.412646 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-03-25 05:37:46.412653 | orchestrator | ok: [testbed-node-1] => (item=None) 2026-03-25 05:37:46.412660 | orchestrator | 2026-03-25 05:37:46.412666 | orchestrator | TASK [ceph-mgr : Set mgr key 
permissions] ************************************** 2026-03-25 05:37:46.412673 | orchestrator | Wednesday 25 March 2026 05:37:11 +0000 (0:00:02.036) 0:29:28.444 ******* 2026-03-25 05:37:46.412680 | orchestrator | ok: [testbed-node-1] 2026-03-25 05:37:46.412687 | orchestrator | 2026-03-25 05:37:46.412694 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] ***************** 2026-03-25 05:37:46.412701 | orchestrator | Wednesday 25 March 2026 05:37:12 +0000 (0:00:01.535) 0:29:29.979 ******* 2026-03-25 05:37:46.412708 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:37:46.412715 | orchestrator | 2026-03-25 05:37:46.412728 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2026-03-25 05:37:46.412735 | orchestrator | Wednesday 25 March 2026 05:37:13 +0000 (0:00:00.821) 0:29:30.801 ******* 2026-03-25 05:37:46.412742 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:37:46.412749 | orchestrator | 2026-03-25 05:37:46.412756 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] **************************************** 2026-03-25 05:37:46.412775 | orchestrator | Wednesday 25 March 2026 05:37:14 +0000 (0:00:00.819) 0:29:31.620 ******* 2026-03-25 05:37:46.412782 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-1 2026-03-25 05:37:46.412788 | orchestrator | 2026-03-25 05:37:46.412795 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] ************* 2026-03-25 05:37:46.412801 | orchestrator | Wednesday 25 March 2026 05:37:15 +0000 (0:00:01.141) 0:29:32.762 ******* 2026-03-25 05:37:46.412807 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:37:46.412814 | orchestrator | 2026-03-25 05:37:46.412820 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] *********************** 2026-03-25 05:37:46.412826 | orchestrator | Wednesday 25 March 2026 05:37:16 +0000 (0:00:01.144) 0:29:33.907 ******* 
2026-03-25 05:37:46.412832 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:37:46.412838 | orchestrator | 2026-03-25 05:37:46.412844 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************ 2026-03-25 05:37:46.412850 | orchestrator | Wednesday 25 March 2026 05:37:18 +0000 (0:00:01.207) 0:29:35.115 ******* 2026-03-25 05:37:46.412857 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-1 2026-03-25 05:37:46.412863 | orchestrator | 2026-03-25 05:37:46.412869 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] *********************************** 2026-03-25 05:37:46.412875 | orchestrator | Wednesday 25 March 2026 05:37:19 +0000 (0:00:01.321) 0:29:36.437 ******* 2026-03-25 05:37:46.412881 | orchestrator | ok: [testbed-node-1] 2026-03-25 05:37:46.412887 | orchestrator | 2026-03-25 05:37:46.412893 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************ 2026-03-25 05:37:46.412900 | orchestrator | Wednesday 25 March 2026 05:37:21 +0000 (0:00:02.093) 0:29:38.530 ******* 2026-03-25 05:37:46.412906 | orchestrator | ok: [testbed-node-1] 2026-03-25 05:37:46.412912 | orchestrator | 2026-03-25 05:37:46.412918 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] *************************************** 2026-03-25 05:37:46.412924 | orchestrator | Wednesday 25 March 2026 05:37:23 +0000 (0:00:02.019) 0:29:40.549 ******* 2026-03-25 05:37:46.412930 | orchestrator | ok: [testbed-node-1] 2026-03-25 05:37:46.412936 | orchestrator | 2026-03-25 05:37:46.412942 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ******************************************** 2026-03-25 05:37:46.412948 | orchestrator | Wednesday 25 March 2026 05:37:26 +0000 (0:00:02.483) 0:29:43.033 ******* 2026-03-25 05:37:46.412954 | orchestrator | changed: [testbed-node-1] 2026-03-25 05:37:46.412960 | orchestrator | 2026-03-25 05:37:46.412967 | orchestrator | TASK [ceph-mgr : Include 
mgr_modules.yml] ************************************** 2026-03-25 05:37:46.412973 | orchestrator | Wednesday 25 March 2026 05:37:29 +0000 (0:00:03.669) 0:29:46.702 ******* 2026-03-25 05:37:46.412979 | orchestrator | skipping: [testbed-node-1] 2026-03-25 05:37:46.412985 | orchestrator | 2026-03-25 05:37:46.412991 | orchestrator | PLAY [Upgrade ceph mgr nodes] ************************************************** 2026-03-25 05:37:46.412997 | orchestrator | 2026-03-25 05:37:46.413004 | orchestrator | TASK [Stop ceph mgr] *********************************************************** 2026-03-25 05:37:46.413010 | orchestrator | Wednesday 25 March 2026 05:37:30 +0000 (0:00:01.056) 0:29:47.759 ******* 2026-03-25 05:37:46.413016 | orchestrator | changed: [testbed-node-2] 2026-03-25 05:37:46.413022 | orchestrator | 2026-03-25 05:37:46.413028 | orchestrator | TASK [Mask ceph mgr systemd unit] ********************************************** 2026-03-25 05:37:46.413034 | orchestrator | Wednesday 25 March 2026 05:37:33 +0000 (0:00:02.467) 0:29:50.227 ******* 2026-03-25 05:37:46.413040 | orchestrator | changed: [testbed-node-2] 2026-03-25 05:37:46.413046 | orchestrator | 2026-03-25 05:37:46.413052 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-03-25 05:37:46.413063 | orchestrator | Wednesday 25 March 2026 05:37:35 +0000 (0:00:02.143) 0:29:52.371 ******* 2026-03-25 05:37:46.413069 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-2 2026-03-25 05:37:46.413075 | orchestrator | 2026-03-25 05:37:46.413081 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-03-25 05:37:46.413087 | orchestrator | Wednesday 25 March 2026 05:37:36 +0000 (0:00:01.101) 0:29:53.472 ******* 2026-03-25 05:37:46.413097 | orchestrator | ok: [testbed-node-2] 2026-03-25 05:37:46.413103 | orchestrator | 2026-03-25 05:37:46.413110 | orchestrator | TASK [ceph-facts : Set_fact 
is_atomic] ***************************************** 2026-03-25 05:37:46.413116 | orchestrator | Wednesday 25 March 2026 05:37:37 +0000 (0:00:01.447) 0:29:54.919 ******* 2026-03-25 05:37:46.413122 | orchestrator | ok: [testbed-node-2] 2026-03-25 05:37:46.413128 | orchestrator | 2026-03-25 05:37:46.413134 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-03-25 05:37:46.413140 | orchestrator | Wednesday 25 March 2026 05:37:39 +0000 (0:00:01.165) 0:29:56.085 ******* 2026-03-25 05:37:46.413146 | orchestrator | ok: [testbed-node-2] 2026-03-25 05:37:46.413152 | orchestrator | 2026-03-25 05:37:46.413158 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-03-25 05:37:46.413164 | orchestrator | Wednesday 25 March 2026 05:37:40 +0000 (0:00:01.559) 0:29:57.644 ******* 2026-03-25 05:37:46.413171 | orchestrator | ok: [testbed-node-2] 2026-03-25 05:37:46.413177 | orchestrator | 2026-03-25 05:37:46.413183 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-03-25 05:37:46.413189 | orchestrator | Wednesday 25 March 2026 05:37:41 +0000 (0:00:01.160) 0:29:58.805 ******* 2026-03-25 05:37:46.413195 | orchestrator | ok: [testbed-node-2] 2026-03-25 05:37:46.413201 | orchestrator | 2026-03-25 05:37:46.413207 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-03-25 05:37:46.413214 | orchestrator | Wednesday 25 March 2026 05:37:42 +0000 (0:00:01.133) 0:29:59.938 ******* 2026-03-25 05:37:46.413220 | orchestrator | ok: [testbed-node-2] 2026-03-25 05:37:46.413226 | orchestrator | 2026-03-25 05:37:46.413232 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-03-25 05:37:46.413238 | orchestrator | Wednesday 25 March 2026 05:37:44 +0000 (0:00:01.170) 0:30:01.109 ******* 2026-03-25 05:37:46.413244 | orchestrator | skipping: [testbed-node-2] 
2026-03-25 05:37:46.413250 | orchestrator | 2026-03-25 05:37:46.413256 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-03-25 05:37:46.413262 | orchestrator | Wednesday 25 March 2026 05:37:45 +0000 (0:00:01.173) 0:30:02.283 ******* 2026-03-25 05:37:46.413268 | orchestrator | ok: [testbed-node-2] 2026-03-25 05:37:46.413275 | orchestrator | 2026-03-25 05:37:46.413284 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-03-25 05:38:12.010735 | orchestrator | Wednesday 25 March 2026 05:37:46 +0000 (0:00:01.129) 0:30:03.413 ******* 2026-03-25 05:38:12.010855 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-25 05:38:12.010871 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-25 05:38:12.010883 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-03-25 05:38:12.010895 | orchestrator | 2026-03-25 05:38:12.010907 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-03-25 05:38:12.010918 | orchestrator | Wednesday 25 March 2026 05:37:48 +0000 (0:00:02.000) 0:30:05.414 ******* 2026-03-25 05:38:12.010929 | orchestrator | ok: [testbed-node-2] 2026-03-25 05:38:12.010940 | orchestrator | 2026-03-25 05:38:12.010951 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-03-25 05:38:12.010962 | orchestrator | Wednesday 25 March 2026 05:37:49 +0000 (0:00:01.288) 0:30:06.703 ******* 2026-03-25 05:38:12.010973 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-25 05:38:12.010983 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-25 05:38:12.011020 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-03-25 05:38:12.011031 | orchestrator | 
2026-03-25 05:38:12.011048 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-03-25 05:38:12.011067 | orchestrator | Wednesday 25 March 2026 05:37:52 +0000 (0:00:03.286) 0:30:09.989 ******* 2026-03-25 05:38:12.011079 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-03-25 05:38:12.011090 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-03-25 05:38:12.011100 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-03-25 05:38:12.011111 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:38:12.011122 | orchestrator | 2026-03-25 05:38:12.011133 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-03-25 05:38:12.011143 | orchestrator | Wednesday 25 March 2026 05:37:54 +0000 (0:00:01.750) 0:30:11.740 ******* 2026-03-25 05:38:12.011155 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-03-25 05:38:12.011169 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-03-25 05:38:12.011181 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-03-25 05:38:12.011192 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:38:12.011203 | orchestrator | 2026-03-25 05:38:12.011214 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-03-25 05:38:12.011224 | 
orchestrator | Wednesday 25 March 2026 05:37:56 +0000 (0:00:02.090) 0:30:13.830 ******* 2026-03-25 05:38:12.011262 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-25 05:38:12.011279 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-25 05:38:12.011294 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-25 05:38:12.011307 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:38:12.011320 | orchestrator | 2026-03-25 05:38:12.011333 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-03-25 05:38:12.011346 | orchestrator | Wednesday 25 March 2026 05:37:58 +0000 (0:00:01.198) 0:30:15.029 ******* 2026-03-25 05:38:12.011379 | orchestrator | ok: [testbed-node-2] => (item={'changed': False, 'stdout': 'f2f4f0f2e000', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 
'name=ceph-mon-testbed-node-0'], 'start': '2026-03-25 05:37:50.203214', 'end': '2026-03-25 05:37:50.248907', 'delta': '0:00:00.045693', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['f2f4f0f2e000'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-03-25 05:38:12.011441 | orchestrator | ok: [testbed-node-2] => (item={'changed': False, 'stdout': '04618a84c691', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-25 05:37:51.120572', 'end': '2026-03-25 05:37:51.169955', 'delta': '0:00:00.049383', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['04618a84c691'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-03-25 05:38:12.011462 | orchestrator | ok: [testbed-node-2] => (item={'changed': False, 'stdout': 'da72f46e99c2', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-25 05:37:51.677100', 'end': '2026-03-25 05:37:51.726930', 'delta': '0:00:00.049830', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 
'removes': None, 'stdin': None}}, 'stdout_lines': ['da72f46e99c2'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-03-25 05:38:12.011478 | orchestrator | 2026-03-25 05:38:12.011491 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-03-25 05:38:12.011505 | orchestrator | Wednesday 25 March 2026 05:37:59 +0000 (0:00:01.224) 0:30:16.253 ******* 2026-03-25 05:38:12.011518 | orchestrator | ok: [testbed-node-2] 2026-03-25 05:38:12.011530 | orchestrator | 2026-03-25 05:38:12.011543 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-03-25 05:38:12.011556 | orchestrator | Wednesday 25 March 2026 05:38:00 +0000 (0:00:01.253) 0:30:17.507 ******* 2026-03-25 05:38:12.011569 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:38:12.011582 | orchestrator | 2026-03-25 05:38:12.011595 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-03-25 05:38:12.011606 | orchestrator | Wednesday 25 March 2026 05:38:01 +0000 (0:00:01.253) 0:30:18.761 ******* 2026-03-25 05:38:12.011622 | orchestrator | ok: [testbed-node-2] 2026-03-25 05:38:12.011633 | orchestrator | 2026-03-25 05:38:12.011651 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-03-25 05:38:12.011663 | orchestrator | Wednesday 25 March 2026 05:38:02 +0000 (0:00:01.142) 0:30:19.904 ******* 2026-03-25 05:38:12.011674 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-03-25 05:38:12.011685 | orchestrator | 2026-03-25 05:38:12.011696 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-25 05:38:12.011706 | orchestrator | Wednesday 25 March 2026 05:38:04 +0000 (0:00:02.070) 0:30:21.974 ******* 2026-03-25 05:38:12.011717 | orchestrator | ok: [testbed-node-2] 2026-03-25 
05:38:12.011728 | orchestrator | 2026-03-25 05:38:12.011738 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-03-25 05:38:12.011749 | orchestrator | Wednesday 25 March 2026 05:38:06 +0000 (0:00:01.169) 0:30:23.144 ******* 2026-03-25 05:38:12.011760 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:38:12.011771 | orchestrator | 2026-03-25 05:38:12.011781 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-03-25 05:38:12.011799 | orchestrator | Wednesday 25 March 2026 05:38:07 +0000 (0:00:01.142) 0:30:24.286 ******* 2026-03-25 05:38:12.011810 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:38:12.011821 | orchestrator | 2026-03-25 05:38:12.011831 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-25 05:38:12.011847 | orchestrator | Wednesday 25 March 2026 05:38:08 +0000 (0:00:01.197) 0:30:25.484 ******* 2026-03-25 05:38:12.011865 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:38:12.011879 | orchestrator | 2026-03-25 05:38:12.011890 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-03-25 05:38:12.011901 | orchestrator | Wednesday 25 March 2026 05:38:09 +0000 (0:00:01.208) 0:30:26.692 ******* 2026-03-25 05:38:12.011911 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:38:12.011922 | orchestrator | 2026-03-25 05:38:12.011933 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-03-25 05:38:12.011944 | orchestrator | Wednesday 25 March 2026 05:38:10 +0000 (0:00:01.155) 0:30:27.848 ******* 2026-03-25 05:38:12.011955 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:38:12.011966 | orchestrator | 2026-03-25 05:38:12.011983 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-03-25 05:38:19.212999 | orchestrator | Wednesday 25 
March 2026 05:38:11 +0000 (0:00:01.167) 0:30:29.015 ******* 2026-03-25 05:38:19.213109 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:38:19.213127 | orchestrator | 2026-03-25 05:38:19.213140 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-03-25 05:38:19.213151 | orchestrator | Wednesday 25 March 2026 05:38:13 +0000 (0:00:01.172) 0:30:30.187 ******* 2026-03-25 05:38:19.213162 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:38:19.213173 | orchestrator | 2026-03-25 05:38:19.213184 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-03-25 05:38:19.213195 | orchestrator | Wednesday 25 March 2026 05:38:14 +0000 (0:00:01.176) 0:30:31.364 ******* 2026-03-25 05:38:19.213206 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:38:19.213216 | orchestrator | 2026-03-25 05:38:19.213227 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-03-25 05:38:19.213238 | orchestrator | Wednesday 25 March 2026 05:38:15 +0000 (0:00:01.209) 0:30:32.573 ******* 2026-03-25 05:38:19.213249 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:38:19.213259 | orchestrator | 2026-03-25 05:38:19.213270 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-03-25 05:38:19.213281 | orchestrator | Wednesday 25 March 2026 05:38:16 +0000 (0:00:01.146) 0:30:33.720 ******* 2026-03-25 05:38:19.213295 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-25 05:38:19.213310 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-25 05:38:19.213321 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-25 05:38:19.213352 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-25-01-43-02-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})
2026-03-25 05:38:19.213417 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-25 05:38:19.213431 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-25 05:38:19.213442 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-25 05:38:19.213479 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_46c5fc1c-53d3-41e9-9a97-2ae0d3d9eeb2', 'scsi-SQEMU_QEMU_HARDDISK_46c5fc1c-53d3-41e9-9a97-2ae0d3d9eeb2'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '46c5fc1c', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_46c5fc1c-53d3-41e9-9a97-2ae0d3d9eeb2-part16', 'scsi-SQEMU_QEMU_HARDDISK_46c5fc1c-53d3-41e9-9a97-2ae0d3d9eeb2-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_46c5fc1c-53d3-41e9-9a97-2ae0d3d9eeb2-part14', 'scsi-SQEMU_QEMU_HARDDISK_46c5fc1c-53d3-41e9-9a97-2ae0d3d9eeb2-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_46c5fc1c-53d3-41e9-9a97-2ae0d3d9eeb2-part15', 'scsi-SQEMU_QEMU_HARDDISK_46c5fc1c-53d3-41e9-9a97-2ae0d3d9eeb2-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_46c5fc1c-53d3-41e9-9a97-2ae0d3d9eeb2-part1', 'scsi-SQEMU_QEMU_HARDDISK_46c5fc1c-53d3-41e9-9a97-2ae0d3d9eeb2-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})
2026-03-25 05:38:19.213494 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-25 05:38:19.213522 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-25 05:38:19.213536 | orchestrator | skipping: [testbed-node-2]
2026-03-25 05:38:19.213548 | orchestrator |
2026-03-25 05:38:19.213560 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] ***
2026-03-25 05:38:19.213573 | orchestrator | Wednesday 25 March 2026 05:38:17 +0000 (0:00:01.218) 0:30:34.939 *******
2026-03-25 05:38:19.213586 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-25 05:38:19.213611 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-25 05:38:26.966499 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-25 05:38:26.966704 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-25-01-43-02-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-25 05:38:26.966728 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-25 05:38:26.966783 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-25 05:38:26.966796 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-25 05:38:26.966833 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_46c5fc1c-53d3-41e9-9a97-2ae0d3d9eeb2', 'scsi-SQEMU_QEMU_HARDDISK_46c5fc1c-53d3-41e9-9a97-2ae0d3d9eeb2'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '46c5fc1c', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_46c5fc1c-53d3-41e9-9a97-2ae0d3d9eeb2-part16', 'scsi-SQEMU_QEMU_HARDDISK_46c5fc1c-53d3-41e9-9a97-2ae0d3d9eeb2-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_46c5fc1c-53d3-41e9-9a97-2ae0d3d9eeb2-part14', 'scsi-SQEMU_QEMU_HARDDISK_46c5fc1c-53d3-41e9-9a97-2ae0d3d9eeb2-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_46c5fc1c-53d3-41e9-9a97-2ae0d3d9eeb2-part15', 'scsi-SQEMU_QEMU_HARDDISK_46c5fc1c-53d3-41e9-9a97-2ae0d3d9eeb2-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_46c5fc1c-53d3-41e9-9a97-2ae0d3d9eeb2-part1', 'scsi-SQEMU_QEMU_HARDDISK_46c5fc1c-53d3-41e9-9a97-2ae0d3d9eeb2-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc.
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-25 05:38:26.966860 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-25 05:38:26.966877 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-25 05:38:26.966890 | orchestrator | skipping: [testbed-node-2]
2026-03-25 05:38:26.966903 | orchestrator |
2026-03-25 05:38:26.966915 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-03-25 05:38:26.966928 | orchestrator | Wednesday 25 March 2026 05:38:19 +0000 (0:00:01.283) 0:30:36.223 *******
2026-03-25 05:38:26.966938 | orchestrator | ok: [testbed-node-2]
2026-03-25 05:38:26.966950 | orchestrator |
2026-03-25 05:38:26.966961 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-03-25 05:38:26.966972 | orchestrator | Wednesday 25 March 2026 05:38:20 +0000 (0:00:01.497) 0:30:37.721 *******
2026-03-25 05:38:26.966982 | orchestrator | ok: [testbed-node-2]
2026-03-25 05:38:26.966993 | orchestrator |
2026-03-25 05:38:26.967003 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-03-25 05:38:26.967014 | orchestrator | Wednesday 25 March 2026 05:38:21 +0000 (0:00:01.159) 0:30:38.881 *******
2026-03-25 05:38:26.967025 | orchestrator | ok: [testbed-node-2]
2026-03-25 05:38:26.967037 | orchestrator |
2026-03-25 05:38:26.967049 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-03-25 05:38:26.967061 | orchestrator | Wednesday 25 March 2026 05:38:23 +0000 (0:00:01.496) 0:30:40.377 *******
2026-03-25 05:38:26.967073 | orchestrator | skipping: [testbed-node-2]
2026-03-25 05:38:26.967086 | orchestrator |
2026-03-25 05:38:26.967097 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-03-25 05:38:26.967109 | orchestrator | Wednesday 25 March 2026 05:38:24 +0000 (0:00:01.154) 0:30:41.531 *******
2026-03-25 05:38:26.967122 | orchestrator | skipping: [testbed-node-2]
2026-03-25 05:38:26.967133 | orchestrator |
2026-03-25 05:38:26.967146 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-03-25 05:38:26.967159 | orchestrator | Wednesday 25 March 2026 05:38:25 +0000 (0:00:01.280) 0:30:42.812 *******
2026-03-25 05:38:26.967170 | orchestrator | skipping: [testbed-node-2]
2026-03-25 05:38:26.967183 | orchestrator |
2026-03-25 05:38:26.967196 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-03-25 05:38:26.967214 | orchestrator | Wednesday 25 March 2026 05:38:26 +0000 (0:00:01.163) 0:30:43.976 *******
2026-03-25 05:39:04.721678 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0)
2026-03-25 05:39:04.721793 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1)
2026-03-25 05:39:04.721809 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-03-25 05:39:04.721821 | orchestrator |
2026-03-25 05:39:04.721834 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-03-25 05:39:04.721847 | orchestrator | Wednesday 25 March 2026 05:38:29 +0000 (0:00:02.089) 0:30:46.065 *******
2026-03-25 05:39:04.721858 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-03-25 05:39:04.721892 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-03-25 05:39:04.721903 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-03-25 05:39:04.721914 | orchestrator | skipping: [testbed-node-2]
2026-03-25 05:39:04.721925 | orchestrator |
2026-03-25 05:39:04.721936 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-03-25 05:39:04.721947 | orchestrator | Wednesday 25 March 2026 05:38:30 +0000 (0:00:01.138) 0:30:47.204 *******
2026-03-25 05:39:04.721958 | orchestrator | skipping: [testbed-node-2]
2026-03-25 05:39:04.721969 | orchestrator |
2026-03-25 05:39:04.721980 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-03-25 05:39:04.721991 | orchestrator | Wednesday 25 March 2026 05:38:31 +0000 (0:00:01.163) 0:30:48.368 *******
2026-03-25 05:39:04.722001 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-25 05:39:04.722013 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-25 05:39:04.722089 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-03-25 05:39:04.722101 | orchestrator | ok: [testbed-node-2 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-03-25 05:39:04.722111 | orchestrator | ok: [testbed-node-2 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-03-25 05:39:04.722122 | orchestrator | ok: [testbed-node-2 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-03-25 05:39:04.722164 | orchestrator | ok: [testbed-node-2 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-03-25 05:39:04.722176 | orchestrator |
2026-03-25 05:39:04.722186 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-03-25 05:39:04.722197 | orchestrator | Wednesday 25 March 2026 05:38:33 +0000 (0:00:02.199) 0:30:50.568 *******
2026-03-25 05:39:04.722208 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-25 05:39:04.722250 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-25 05:39:04.722264 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-03-25 05:39:04.722276 | orchestrator | ok: [testbed-node-2 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-03-25 05:39:04.722288 | orchestrator | ok: [testbed-node-2 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-03-25 05:39:04.722301 | orchestrator | ok: [testbed-node-2 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-03-25 05:39:04.722313 | orchestrator | ok: [testbed-node-2 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-03-25 05:39:04.722325 | orchestrator |
2026-03-25 05:39:04.722352 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-03-25 05:39:04.722365 | orchestrator | Wednesday 25 March 2026 05:38:35 +0000 (0:00:02.217) 0:30:52.786 *******
2026-03-25 05:39:04.722377 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-2
2026-03-25 05:39:04.722390 | orchestrator |
2026-03-25 05:39:04.722403 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-03-25 05:39:04.722415 | orchestrator | Wednesday 25 March 2026 05:38:36 +0000 (0:00:01.103) 0:30:53.889 *******
2026-03-25 05:39:04.722427 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-2
2026-03-25 05:39:04.722440 | orchestrator |
2026-03-25 05:39:04.722453 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-03-25 05:39:04.722465 | orchestrator | Wednesday 25 March 2026 05:38:38 +0000 (0:00:01.157) 0:30:55.047 *******
2026-03-25 05:39:04.722477 | orchestrator | ok: [testbed-node-2]
2026-03-25 05:39:04.722490 | orchestrator |
2026-03-25 05:39:04.722503 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-03-25 05:39:04.722513 | orchestrator | Wednesday 25 March 2026 05:38:39 +0000 (0:00:01.535) 0:30:56.583 *******
2026-03-25 05:39:04.722525 | orchestrator | skipping: [testbed-node-2]
2026-03-25 05:39:04.722545 | orchestrator |
2026-03-25 05:39:04.722556 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-03-25 05:39:04.722567 | orchestrator | Wednesday 25 March 2026 05:38:40 +0000 (0:00:01.092) 0:30:57.675 *******
2026-03-25 05:39:04.722577 | orchestrator | skipping: [testbed-node-2]
2026-03-25 05:39:04.722588 | orchestrator |
2026-03-25 05:39:04.722599 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-03-25 05:39:04.722609 | orchestrator | Wednesday 25 March 2026 05:38:41 +0000 (0:00:01.102) 0:30:58.778 *******
2026-03-25 05:39:04.722620 | orchestrator | skipping: [testbed-node-2]
2026-03-25 05:39:04.722631 | orchestrator |
2026-03-25 05:39:04.722642 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-03-25 05:39:04.722652 | orchestrator | Wednesday 25 March 2026 05:38:42 +0000 (0:00:01.136) 0:30:59.915 *******
2026-03-25 05:39:04.722663 | orchestrator | ok: [testbed-node-2]
2026-03-25 05:39:04.722674 | orchestrator |
2026-03-25 05:39:04.722685 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-03-25 05:39:04.722695 | orchestrator | Wednesday 25 March 2026 05:38:44 +0000 (0:00:01.552) 0:31:01.467 *******
2026-03-25 05:39:04.722706 | orchestrator | skipping: [testbed-node-2]
2026-03-25 05:39:04.722717 | orchestrator |
2026-03-25 05:39:04.722728 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-03-25 05:39:04.722757 | orchestrator | Wednesday 25 March 2026 05:38:45 +0000 (0:00:01.245) 0:31:02.712 *******
2026-03-25 05:39:04.722769 | orchestrator | skipping: [testbed-node-2]
2026-03-25 05:39:04.722780 | orchestrator |
2026-03-25 05:39:04.722791 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-03-25 05:39:04.722801 | orchestrator | Wednesday 25 March 2026 05:38:46 +0000 (0:00:01.179) 0:31:03.892 *******
2026-03-25 05:39:04.722812 | orchestrator | ok: [testbed-node-2]
2026-03-25 05:39:04.722823 | orchestrator |
2026-03-25 05:39:04.722833 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-03-25 05:39:04.722844 | orchestrator | Wednesday 25 March 2026 05:38:48 +0000 (0:00:01.658) 0:31:05.551 *******
2026-03-25 05:39:04.722855 | orchestrator | ok: [testbed-node-2]
2026-03-25 05:39:04.722866 | orchestrator |
2026-03-25 05:39:04.722876 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-03-25 05:39:04.722887 | orchestrator | Wednesday 25 March 2026 05:38:50 +0000 (0:00:01.531) 0:31:07.083 *******
2026-03-25 05:39:04.722899 | orchestrator | skipping: [testbed-node-2]
2026-03-25 05:39:04.722910 | orchestrator |
2026-03-25 05:39:04.722921 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-03-25 05:39:04.722931 | orchestrator | Wednesday 25 March 2026 05:38:50 +0000 (0:00:00.879) 0:31:07.962 *******
2026-03-25 05:39:04.722942 | orchestrator | ok: [testbed-node-2]
2026-03-25 05:39:04.722953 | orchestrator |
2026-03-25 05:39:04.722963 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-03-25 05:39:04.722974 | orchestrator | Wednesday 25 March 2026 05:38:51 +0000 (0:00:00.847) 0:31:08.809 *******
2026-03-25 05:39:04.722985 | orchestrator | skipping: [testbed-node-2]
2026-03-25 05:39:04.722995 | orchestrator |
2026-03-25 05:39:04.723006 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-03-25 05:39:04.723017 | orchestrator | Wednesday 25 March 2026 05:38:52 +0000 (0:00:00.759) 0:31:09.569 *******
2026-03-25 05:39:04.723028 | orchestrator | skipping: [testbed-node-2]
2026-03-25 05:39:04.723038 | orchestrator |
2026-03-25 05:39:04.723049 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-03-25 05:39:04.723060 | orchestrator | Wednesday 25 March 2026 05:38:53 +0000 (0:00:00.813) 0:31:10.383 *******
2026-03-25 05:39:04.723071 | orchestrator | skipping: [testbed-node-2]
2026-03-25 05:39:04.723081 | orchestrator |
2026-03-25 05:39:04.723092 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-03-25 05:39:04.723103 | orchestrator | Wednesday 25 March 2026 05:38:54 +0000 (0:00:00.790) 0:31:11.174 *******
2026-03-25 05:39:04.723113 | orchestrator | skipping: [testbed-node-2]
2026-03-25 05:39:04.723131 | orchestrator |
2026-03-25 05:39:04.723142 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-03-25 05:39:04.723152 | orchestrator | Wednesday 25 March 2026 05:38:54 +0000 (0:00:00.831) 0:31:12.006 *******
2026-03-25 05:39:04.723163 | orchestrator | skipping: [testbed-node-2]
2026-03-25 05:39:04.723174 | orchestrator |
2026-03-25 05:39:04.723185 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-03-25 05:39:04.723195 | orchestrator | Wednesday 25 March 2026 05:38:55 +0000 (0:00:00.779) 0:31:12.785 *******
2026-03-25 05:39:04.723206 | orchestrator | ok: [testbed-node-2]
2026-03-25 05:39:04.723234 | orchestrator |
2026-03-25 05:39:04.723245 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-03-25 05:39:04.723256 | orchestrator | Wednesday 25 March 2026 05:38:56 +0000 (0:00:00.816) 0:31:13.602 *******
2026-03-25 05:39:04.723267 | orchestrator | ok: [testbed-node-2]
2026-03-25 05:39:04.723277 | orchestrator |
2026-03-25 05:39:04.723294 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-03-25 05:39:04.723305 | orchestrator | Wednesday 25 March 2026 05:38:57 +0000 (0:00:00.790) 0:31:14.393 *******
2026-03-25 05:39:04.723316 | orchestrator | ok: [testbed-node-2]
2026-03-25 05:39:04.723326 | orchestrator |
2026-03-25 05:39:04.723337 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-03-25 05:39:04.723348 | orchestrator | Wednesday 25 March 2026 05:38:58 +0000 (0:00:00.967) 0:31:15.361 *******
2026-03-25 05:39:04.723359 | orchestrator | skipping: [testbed-node-2]
2026-03-25 05:39:04.723369 | orchestrator |
2026-03-25 05:39:04.723380 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-03-25 05:39:04.723391 | orchestrator | Wednesday 25 March 2026 05:38:59 +0000 (0:00:00.812) 0:31:16.173 *******
2026-03-25 05:39:04.723402 | orchestrator | skipping: [testbed-node-2]
2026-03-25 05:39:04.723412 | orchestrator |
2026-03-25 05:39:04.723423 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-03-25 05:39:04.723434 | orchestrator | Wednesday 25 March 2026 05:39:00 +0000 (0:00:00.855) 0:31:17.028 *******
2026-03-25 05:39:04.723444 | orchestrator | skipping: [testbed-node-2]
2026-03-25 05:39:04.723455 | orchestrator |
2026-03-25 05:39:04.723466 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-03-25 05:39:04.723477 | orchestrator | Wednesday 25 March 2026 05:39:00 +0000 (0:00:00.773) 0:31:17.802 *******
2026-03-25 05:39:04.723488 | orchestrator | skipping: [testbed-node-2]
2026-03-25 05:39:04.723498 | orchestrator |
2026-03-25 05:39:04.723509 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-03-25 05:39:04.723520 | orchestrator | Wednesday 25 March 2026 05:39:01 +0000 (0:00:00.792) 0:31:18.595 *******
2026-03-25 05:39:04.723531 | orchestrator | skipping: [testbed-node-2]
2026-03-25 05:39:04.723541 | orchestrator |
2026-03-25 05:39:04.723552 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-03-25 05:39:04.723563 | orchestrator | Wednesday 25 March 2026 05:39:02 +0000 (0:00:00.768) 0:31:19.398 *******
2026-03-25 05:39:04.723574 | orchestrator | skipping: [testbed-node-2]
2026-03-25 05:39:04.723584 | orchestrator |
2026-03-25 05:39:04.723595 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-03-25 05:39:04.723606 | orchestrator | Wednesday 25 March 2026 05:39:03 +0000 (0:00:00.768) 0:31:20.166 *******
2026-03-25 05:39:04.723616 | orchestrator | skipping: [testbed-node-2]
2026-03-25 05:39:04.723627 | orchestrator |
2026-03-25 05:39:04.723638 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-03-25 05:39:04.723649 | orchestrator | Wednesday 25 March 2026 05:39:03 +0000 (0:00:00.778) 0:31:20.945 *******
2026-03-25 05:39:04.723665 | orchestrator | skipping: [testbed-node-2]
2026-03-25 05:39:52.943997 | orchestrator |
2026-03-25 05:39:52.944150 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-03-25 05:39:52.944167 | orchestrator | Wednesday 25 March 2026 05:39:04 +0000 (0:00:00.782) 0:31:21.727 *******
2026-03-25 05:39:52.944178 | orchestrator | skipping: [testbed-node-2]
2026-03-25 05:39:52.944212 | orchestrator |
2026-03-25 05:39:52.944222 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-03-25 05:39:52.944232 | orchestrator | Wednesday 25 March 2026 05:39:05 +0000 (0:00:00.810) 0:31:22.538 *******
2026-03-25 05:39:52.944241 | orchestrator | skipping: [testbed-node-2]
2026-03-25 05:39:52.944251 | orchestrator |
2026-03-25 05:39:52.944260 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-03-25 05:39:52.944270 | orchestrator | Wednesday 25 March 2026 05:39:06 +0000 (0:00:00.756) 0:31:23.295 *******
2026-03-25 05:39:52.944279 | orchestrator | skipping: [testbed-node-2]
2026-03-25 05:39:52.944289 | orchestrator |
2026-03-25 05:39:52.944298 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-03-25 05:39:52.944308 | orchestrator | Wednesday 25 March 2026 05:39:07 +0000 (0:00:00.786) 0:31:24.082 *******
2026-03-25 05:39:52.944319 | orchestrator | skipping: [testbed-node-2]
2026-03-25 05:39:52.944328 | orchestrator |
2026-03-25 05:39:52.944338 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-03-25 05:39:52.944347 | orchestrator | Wednesday 25 March 2026 05:39:07 +0000 (0:00:00.891) 0:31:24.974 *******
2026-03-25 05:39:52.944356 | orchestrator | ok: [testbed-node-2]
2026-03-25 05:39:52.944367 | orchestrator |
2026-03-25 05:39:52.944376 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-03-25 05:39:52.944385 | orchestrator | Wednesday 25 March 2026 05:39:09 +0000 (0:00:01.696) 0:31:26.671 *******
2026-03-25 05:39:52.944395 | orchestrator | ok: [testbed-node-2]
2026-03-25 05:39:52.944404 | orchestrator |
2026-03-25 05:39:52.944414 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-03-25 05:39:52.944423 | orchestrator | Wednesday 25 March 2026 05:39:11 +0000 (0:00:02.005) 0:31:28.676 *******
2026-03-25 05:39:52.944432 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-2
2026-03-25 05:39:52.944443 | orchestrator |
2026-03-25 05:39:52.944452 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-03-25 05:39:52.944462 | orchestrator | Wednesday 25 March 2026 05:39:12 +0000 (0:00:01.133) 0:31:29.810 *******
2026-03-25 05:39:52.944471 | orchestrator | skipping: [testbed-node-2]
2026-03-25 05:39:52.944480 | orchestrator |
2026-03-25 05:39:52.944490 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-03-25 05:39:52.944499 | orchestrator | Wednesday 25 March 2026 05:39:13 +0000 (0:00:01.172) 0:31:30.982 *******
2026-03-25 05:39:52.944508 | orchestrator | skipping: [testbed-node-2]
2026-03-25 05:39:52.944518 | orchestrator |
2026-03-25 05:39:52.944527 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-03-25 05:39:52.944537 | orchestrator | Wednesday 25 March 2026 05:39:15 +0000 (0:00:01.132) 0:31:32.115 *******
2026-03-25 05:39:52.944548 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-25 05:39:52.944558 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-25 05:39:52.944570 | orchestrator |
2026-03-25 05:39:52.944581 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-03-25 05:39:52.944603 | orchestrator | Wednesday 25 March 2026 05:39:16 +0000 (0:00:01.820) 0:31:33.936 *******
2026-03-25 05:39:52.944615 | orchestrator | ok: [testbed-node-2]
2026-03-25 05:39:52.944626 | orchestrator |
2026-03-25 05:39:52.944636 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-03-25 05:39:52.944646 | orchestrator | Wednesday 25 March 2026 05:39:18 +0000 (0:00:01.486) 0:31:35.422 *******
2026-03-25 05:39:52.944657 | orchestrator | skipping: [testbed-node-2]
2026-03-25 05:39:52.944667 | orchestrator |
2026-03-25 05:39:52.944678 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-03-25 05:39:52.944689 | orchestrator | Wednesday 25 March 2026 05:39:19 +0000 (0:00:01.186) 0:31:36.609 *******
2026-03-25 05:39:52.944700 | orchestrator | skipping: [testbed-node-2]
2026-03-25 05:39:52.944710 | orchestrator |
2026-03-25 05:39:52.944721 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-03-25 05:39:52.944741 | orchestrator | Wednesday 25 March 2026 05:39:20 +0000 (0:00:00.853) 0:31:37.462 *******
2026-03-25 05:39:52.944751 | orchestrator | skipping: [testbed-node-2]
2026-03-25 05:39:52.944762 | orchestrator |
2026-03-25 05:39:52.944772 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-03-25 05:39:52.944784 | orchestrator | Wednesday 25 March 2026 05:39:21 +0000 (0:00:00.782) 0:31:38.245 *******
2026-03-25 05:39:52.944794 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-2
2026-03-25 05:39:52.944805 | orchestrator |
2026-03-25 05:39:52.944816 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-03-25 05:39:52.944827 | orchestrator | Wednesday 25 March 2026 05:39:22 +0000 (0:00:01.126) 0:31:39.371 *******
2026-03-25 05:39:52.944837 | orchestrator | ok: [testbed-node-2]
2026-03-25 05:39:52.944848 | orchestrator |
2026-03-25 05:39:52.944859 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-03-25 05:39:52.944870 | orchestrator | Wednesday 25 March 2026 05:39:24 +0000 (0:00:01.792) 0:31:41.163 *******
2026-03-25 05:39:52.944881 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-03-25 05:39:52.944891 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)
2026-03-25 05:39:52.944902 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)
2026-03-25 05:39:52.944911 | orchestrator | skipping: [testbed-node-2]
2026-03-25 05:39:52.944921 | orchestrator |
2026-03-25 05:39:52.944930 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-03-25 05:39:52.944940 | orchestrator | Wednesday 25 March 2026 05:39:25 +0000 (0:00:01.205) 0:31:42.369 *******
2026-03-25 05:39:52.944964 | orchestrator | skipping: [testbed-node-2]
2026-03-25 05:39:52.944974 | orchestrator |
2026-03-25 05:39:52.944983 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-03-25 05:39:52.944993 | orchestrator | Wednesday 25 March 2026 05:39:26 +0000 (0:00:01.173) 0:31:43.543 *******
2026-03-25 05:39:52.945002 | orchestrator | skipping: [testbed-node-2]
2026-03-25 05:39:52.945012 | orchestrator |
2026-03-25 05:39:52.945021 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-03-25 05:39:52.945030 | orchestrator | Wednesday 25 March 2026 05:39:27 +0000 (0:00:01.162) 0:31:44.706 *******
2026-03-25 05:39:52.945040 | orchestrator | skipping: [testbed-node-2]
2026-03-25 05:39:52.945049 | orchestrator |
2026-03-25 05:39:52.945058 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-03-25 05:39:52.945068 | orchestrator | Wednesday 25 March 2026 05:39:28 +0000 (0:00:01.143) 0:31:45.850 *******
2026-03-25 05:39:52.945093 | orchestrator | skipping:
[testbed-node-2] 2026-03-25 05:39:52.945102 | orchestrator | 2026-03-25 05:39:52.945112 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-03-25 05:39:52.945121 | orchestrator | Wednesday 25 March 2026 05:39:30 +0000 (0:00:01.210) 0:31:47.061 ******* 2026-03-25 05:39:52.945131 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:39:52.945140 | orchestrator | 2026-03-25 05:39:52.945150 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-03-25 05:39:52.945159 | orchestrator | Wednesday 25 March 2026 05:39:30 +0000 (0:00:00.805) 0:31:47.866 ******* 2026-03-25 05:39:52.945168 | orchestrator | ok: [testbed-node-2] 2026-03-25 05:39:52.945178 | orchestrator | 2026-03-25 05:39:52.945187 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-03-25 05:39:52.945196 | orchestrator | Wednesday 25 March 2026 05:39:33 +0000 (0:00:02.237) 0:31:50.104 ******* 2026-03-25 05:39:52.945206 | orchestrator | ok: [testbed-node-2] 2026-03-25 05:39:52.945215 | orchestrator | 2026-03-25 05:39:52.945225 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-03-25 05:39:52.945234 | orchestrator | Wednesday 25 March 2026 05:39:33 +0000 (0:00:00.767) 0:31:50.872 ******* 2026-03-25 05:39:52.945244 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-2 2026-03-25 05:39:52.945260 | orchestrator | 2026-03-25 05:39:52.945270 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-03-25 05:39:52.945279 | orchestrator | Wednesday 25 March 2026 05:39:34 +0000 (0:00:01.129) 0:31:52.001 ******* 2026-03-25 05:39:52.945288 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:39:52.945298 | orchestrator | 2026-03-25 05:39:52.945307 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] 
******************** 2026-03-25 05:39:52.945317 | orchestrator | Wednesday 25 March 2026 05:39:36 +0000 (0:00:01.209) 0:31:53.211 ******* 2026-03-25 05:39:52.945326 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:39:52.945336 | orchestrator | 2026-03-25 05:39:52.945345 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-03-25 05:39:52.945355 | orchestrator | Wednesday 25 March 2026 05:39:37 +0000 (0:00:01.162) 0:31:54.373 ******* 2026-03-25 05:39:52.945364 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:39:52.945373 | orchestrator | 2026-03-25 05:39:52.945383 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-03-25 05:39:52.945392 | orchestrator | Wednesday 25 March 2026 05:39:38 +0000 (0:00:01.171) 0:31:55.544 ******* 2026-03-25 05:39:52.945402 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:39:52.945411 | orchestrator | 2026-03-25 05:39:52.945425 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-03-25 05:39:52.945435 | orchestrator | Wednesday 25 March 2026 05:39:39 +0000 (0:00:01.172) 0:31:56.717 ******* 2026-03-25 05:39:52.945445 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:39:52.945454 | orchestrator | 2026-03-25 05:39:52.945463 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-03-25 05:39:52.945472 | orchestrator | Wednesday 25 March 2026 05:39:40 +0000 (0:00:01.151) 0:31:57.868 ******* 2026-03-25 05:39:52.945482 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:39:52.945491 | orchestrator | 2026-03-25 05:39:52.945501 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-03-25 05:39:52.945510 | orchestrator | Wednesday 25 March 2026 05:39:42 +0000 (0:00:01.151) 0:31:59.020 ******* 2026-03-25 05:39:52.945520 | orchestrator | skipping: [testbed-node-2] 
2026-03-25 05:39:52.945529 | orchestrator | 2026-03-25 05:39:52.945538 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-03-25 05:39:52.945548 | orchestrator | Wednesday 25 March 2026 05:39:43 +0000 (0:00:01.146) 0:32:00.166 ******* 2026-03-25 05:39:52.945557 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:39:52.945566 | orchestrator | 2026-03-25 05:39:52.945576 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-03-25 05:39:52.945585 | orchestrator | Wednesday 25 March 2026 05:39:44 +0000 (0:00:01.200) 0:32:01.367 ******* 2026-03-25 05:39:52.945595 | orchestrator | ok: [testbed-node-2] 2026-03-25 05:39:52.945604 | orchestrator | 2026-03-25 05:39:52.945614 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-03-25 05:39:52.945623 | orchestrator | Wednesday 25 March 2026 05:39:45 +0000 (0:00:00.872) 0:32:02.239 ******* 2026-03-25 05:39:52.945632 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-2 2026-03-25 05:39:52.945642 | orchestrator | 2026-03-25 05:39:52.945651 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-03-25 05:39:52.945661 | orchestrator | Wednesday 25 March 2026 05:39:46 +0000 (0:00:01.185) 0:32:03.425 ******* 2026-03-25 05:39:52.945670 | orchestrator | ok: [testbed-node-2] => (item=/etc/ceph) 2026-03-25 05:39:52.945680 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/) 2026-03-25 05:39:52.945690 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/mon) 2026-03-25 05:39:52.945699 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/osd) 2026-03-25 05:39:52.945708 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/mds) 2026-03-25 05:39:52.945718 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/tmp) 2026-03-25 05:39:52.945739 | 
orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/crash) 2026-03-25 05:40:28.773914 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/radosgw) 2026-03-25 05:40:28.774137 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw) 2026-03-25 05:40:28.774157 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr) 2026-03-25 05:40:28.774200 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds) 2026-03-25 05:40:28.774213 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd) 2026-03-25 05:40:28.774224 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd) 2026-03-25 05:40:28.774235 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-03-25 05:40:28.774247 | orchestrator | ok: [testbed-node-2] => (item=/var/run/ceph) 2026-03-25 05:40:28.774258 | orchestrator | ok: [testbed-node-2] => (item=/var/log/ceph) 2026-03-25 05:40:28.774269 | orchestrator | 2026-03-25 05:40:28.774282 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-03-25 05:40:28.774293 | orchestrator | Wednesday 25 March 2026 05:39:52 +0000 (0:00:06.512) 0:32:09.937 ******* 2026-03-25 05:40:28.774304 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:40:28.774315 | orchestrator | 2026-03-25 05:40:28.774326 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-03-25 05:40:28.774337 | orchestrator | Wednesday 25 March 2026 05:39:53 +0000 (0:00:00.777) 0:32:10.714 ******* 2026-03-25 05:40:28.774348 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:40:28.774359 | orchestrator | 2026-03-25 05:40:28.774370 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-03-25 05:40:28.774380 | orchestrator | Wednesday 25 March 2026 05:39:54 +0000 (0:00:00.779) 0:32:11.494 ******* 2026-03-25 05:40:28.774391 | 
orchestrator | skipping: [testbed-node-2] 2026-03-25 05:40:28.774402 | orchestrator | 2026-03-25 05:40:28.774413 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-03-25 05:40:28.774424 | orchestrator | Wednesday 25 March 2026 05:39:55 +0000 (0:00:00.833) 0:32:12.328 ******* 2026-03-25 05:40:28.774435 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:40:28.774446 | orchestrator | 2026-03-25 05:40:28.774458 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-03-25 05:40:28.774471 | orchestrator | Wednesday 25 March 2026 05:39:56 +0000 (0:00:00.773) 0:32:13.102 ******* 2026-03-25 05:40:28.774483 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:40:28.774496 | orchestrator | 2026-03-25 05:40:28.774509 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-03-25 05:40:28.774522 | orchestrator | Wednesday 25 March 2026 05:39:56 +0000 (0:00:00.824) 0:32:13.926 ******* 2026-03-25 05:40:28.774535 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:40:28.774547 | orchestrator | 2026-03-25 05:40:28.774559 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-03-25 05:40:28.774573 | orchestrator | Wednesday 25 March 2026 05:39:57 +0000 (0:00:00.758) 0:32:14.684 ******* 2026-03-25 05:40:28.774585 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:40:28.774597 | orchestrator | 2026-03-25 05:40:28.774609 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-03-25 05:40:28.774621 | orchestrator | Wednesday 25 March 2026 05:39:58 +0000 (0:00:00.778) 0:32:15.463 ******* 2026-03-25 05:40:28.774634 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:40:28.774646 | orchestrator | 2026-03-25 05:40:28.774674 | orchestrator | TASK [ceph-config : Set_fact 
num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-03-25 05:40:28.774688 | orchestrator | Wednesday 25 March 2026 05:39:59 +0000 (0:00:00.819) 0:32:16.282 ******* 2026-03-25 05:40:28.774700 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:40:28.774713 | orchestrator | 2026-03-25 05:40:28.774724 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-03-25 05:40:28.774737 | orchestrator | Wednesday 25 March 2026 05:40:00 +0000 (0:00:00.790) 0:32:17.073 ******* 2026-03-25 05:40:28.774774 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:40:28.774786 | orchestrator | 2026-03-25 05:40:28.774799 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-03-25 05:40:28.774811 | orchestrator | Wednesday 25 March 2026 05:40:00 +0000 (0:00:00.846) 0:32:17.919 ******* 2026-03-25 05:40:28.774823 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:40:28.774834 | orchestrator | 2026-03-25 05:40:28.774845 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-03-25 05:40:28.774856 | orchestrator | Wednesday 25 March 2026 05:40:01 +0000 (0:00:00.774) 0:32:18.694 ******* 2026-03-25 05:40:28.774866 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:40:28.774877 | orchestrator | 2026-03-25 05:40:28.774888 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-03-25 05:40:28.774898 | orchestrator | Wednesday 25 March 2026 05:40:02 +0000 (0:00:00.815) 0:32:19.509 ******* 2026-03-25 05:40:28.774909 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:40:28.774920 | orchestrator | 2026-03-25 05:40:28.774931 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-03-25 05:40:28.774941 | orchestrator | Wednesday 25 March 2026 05:40:03 +0000 (0:00:00.883) 0:32:20.393 ******* 
2026-03-25 05:40:28.774952 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:40:28.774963 | orchestrator | 2026-03-25 05:40:28.774998 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-03-25 05:40:28.775009 | orchestrator | Wednesday 25 March 2026 05:40:04 +0000 (0:00:00.854) 0:32:21.248 ******* 2026-03-25 05:40:28.775020 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:40:28.775031 | orchestrator | 2026-03-25 05:40:28.775041 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-03-25 05:40:28.775052 | orchestrator | Wednesday 25 March 2026 05:40:05 +0000 (0:00:00.918) 0:32:22.166 ******* 2026-03-25 05:40:28.775063 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:40:28.775074 | orchestrator | 2026-03-25 05:40:28.775084 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-03-25 05:40:28.775095 | orchestrator | Wednesday 25 March 2026 05:40:06 +0000 (0:00:00.851) 0:32:23.017 ******* 2026-03-25 05:40:28.775124 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:40:28.775136 | orchestrator | 2026-03-25 05:40:28.775147 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-25 05:40:28.775159 | orchestrator | Wednesday 25 March 2026 05:40:06 +0000 (0:00:00.756) 0:32:23.774 ******* 2026-03-25 05:40:28.775170 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:40:28.775181 | orchestrator | 2026-03-25 05:40:28.775192 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-03-25 05:40:28.775202 | orchestrator | Wednesday 25 March 2026 05:40:07 +0000 (0:00:00.752) 0:32:24.527 ******* 2026-03-25 05:40:28.775213 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:40:28.775224 | orchestrator | 2026-03-25 05:40:28.775234 | orchestrator | 
TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-03-25 05:40:28.775246 | orchestrator | Wednesday 25 March 2026 05:40:08 +0000 (0:00:00.794) 0:32:25.321 ******* 2026-03-25 05:40:28.775256 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:40:28.775267 | orchestrator | 2026-03-25 05:40:28.775278 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-25 05:40:28.775288 | orchestrator | Wednesday 25 March 2026 05:40:09 +0000 (0:00:00.751) 0:32:26.074 ******* 2026-03-25 05:40:28.775299 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:40:28.775310 | orchestrator | 2026-03-25 05:40:28.775320 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-25 05:40:28.775331 | orchestrator | Wednesday 25 March 2026 05:40:09 +0000 (0:00:00.755) 0:32:26.829 ******* 2026-03-25 05:40:28.775342 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2026-03-25 05:40:28.775353 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2026-03-25 05:40:28.775371 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2026-03-25 05:40:28.775382 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:40:28.775393 | orchestrator | 2026-03-25 05:40:28.775568 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-25 05:40:28.775586 | orchestrator | Wednesday 25 March 2026 05:40:10 +0000 (0:00:01.034) 0:32:27.864 ******* 2026-03-25 05:40:28.775597 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2026-03-25 05:40:28.775608 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2026-03-25 05:40:28.775618 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2026-03-25 05:40:28.775629 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:40:28.775640 | orchestrator | 2026-03-25 05:40:28.775651 | 
orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-25 05:40:28.775662 | orchestrator | Wednesday 25 March 2026 05:40:11 +0000 (0:00:01.011) 0:32:28.876 ******* 2026-03-25 05:40:28.775673 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2026-03-25 05:40:28.775683 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2026-03-25 05:40:28.775694 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2026-03-25 05:40:28.775705 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:40:28.775716 | orchestrator | 2026-03-25 05:40:28.775727 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-25 05:40:28.775738 | orchestrator | Wednesday 25 March 2026 05:40:12 +0000 (0:00:01.100) 0:32:29.976 ******* 2026-03-25 05:40:28.775749 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:40:28.775760 | orchestrator | 2026-03-25 05:40:28.775778 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-25 05:40:28.775789 | orchestrator | Wednesday 25 March 2026 05:40:13 +0000 (0:00:00.833) 0:32:30.809 ******* 2026-03-25 05:40:28.775800 | orchestrator | skipping: [testbed-node-2] => (item=0)  2026-03-25 05:40:28.775811 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:40:28.775821 | orchestrator | 2026-03-25 05:40:28.775832 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-03-25 05:40:28.775843 | orchestrator | Wednesday 25 March 2026 05:40:14 +0000 (0:00:00.938) 0:32:31.747 ******* 2026-03-25 05:40:28.775854 | orchestrator | ok: [testbed-node-2] 2026-03-25 05:40:28.775865 | orchestrator | 2026-03-25 05:40:28.775876 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2026-03-25 05:40:28.775886 | orchestrator | Wednesday 25 March 2026 05:40:16 +0000 (0:00:01.443) 
0:32:33.191 ******* 2026-03-25 05:40:28.775897 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-25 05:40:28.775910 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-25 05:40:28.775920 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-03-25 05:40:28.775931 | orchestrator | 2026-03-25 05:40:28.775942 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2026-03-25 05:40:28.775953 | orchestrator | Wednesday 25 March 2026 05:40:17 +0000 (0:00:01.713) 0:32:34.905 ******* 2026-03-25 05:40:28.775963 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-2 2026-03-25 05:40:28.776008 | orchestrator | 2026-03-25 05:40:28.776019 | orchestrator | TASK [ceph-mgr : Create mgr directory] ***************************************** 2026-03-25 05:40:28.776030 | orchestrator | Wednesday 25 March 2026 05:40:18 +0000 (0:00:01.094) 0:32:36.000 ******* 2026-03-25 05:40:28.776041 | orchestrator | ok: [testbed-node-2] 2026-03-25 05:40:28.776051 | orchestrator | 2026-03-25 05:40:28.776062 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] *************************************** 2026-03-25 05:40:28.776073 | orchestrator | Wednesday 25 March 2026 05:40:20 +0000 (0:00:01.519) 0:32:37.519 ******* 2026-03-25 05:40:28.776084 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:40:28.776095 | orchestrator | 2026-03-25 05:40:28.776105 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] ********************* 2026-03-25 05:40:28.776124 | orchestrator | Wednesday 25 March 2026 05:40:21 +0000 (0:00:01.132) 0:32:38.651 ******* 2026-03-25 05:40:28.776135 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-25 05:40:28.776146 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-25 
05:40:28.776166 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-25 05:41:16.667149 | orchestrator | ok: [testbed-node-2 -> {{ groups[mon_group_name][0] }}] 2026-03-25 05:41:16.667266 | orchestrator | 2026-03-25 05:41:16.667282 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] ******************************************* 2026-03-25 05:41:16.667294 | orchestrator | Wednesday 25 March 2026 05:40:28 +0000 (0:00:07.122) 0:32:45.774 ******* 2026-03-25 05:41:16.667305 | orchestrator | ok: [testbed-node-2] 2026-03-25 05:41:16.667318 | orchestrator | 2026-03-25 05:41:16.667328 | orchestrator | TASK [ceph-mgr : Get keys from monitors] *************************************** 2026-03-25 05:41:16.667339 | orchestrator | Wednesday 25 March 2026 05:40:30 +0000 (0:00:01.247) 0:32:47.022 ******* 2026-03-25 05:41:16.667350 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-03-25 05:41:16.667361 | orchestrator | ok: [testbed-node-2] => (item=None) 2026-03-25 05:41:16.667372 | orchestrator | 2026-03-25 05:41:16.667383 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] *********************************** 2026-03-25 05:41:16.667393 | orchestrator | Wednesday 25 March 2026 05:40:33 +0000 (0:00:03.412) 0:32:50.435 ******* 2026-03-25 05:41:16.667404 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-03-25 05:41:16.667415 | orchestrator | ok: [testbed-node-2] => (item=None) 2026-03-25 05:41:16.667426 | orchestrator | 2026-03-25 05:41:16.667437 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] ************************************** 2026-03-25 05:41:16.667448 | orchestrator | Wednesday 25 March 2026 05:40:35 +0000 (0:00:02.046) 0:32:52.481 ******* 2026-03-25 05:41:16.667458 | orchestrator | ok: [testbed-node-2] 2026-03-25 05:41:16.667469 | orchestrator | 2026-03-25 05:41:16.667479 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] ***************** 2026-03-25 
05:41:16.667490 | orchestrator | Wednesday 25 March 2026 05:40:36 +0000 (0:00:01.509) 0:32:53.991 ******* 2026-03-25 05:41:16.667501 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:41:16.667512 | orchestrator | 2026-03-25 05:41:16.667522 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2026-03-25 05:41:16.667533 | orchestrator | Wednesday 25 March 2026 05:40:37 +0000 (0:00:00.838) 0:32:54.829 ******* 2026-03-25 05:41:16.667543 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:41:16.667554 | orchestrator | 2026-03-25 05:41:16.667565 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] **************************************** 2026-03-25 05:41:16.667575 | orchestrator | Wednesday 25 March 2026 05:40:38 +0000 (0:00:00.770) 0:32:55.600 ******* 2026-03-25 05:41:16.667586 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-2 2026-03-25 05:41:16.667597 | orchestrator | 2026-03-25 05:41:16.667607 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] ************* 2026-03-25 05:41:16.667618 | orchestrator | Wednesday 25 March 2026 05:40:39 +0000 (0:00:01.247) 0:32:56.848 ******* 2026-03-25 05:41:16.667628 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:41:16.667639 | orchestrator | 2026-03-25 05:41:16.667650 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] *********************** 2026-03-25 05:41:16.667661 | orchestrator | Wednesday 25 March 2026 05:40:40 +0000 (0:00:01.117) 0:32:57.965 ******* 2026-03-25 05:41:16.667671 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:41:16.667682 | orchestrator | 2026-03-25 05:41:16.667693 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************ 2026-03-25 05:41:16.667703 | orchestrator | Wednesday 25 March 2026 05:40:42 +0000 (0:00:01.191) 0:32:59.157 ******* 2026-03-25 05:41:16.667731 | orchestrator | included: 
/ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-2 2026-03-25 05:41:16.667743 | orchestrator | 2026-03-25 05:41:16.667753 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] *********************************** 2026-03-25 05:41:16.667794 | orchestrator | Wednesday 25 March 2026 05:40:43 +0000 (0:00:01.156) 0:33:00.314 ******* 2026-03-25 05:41:16.667816 | orchestrator | ok: [testbed-node-2] 2026-03-25 05:41:16.667836 | orchestrator | 2026-03-25 05:41:16.667888 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************ 2026-03-25 05:41:16.667901 | orchestrator | Wednesday 25 March 2026 05:40:45 +0000 (0:00:02.085) 0:33:02.399 ******* 2026-03-25 05:41:16.667912 | orchestrator | ok: [testbed-node-2] 2026-03-25 05:41:16.667922 | orchestrator | 2026-03-25 05:41:16.667933 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] *************************************** 2026-03-25 05:41:16.667943 | orchestrator | Wednesday 25 March 2026 05:40:47 +0000 (0:00:01.959) 0:33:04.359 ******* 2026-03-25 05:41:16.667954 | orchestrator | ok: [testbed-node-2] 2026-03-25 05:41:16.667965 | orchestrator | 2026-03-25 05:41:16.667975 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ******************************************** 2026-03-25 05:41:16.667986 | orchestrator | Wednesday 25 March 2026 05:40:49 +0000 (0:00:02.444) 0:33:06.803 ******* 2026-03-25 05:41:16.667996 | orchestrator | changed: [testbed-node-2] 2026-03-25 05:41:16.668007 | orchestrator | 2026-03-25 05:41:16.668017 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] ************************************** 2026-03-25 05:41:16.668028 | orchestrator | Wednesday 25 March 2026 05:40:53 +0000 (0:00:03.558) 0:33:10.361 ******* 2026-03-25 05:41:16.668038 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2 2026-03-25 05:41:16.668049 | orchestrator | 2026-03-25 05:41:16.668059 | orchestrator | TASK [ceph-mgr : Wait for all mgr to 
be up] ************************************ 2026-03-25 05:41:16.668070 | orchestrator | Wednesday 25 March 2026 05:40:54 +0000 (0:00:01.557) 0:33:11.919 ******* 2026-03-25 05:41:16.668080 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-03-25 05:41:16.668091 | orchestrator | 2026-03-25 05:41:16.668101 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] **************************** 2026-03-25 05:41:16.668112 | orchestrator | Wednesday 25 March 2026 05:40:57 +0000 (0:00:02.508) 0:33:14.427 ******* 2026-03-25 05:41:16.668123 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-03-25 05:41:16.668133 | orchestrator | 2026-03-25 05:41:16.668144 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] *** 2026-03-25 05:41:16.668154 | orchestrator | Wednesday 25 March 2026 05:41:00 +0000 (0:00:02.739) 0:33:17.166 ******* 2026-03-25 05:41:16.668165 | orchestrator | ok: [testbed-node-2] 2026-03-25 05:41:16.668175 | orchestrator | 2026-03-25 05:41:16.668186 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] ************************** 2026-03-25 05:41:16.668213 | orchestrator | Wednesday 25 March 2026 05:41:02 +0000 (0:00:02.138) 0:33:19.305 ******* 2026-03-25 05:41:16.668225 | orchestrator | ok: [testbed-node-2] 2026-03-25 05:41:16.668235 | orchestrator | 2026-03-25 05:41:16.668246 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] ***************************** 2026-03-25 05:41:16.668257 | orchestrator | Wednesday 25 March 2026 05:41:03 +0000 (0:00:01.220) 0:33:20.526 ******* 2026-03-25 05:41:16.668267 | orchestrator | skipping: [testbed-node-2] => (item=dashboard)  2026-03-25 05:41:16.668278 | orchestrator | skipping: [testbed-node-2] => (item=prometheus)  2026-03-25 05:41:16.668289 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:41:16.668299 | orchestrator | 2026-03-25 05:41:16.668310 | orchestrator | TASK 
[ceph-mgr : Add modules to ceph-mgr] ************************************** 2026-03-25 05:41:16.668320 | orchestrator | Wednesday 25 March 2026 05:41:04 +0000 (0:00:01.375) 0:33:21.901 ******* 2026-03-25 05:41:16.668331 | orchestrator | skipping: [testbed-node-2] => (item=balancer)  2026-03-25 05:41:16.668342 | orchestrator | skipping: [testbed-node-2] => (item=dashboard)  2026-03-25 05:41:16.668352 | orchestrator | skipping: [testbed-node-2] => (item=prometheus)  2026-03-25 05:41:16.668362 | orchestrator | skipping: [testbed-node-2] => (item=status)  2026-03-25 05:41:16.668373 | orchestrator | skipping: [testbed-node-2] 2026-03-25 05:41:16.668383 | orchestrator | 2026-03-25 05:41:16.668394 | orchestrator | PLAY [Set osd flags] *********************************************************** 2026-03-25 05:41:16.668414 | orchestrator | 2026-03-25 05:41:16.668425 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-03-25 05:41:16.668436 | orchestrator | Wednesday 25 March 2026 05:41:06 +0000 (0:00:01.965) 0:33:23.866 ******* 2026-03-25 05:41:16.668446 | orchestrator | ok: [testbed-node-3] 2026-03-25 05:41:16.668457 | orchestrator | ok: [testbed-node-4] 2026-03-25 05:41:16.668468 | orchestrator | ok: [testbed-node-5] 2026-03-25 05:41:16.668479 | orchestrator | 2026-03-25 05:41:16.668490 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-03-25 05:41:16.668500 | orchestrator | Wednesday 25 March 2026 05:41:08 +0000 (0:00:01.648) 0:33:25.515 ******* 2026-03-25 05:41:16.668510 | orchestrator | ok: [testbed-node-3] 2026-03-25 05:41:16.668521 | orchestrator | ok: [testbed-node-4] 2026-03-25 05:41:16.668532 | orchestrator | ok: [testbed-node-5] 2026-03-25 05:41:16.668542 | orchestrator | 2026-03-25 05:41:16.668553 | orchestrator | TASK [Get pool list] *********************************************************** 2026-03-25 05:41:16.668563 | orchestrator | Wednesday 25 March 2026 
05:41:10 +0000 (0:00:01.806) 0:33:27.322 ******* 2026-03-25 05:41:16.668574 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-25 05:41:16.668585 | orchestrator | 2026-03-25 05:41:16.668595 | orchestrator | TASK [Get balancer module status] ********************************************** 2026-03-25 05:41:16.668606 | orchestrator | Wednesday 25 March 2026 05:41:13 +0000 (0:00:02.898) 0:33:30.220 ******* 2026-03-25 05:41:16.668616 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-25 05:41:16.668627 | orchestrator | 2026-03-25 05:41:16.668637 | orchestrator | TASK [Set_fact pools_pgautoscaler_mode] **************************************** 2026-03-25 05:41:16.668648 | orchestrator | Wednesday 25 March 2026 05:41:16 +0000 (0:00:02.867) 0:33:33.088 ******* 2026-03-25 05:41:16.668673 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 1, 'pool_name': '.mgr', 'create_time': '2026-03-25T03:02:45.055250+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 2, 'min_size': 1, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 1, 'pg_placement_num': 1, 'pg_placement_num_target': 1, 'pg_num_target': 1, 'pg_num_pending': 1, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '21', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 
'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {'pg_num_max': 32, 'pg_num_min': 1}, 'application_metadata': {'mgr': {}}, 'read_balance': {'score_acting': 6.059999942779541, 'score_stable': 6.059999942779541, 'optimal_score': 0.33000001311302185, 'raw_score_acting': 2, 'raw_score_stable': 2, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-03-25 05:41:16.668701 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 2, 'pool_name': 'cephfs_data', 'create_time': '2026-03-25T03:03:58.613381+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '33', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '31', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 
'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'cephfs': {'data': 'cephfs'}}, 'read_balance': {'score_acting': 1.309999942779541, 'score_stable': 1.309999942779541, 'optimal_score': 1, 'raw_score_acting': 1.309999942779541, 'raw_score_stable': 1.309999942779541, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-03-25 05:41:17.435686 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 3, 'pool_name': 'cephfs_metadata', 'create_time': '2026-03-25T03:04:02.412777+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 16, 'pg_placement_num': 16, 'pg_placement_num_target': 16, 'pg_num_target': 16, 'pg_num_pending': 16, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '68', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '31', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 
'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {'pg_autoscale_bias': 4, 'pg_num_min': 16, 'recovery_priority': 5}, 'application_metadata': {'cephfs': {'metadata': 'cephfs'}}, 'read_balance': {'score_acting': 1.8799999952316284, 'score_stable': 1.8799999952316284, 'optimal_score': 1, 'raw_score_acting': 1.8799999952316284, 'raw_score_stable': 1.8799999952316284, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-03-25 05:41:17.435794 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 4, 'pool_name': 'default.rgw.buckets.data', 'create_time': '2026-03-25T03:05:02.570399+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '79', 'last_force_op_resend': '0', 
'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '73', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rgw': {}}, 'read_balance': {'score_acting': 1.5, 'score_stable': 1.5, 'optimal_score': 1, 'raw_score_acting': 1.5, 'raw_score_stable': 1.5, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-03-25 05:41:17.435834 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 5, 'pool_name': 'default.rgw.buckets.index', 'create_time': '2026-03-25T03:05:08.221732+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '79', 
'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '73', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rgw': {}}, 'read_balance': {'score_acting': 1.690000057220459, 'score_stable': 1.690000057220459, 'optimal_score': 1, 'raw_score_acting': 1.690000057220459, 'raw_score_stable': 1.690000057220459, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-03-25 05:41:17.435846 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 6, 'pool_name': 'default.rgw.control', 'create_time': '2026-03-25T03:05:14.565344+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 
'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '79', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '75', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rgw': {}}, 'read_balance': {'score_acting': 1.8799999952316284, 'score_stable': 1.8799999952316284, 'optimal_score': 1, 'raw_score_acting': 1.8799999952316284, 'raw_score_stable': 1.8799999952316284, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-03-25 05:41:17.435937 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 7, 'pool_name': 'default.rgw.log', 'create_time': '2026-03-25T03:05:20.622312+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': 
'0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '194', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '75', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rgw': {}}, 'read_balance': {'score_acting': 1.8799999952316284, 'score_stable': 1.8799999952316284, 'optimal_score': 1, 'raw_score_acting': 1.8799999952316284, 'raw_score_stable': 1.8799999952316284, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-03-25 05:41:18.981360 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 8, 'pool_name': 'default.rgw.meta', 'create_time': '2026-03-25T03:05:25.719160+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 
'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '79', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '77', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rgw': {}}, 'read_balance': {'score_acting': 1.690000057220459, 'score_stable': 1.690000057220459, 'optimal_score': 1, 'raw_score_acting': 1.690000057220459, 'raw_score_stable': 1.690000057220459, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-03-25 05:41:18.981499 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 9, 'pool_name': '.rgw.root', 'create_time': '2026-03-25T03:05:37.242892+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 2, 'min_size': 1, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 
32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '79', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '77', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rgw': {}}, 'read_balance': {'score_acting': 1.5, 'score_stable': 1.5, 'optimal_score': 1, 'raw_score_acting': 1.5, 'raw_score_stable': 1.5, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-03-25 05:41:18.981540 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 10, 'pool_name': 'backups', 'create_time': '2026-03-25T03:06:27.317326+0000', 'flags': 8193, 'flags_names': 'hashpspool,selfmanaged_snaps', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 
'pg_autoscale_mode': 'off', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '113', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 3, 'snap_epoch': 113, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rbd': {}}, 'read_balance': {'score_acting': 1.8799999952316284, 'score_stable': 1.8799999952316284, 'optimal_score': 1, 'raw_score_acting': 1.8799999952316284, 'raw_score_stable': 1.8799999952316284, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-03-25 05:41:18.981568 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 11, 'pool_name': 'volumes', 'create_time': '2026-03-25T03:06:36.133212+0000', 'flags': 8193, 'flags_names': 'hashpspool,selfmanaged_snaps', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 
'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'off', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '121', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 3, 'snap_epoch': 121, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rbd': {}}, 'read_balance': {'score_acting': 1.5, 'score_stable': 1.5, 'optimal_score': 1, 'raw_score_acting': 1.5, 'raw_score_stable': 1.5, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-03-25 05:41:18.981592 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 12, 'pool_name': 'images', 'create_time': '2026-03-25T03:06:45.069589+0000', 'flags': 8193, 'flags_names': 'hashpspool,selfmanaged_snaps', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 
'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'off', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '204', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 6, 'snap_epoch': 204, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rbd': {}}, 'read_balance': {'score_acting': 1.690000057220459, 'score_stable': 1.690000057220459, 'optimal_score': 1, 'raw_score_acting': 1.690000057220459, 'raw_score_stable': 1.690000057220459, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-03-25 05:42:53.008371 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 13, 'pool_name': 'metrics', 'create_time': '2026-03-25T03:06:53.983587+0000', 'flags': 8193, 'flags_names': 'hashpspool,selfmanaged_snaps', 
'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'off', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '138', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 3, 'snap_epoch': 138, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rbd': {}}, 'read_balance': {'score_acting': 1.5, 'score_stable': 1.5, 'optimal_score': 1, 'raw_score_acting': 1.5, 'raw_score_stable': 1.5, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-03-25 05:42:53.008530 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 14, 'pool_name': 'vms', 'create_time': '2026-03-25T03:07:03.183000+0000', 'flags': 8193, 'flags_names': 
'hashpspool,selfmanaged_snaps', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'off', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '147', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 3, 'snap_epoch': 147, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rbd': {}}, 'read_balance': {'score_acting': 1.309999942779541, 'score_stable': 1.309999942779541, 'optimal_score': 1, 'raw_score_acting': 1.309999942779541, 'raw_score_stable': 1.309999942779541, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-03-25 05:42:53.008563 | orchestrator | 2026-03-25 05:42:53.008587 | orchestrator | TASK [Disable balancer] 
******************************************************** 2026-03-25 05:42:53.008597 | orchestrator | Wednesday 25 March 2026 05:41:18 +0000 (0:00:02.903) 0:33:35.992 ******* 2026-03-25 05:42:53.008605 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-25 05:42:53.008613 | orchestrator | 2026-03-25 05:42:53.008620 | orchestrator | TASK [Disable pg autoscale on pools] ******************************************* 2026-03-25 05:42:53.008627 | orchestrator | Wednesday 25 March 2026 05:41:21 +0000 (0:00:02.969) 0:33:38.961 ******* 2026-03-25 05:42:53.008689 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': '.mgr', 'mode': 'on'}) 2026-03-25 05:42:53.008701 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'cephfs_data', 'mode': 'on'}) 2026-03-25 05:42:53.008709 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'cephfs_metadata', 'mode': 'on'}) 2026-03-25 05:42:53.008716 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.buckets.data', 'mode': 'on'}) 2026-03-25 05:42:53.008725 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.buckets.index', 'mode': 'on'}) 2026-03-25 05:42:53.008733 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.control', 'mode': 'on'}) 2026-03-25 05:42:53.008740 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.log', 'mode': 'on'}) 2026-03-25 05:42:53.008747 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.meta', 'mode': 'on'}) 2026-03-25 05:42:53.008754 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': '.rgw.root', 'mode': 'on'}) 2026-03-25 05:42:53.008761 | orchestrator | 
skipping: [testbed-node-3] => (item={'name': 'backups', 'mode': 'off'})  2026-03-25 05:42:53.008768 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'volumes', 'mode': 'off'})  2026-03-25 05:42:53.008782 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'images', 'mode': 'off'})  2026-03-25 05:42:53.008789 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'metrics', 'mode': 'off'})  2026-03-25 05:42:53.008796 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vms', 'mode': 'off'})  2026-03-25 05:42:53.008804 | orchestrator | 2026-03-25 05:42:53.008811 | orchestrator | TASK [Set osd flags] *********************************************************** 2026-03-25 05:42:53.008823 | orchestrator | Wednesday 25 March 2026 05:42:36 +0000 (0:01:14.418) 0:34:53.379 ******* 2026-03-25 05:42:53.008835 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=noout) 2026-03-25 05:42:53.008842 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=nodeep-scrub) 2026-03-25 05:42:53.008849 | orchestrator | 2026-03-25 05:42:53.008856 | orchestrator | PLAY [Upgrade ceph osds cluster] *********************************************** 2026-03-25 05:42:53.008864 | orchestrator | 2026-03-25 05:42:53.008871 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-03-25 05:42:53.008878 | orchestrator | Wednesday 25 March 2026 05:42:41 +0000 (0:00:05.510) 0:34:58.890 ******* 2026-03-25 05:42:53.008885 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3 2026-03-25 05:42:53.008892 | orchestrator | 2026-03-25 05:42:53.008899 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-03-25 05:42:53.008908 | orchestrator | Wednesday 25 March 2026 05:42:43 +0000 (0:00:01.308) 0:35:00.199 ******* 2026-03-25 05:42:53.008917 | orchestrator | ok: [testbed-node-3] 2026-03-25 
05:42:53.008925 | orchestrator |
2026-03-25 05:42:53.008933 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-03-25 05:42:53.008941 | orchestrator | Wednesday 25 March 2026 05:42:44 +0000 (0:00:01.453) 0:35:01.652 *******
2026-03-25 05:42:53.008949 | orchestrator | ok: [testbed-node-3]
2026-03-25 05:42:53.008958 | orchestrator |
2026-03-25 05:42:53.008966 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-03-25 05:42:53.008975 | orchestrator | Wednesday 25 March 2026 05:42:45 +0000 (0:00:01.167) 0:35:02.820 *******
2026-03-25 05:42:53.008983 | orchestrator | ok: [testbed-node-3]
2026-03-25 05:42:53.008991 | orchestrator |
2026-03-25 05:42:53.009000 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-03-25 05:42:53.009008 | orchestrator | Wednesday 25 March 2026 05:42:47 +0000 (0:00:01.468) 0:35:04.288 *******
2026-03-25 05:42:53.009017 | orchestrator | ok: [testbed-node-3]
2026-03-25 05:42:53.009025 | orchestrator |
2026-03-25 05:42:53.009033 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-03-25 05:42:53.009041 | orchestrator | Wednesday 25 March 2026 05:42:48 +0000 (0:00:01.121) 0:35:05.410 *******
2026-03-25 05:42:53.009050 | orchestrator | ok: [testbed-node-3]
2026-03-25 05:42:53.009058 | orchestrator |
2026-03-25 05:42:53.009067 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-03-25 05:42:53.009074 | orchestrator | Wednesday 25 March 2026 05:42:49 +0000 (0:00:01.131) 0:35:06.541 *******
2026-03-25 05:42:53.009081 | orchestrator | ok: [testbed-node-3]
2026-03-25 05:42:53.009088 | orchestrator |
2026-03-25 05:42:53.009096 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-03-25 05:42:53.009103 | orchestrator | Wednesday 25 March 2026 05:42:50 +0000 (0:00:01.135) 0:35:07.677 *******
2026-03-25 05:42:53.009110 | orchestrator | skipping: [testbed-node-3]
2026-03-25 05:42:53.009117 | orchestrator |
2026-03-25 05:42:53.009125 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-03-25 05:42:53.009132 | orchestrator | Wednesday 25 March 2026 05:42:51 +0000 (0:00:01.176) 0:35:08.854 *******
2026-03-25 05:42:53.009139 | orchestrator | ok: [testbed-node-3]
2026-03-25 05:42:53.009146 | orchestrator |
2026-03-25 05:42:53.009159 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-03-25 05:43:19.288798 | orchestrator | Wednesday 25 March 2026 05:42:52 +0000 (0:00:01.155) 0:35:10.010 *******
2026-03-25 05:43:19.288939 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-25 05:43:19.288956 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-25 05:43:19.288983 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-25 05:43:19.288995 | orchestrator |
2026-03-25 05:43:19.289007 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-03-25 05:43:19.289018 | orchestrator | Wednesday 25 March 2026 05:42:55 +0000 (0:00:02.091) 0:35:12.101 *******
2026-03-25 05:43:19.289029 | orchestrator | ok: [testbed-node-3]
2026-03-25 05:43:19.289041 | orchestrator |
2026-03-25 05:43:19.289052 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-03-25 05:43:19.289063 | orchestrator | Wednesday 25 March 2026 05:42:56 +0000 (0:00:01.504) 0:35:13.605 *******
2026-03-25 05:43:19.289074 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-25 05:43:19.289084 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-25 05:43:19.289095 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-25 05:43:19.289106 | orchestrator |
2026-03-25 05:43:19.289117 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-03-25 05:43:19.289129 | orchestrator | Wednesday 25 March 2026 05:42:59 +0000 (0:00:03.211) 0:35:16.816 *******
2026-03-25 05:43:19.289140 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-03-25 05:43:19.289152 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-03-25 05:43:19.289162 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-03-25 05:43:19.289174 | orchestrator | skipping: [testbed-node-3]
2026-03-25 05:43:19.289185 | orchestrator |
2026-03-25 05:43:19.289195 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-03-25 05:43:19.289206 | orchestrator | Wednesday 25 March 2026 05:43:01 +0000 (0:00:01.901) 0:35:18.717 *******
2026-03-25 05:43:19.289219 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-03-25 05:43:19.289233 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-03-25 05:43:19.289244 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-03-25 05:43:19.289255 | orchestrator | skipping: [testbed-node-3]
2026-03-25 05:43:19.289266 | orchestrator |
2026-03-25 05:43:19.289277 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-03-25 05:43:19.289288 | orchestrator | Wednesday 25 March 2026 05:43:03 +0000 (0:00:02.171) 0:35:20.889 *******
2026-03-25 05:43:19.289302 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-03-25 05:43:19.289315 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-03-25 05:43:19.289336 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-03-25 05:43:19.289350 | orchestrator | skipping: [testbed-node-3]
2026-03-25 05:43:19.289362 | orchestrator |
2026-03-25 05:43:19.289375 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-03-25 05:43:19.289388 | orchestrator | Wednesday 25 March 2026 05:43:05 +0000 (0:00:01.256) 0:35:22.146 ******* 2026-03-25
05:43:19.289426 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'f2f4f0f2e000', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-25 05:42:57.142253', 'end': '2026-03-25 05:42:57.179657', 'delta': '0:00:00.037404', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['f2f4f0f2e000'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-03-25 05:43:19.289443 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '04618a84c691', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-25 05:42:58.023255', 'end': '2026-03-25 05:42:58.075508', 'delta': '0:00:00.052253', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['04618a84c691'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-03-25 05:43:19.289456 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'da72f46e99c2', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-25 05:42:58.604501', 'end': '2026-03-25 05:42:58.635028', 'delta': '0:00:00.030527', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter 
name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['da72f46e99c2'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-03-25 05:43:19.289469 | orchestrator |
2026-03-25 05:43:19.289481 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-03-25 05:43:19.289492 | orchestrator | Wednesday 25 March 2026 05:43:06 +0000 (0:00:01.201) 0:35:23.347 *******
2026-03-25 05:43:19.289503 | orchestrator | ok: [testbed-node-3]
2026-03-25 05:43:19.289514 | orchestrator |
2026-03-25 05:43:19.289525 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-03-25 05:43:19.289536 | orchestrator | Wednesday 25 March 2026 05:43:07 +0000 (0:00:01.258) 0:35:24.605 *******
2026-03-25 05:43:19.289546 | orchestrator | skipping: [testbed-node-3]
2026-03-25 05:43:19.289557 | orchestrator |
2026-03-25 05:43:19.289578 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-03-25 05:43:19.289623 | orchestrator | Wednesday 25 March 2026 05:43:08 +0000 (0:00:01.241) 0:35:25.846 *******
2026-03-25 05:43:19.289653 | orchestrator | ok: [testbed-node-3]
2026-03-25 05:43:19.289673 | orchestrator |
2026-03-25 05:43:19.289693 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-03-25 05:43:19.289712 | orchestrator | Wednesday 25 March 2026 05:43:10 +0000 (0:00:01.175) 0:35:27.022 *******
2026-03-25 05:43:19.289731 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-03-25 05:43:19.289742 | orchestrator |
2026-03-25 05:43:19.289753 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-03-25 05:43:19.289764 | orchestrator | Wednesday 25 March 2026 05:43:12 +0000 (0:00:02.089) 0:35:29.111 *******
2026-03-25 05:43:19.289774 | orchestrator | ok: [testbed-node-3]
2026-03-25 05:43:19.289785 | orchestrator |
2026-03-25 05:43:19.289796 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-03-25 05:43:19.289806 | orchestrator | Wednesday 25 March 2026 05:43:13 +0000 (0:00:01.201) 0:35:30.312 *******
2026-03-25 05:43:19.289817 | orchestrator | skipping: [testbed-node-3]
2026-03-25 05:43:19.289828 | orchestrator |
2026-03-25 05:43:19.289838 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-03-25 05:43:19.289849 | orchestrator | Wednesday 25 March 2026 05:43:14 +0000 (0:00:01.146) 0:35:31.459 *******
2026-03-25 05:43:19.289860 | orchestrator | skipping: [testbed-node-3]
2026-03-25 05:43:19.289871 | orchestrator |
2026-03-25 05:43:19.289882 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-03-25 05:43:19.289892 | orchestrator | Wednesday 25 March 2026 05:43:15 +0000 (0:00:01.373) 0:35:32.832 *******
2026-03-25 05:43:19.289903 | orchestrator | skipping: [testbed-node-3]
2026-03-25 05:43:19.289914 | orchestrator |
2026-03-25 05:43:19.289924 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-03-25 05:43:19.289935 | orchestrator | Wednesday 25 March 2026 05:43:16 +0000 (0:00:01.133) 0:35:33.966 *******
2026-03-25 05:43:19.289946 | orchestrator | skipping: [testbed-node-3]
2026-03-25 05:43:19.289956 | orchestrator |
2026-03-25 05:43:19.289967 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-03-25 05:43:19.289978 | orchestrator | Wednesday 25 March 2026 05:43:18 +0000 (0:00:01.158) 0:35:35.125 *******
2026-03-25 05:43:19.289997 | orchestrator | ok: [testbed-node-3]
2026-03-25 05:43:24.214249 | orchestrator |
2026-03-25 05:43:24.214369 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-03-25 05:43:24.214391 | orchestrator | Wednesday 25 March 2026 05:43:19 +0000 (0:00:01.166) 0:35:36.291 *******
2026-03-25 05:43:24.214408 | orchestrator | skipping: [testbed-node-3]
2026-03-25 05:43:24.214426 | orchestrator |
2026-03-25 05:43:24.214461 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-03-25 05:43:24.214477 | orchestrator | Wednesday 25 March 2026 05:43:20 +0000 (0:00:01.182) 0:35:37.473 *******
2026-03-25 05:43:24.214491 | orchestrator | ok: [testbed-node-3]
2026-03-25 05:43:24.214507 | orchestrator |
2026-03-25 05:43:24.214522 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-03-25 05:43:24.214538 | orchestrator | Wednesday 25 March 2026 05:43:21 +0000 (0:00:01.177) 0:35:38.651 *******
2026-03-25 05:43:24.214554 | orchestrator | skipping: [testbed-node-3]
2026-03-25 05:43:24.214563 | orchestrator |
2026-03-25 05:43:24.214572 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-03-25 05:43:24.214625 | orchestrator | Wednesday 25 March 2026 05:43:22 +0000 (0:00:01.144) 0:35:39.795 *******
2026-03-25 05:43:24.214636 | orchestrator | ok: [testbed-node-3]
2026-03-25 05:43:24.214644 | orchestrator |
2026-03-25 05:43:24.214653 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-03-25 05:43:24.214662 | orchestrator | Wednesday 25 March 2026 05:43:23 +0000 (0:00:01.165) 0:35:40.961 *******
2026-03-25 05:43:24.214674 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable':
'0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-25 05:43:24.214713 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--2eb637af--fcba--56ed--b416--856a8f376a6e-osd--block--2eb637af--fcba--56ed--b416--856a8f376a6e', 'dm-uuid-LVM-I4brnFGe2wqMxfNLTgnFWAlpGdDDIQ6ufudluz5gbOp2W0Ru1BAN3Lof8sluy2g8'], 'uuids': ['a582f89c-a8ac-4a87-8a0b-f7c0ca2abef4'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'eaa5e6a9', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['fudluz-5gbO-p2W0-Ru1B-AN3L-of8s-luy2g8']}})  2026-03-25 05:43:24.214726 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_99e65ea9-8a8c-4114-a95e-6d6b779e8981', 'scsi-SQEMU_QEMU_HARDDISK_99e65ea9-8a8c-4114-a95e-6d6b779e8981'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '99e65ea9', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-03-25 05:43:24.214737 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-I510NI-gVOy-fVrn-Rpok-wKnF-L9wv-pxblpK', 'scsi-0QEMU_QEMU_HARDDISK_e0cf0e31-edea-4833-ac86-8b3021cd24a1', 'scsi-SQEMU_QEMU_HARDDISK_e0cf0e31-edea-4833-ac86-8b3021cd24a1'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'e0cf0e31', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--a7f517e2--016b--5c10--ac21--20c48339115f-osd--block--a7f517e2--016b--5c10--ac21--20c48339115f']}})  2026-03-25 05:43:24.214747 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-25 05:43:24.214780 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-25 05:43:24.214792 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-25-01-42-59-00'], 'labels': 
['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-03-25 05:43:24.214804 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-25 05:43:24.214820 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-63eFyg-4nkE-r3IX-y7pO-0UwA-AWeQ-8GeZyo', 'dm-uuid-CRYPT-LUKS2-10d41a0c964d43008e142cbf5f4d58c4-63eFyg-4nkE-r3IX-y7pO-0UwA-AWeQ-8GeZyo'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-03-25 05:43:24.214831 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-25 05:43:24.214841 | orchestrator | skipping: [testbed-node-3] => 
(item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--a7f517e2--016b--5c10--ac21--20c48339115f-osd--block--a7f517e2--016b--5c10--ac21--20c48339115f', 'dm-uuid-LVM-ppL9nqq4Eft0DXjzsCdcW3axPqGhidIo63eFyg4nkEr3IXy7pO0UwAAWeQ8GeZyo'], 'uuids': ['10d41a0c-964d-4300-8e14-2cbf5f4d58c4'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'e0cf0e31', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['63eFyg-4nkE-r3IX-y7pO-0UwA-AWeQ-8GeZyo']}})  2026-03-25 05:43:24.214852 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-ot6f5w-cwBB-rMe8-ml4g-P1Wb-D3d5-I1RZ9d', 'scsi-0QEMU_QEMU_HARDDISK_eaa5e6a9-2c24-4b33-854e-103871b2e9c6', 'scsi-SQEMU_QEMU_HARDDISK_eaa5e6a9-2c24-4b33-854e-103871b2e9c6'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'eaa5e6a9', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--2eb637af--fcba--56ed--b416--856a8f376a6e-osd--block--2eb637af--fcba--56ed--b416--856a8f376a6e']}})  2026-03-25 05:43:24.214869 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-25 05:43:25.667016 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5418d243-c22a-425d-8a7d-7c43bd549130', 'scsi-SQEMU_QEMU_HARDDISK_5418d243-c22a-425d-8a7d-7c43bd549130'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '5418d243', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5418d243-c22a-425d-8a7d-7c43bd549130-part16', 'scsi-SQEMU_QEMU_HARDDISK_5418d243-c22a-425d-8a7d-7c43bd549130-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5418d243-c22a-425d-8a7d-7c43bd549130-part14', 'scsi-SQEMU_QEMU_HARDDISK_5418d243-c22a-425d-8a7d-7c43bd549130-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5418d243-c22a-425d-8a7d-7c43bd549130-part15', 'scsi-SQEMU_QEMU_HARDDISK_5418d243-c22a-425d-8a7d-7c43bd549130-part15'], 'uuids': ['5C78-612A'], 
'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5418d243-c22a-425d-8a7d-7c43bd549130-part1', 'scsi-SQEMU_QEMU_HARDDISK_5418d243-c22a-425d-8a7d-7c43bd549130-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-03-25 05:43:25.667180 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-25 05:43:25.667212 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-25 05:43:25.667234 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-fudluz-5gbO-p2W0-Ru1B-AN3L-of8s-luy2g8', 'dm-uuid-CRYPT-LUKS2-a582f89ca8ac4a878a0bf7c0ca2abef4-fudluz-5gbO-p2W0-Ru1B-AN3L-of8s-luy2g8'], 'uuids': [], 'labels': [], 'masters': 
[]}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-03-25 05:43:25.667256 | orchestrator | skipping: [testbed-node-3] 2026-03-25 05:43:25.667276 | orchestrator | 2026-03-25 05:43:25.667295 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-03-25 05:43:25.667315 | orchestrator | Wednesday 25 March 2026 05:43:25 +0000 (0:00:01.484) 0:35:42.446 ******* 2026-03-25 05:43:25.667370 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 05:43:25.667393 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--2eb637af--fcba--56ed--b416--856a8f376a6e-osd--block--2eb637af--fcba--56ed--b416--856a8f376a6e', 'dm-uuid-LVM-I4brnFGe2wqMxfNLTgnFWAlpGdDDIQ6ufudluz5gbOp2W0Ru1BAN3Lof8sluy2g8'], 'uuids': ['a582f89c-a8ac-4a87-8a0b-f7c0ca2abef4'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'eaa5e6a9', 'removable': '0', 'support_discard': 
'4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['fudluz-5gbO-p2W0-Ru1B-AN3L-of8s-luy2g8']}}, 'ansible_loop_var': 'item'})  2026-03-25 05:43:25.667427 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_99e65ea9-8a8c-4114-a95e-6d6b779e8981', 'scsi-SQEMU_QEMU_HARDDISK_99e65ea9-8a8c-4114-a95e-6d6b779e8981'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '99e65ea9', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 05:43:25.667449 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-I510NI-gVOy-fVrn-Rpok-wKnF-L9wv-pxblpK', 'scsi-0QEMU_QEMU_HARDDISK_e0cf0e31-edea-4833-ac86-8b3021cd24a1', 'scsi-SQEMU_QEMU_HARDDISK_e0cf0e31-edea-4833-ac86-8b3021cd24a1'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'e0cf0e31', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--a7f517e2--016b--5c10--ac21--20c48339115f-osd--block--a7f517e2--016b--5c10--ac21--20c48339115f']}}, 'ansible_loop_var': 'item'})  2026-03-25 05:43:25.667470 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 05:43:25.667509 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 05:43:26.864694 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-25-01-42-59-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 05:43:26.864827 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 05:43:26.864845 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-63eFyg-4nkE-r3IX-y7pO-0UwA-AWeQ-8GeZyo', 'dm-uuid-CRYPT-LUKS2-10d41a0c964d43008e142cbf5f4d58c4-63eFyg-4nkE-r3IX-y7pO-0UwA-AWeQ-8GeZyo'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 05:43:26.864858 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 05:43:26.864870 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--a7f517e2--016b--5c10--ac21--20c48339115f-osd--block--a7f517e2--016b--5c10--ac21--20c48339115f', 'dm-uuid-LVM-ppL9nqq4Eft0DXjzsCdcW3axPqGhidIo63eFyg4nkEr3IXy7pO0UwAAWeQ8GeZyo'], 'uuids': ['10d41a0c-964d-4300-8e14-2cbf5f4d58c4'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'e0cf0e31', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['63eFyg-4nkE-r3IX-y7pO-0UwA-AWeQ-8GeZyo']}}, 'ansible_loop_var': 'item'})  2026-03-25 05:43:26.864918 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-ot6f5w-cwBB-rMe8-ml4g-P1Wb-D3d5-I1RZ9d', 'scsi-0QEMU_QEMU_HARDDISK_eaa5e6a9-2c24-4b33-854e-103871b2e9c6', 'scsi-SQEMU_QEMU_HARDDISK_eaa5e6a9-2c24-4b33-854e-103871b2e9c6'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'eaa5e6a9', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--2eb637af--fcba--56ed--b416--856a8f376a6e-osd--block--2eb637af--fcba--56ed--b416--856a8f376a6e']}}, 'ansible_loop_var': 'item'})  2026-03-25 05:43:26.864945 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 05:43:26.864958 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5418d243-c22a-425d-8a7d-7c43bd549130', 'scsi-SQEMU_QEMU_HARDDISK_5418d243-c22a-425d-8a7d-7c43bd549130'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '5418d243', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5418d243-c22a-425d-8a7d-7c43bd549130-part16', 'scsi-SQEMU_QEMU_HARDDISK_5418d243-c22a-425d-8a7d-7c43bd549130-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_5418d243-c22a-425d-8a7d-7c43bd549130-part14', 'scsi-SQEMU_QEMU_HARDDISK_5418d243-c22a-425d-8a7d-7c43bd549130-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5418d243-c22a-425d-8a7d-7c43bd549130-part15', 'scsi-SQEMU_QEMU_HARDDISK_5418d243-c22a-425d-8a7d-7c43bd549130-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5418d243-c22a-425d-8a7d-7c43bd549130-part1', 'scsi-SQEMU_QEMU_HARDDISK_5418d243-c22a-425d-8a7d-7c43bd549130-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 05:43:26.864977 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 05:43:26.865005 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 05:44:05.414706 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-fudluz-5gbO-p2W0-Ru1B-AN3L-of8s-luy2g8', 'dm-uuid-CRYPT-LUKS2-a582f89ca8ac4a878a0bf7c0ca2abef4-fudluz-5gbO-p2W0-Ru1B-AN3L-of8s-luy2g8'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 05:44:05.414834 | orchestrator | skipping: [testbed-node-3] 2026-03-25 05:44:05.414852 | orchestrator | 2026-03-25 05:44:05.414867 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-03-25 05:44:05.414881 | orchestrator | Wednesday 25 March 2026 05:43:26 +0000 (0:00:01.423) 0:35:43.870 ******* 2026-03-25 05:44:05.414893 | orchestrator | ok: [testbed-node-3] 2026-03-25 05:44:05.414906 | orchestrator | 2026-03-25 05:44:05.414920 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-03-25 05:44:05.414933 | orchestrator | Wednesday 25 March 2026 05:43:28 +0000 (0:00:01.595) 0:35:45.465 ******* 2026-03-25 05:44:05.414945 | orchestrator | ok: [testbed-node-3] 2026-03-25 05:44:05.414958 | orchestrator | 2026-03-25 05:44:05.414971 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-25 05:44:05.414984 | orchestrator | Wednesday 25 March 2026 05:43:29 +0000 (0:00:01.219) 0:35:46.684 ******* 2026-03-25 05:44:05.414998 | orchestrator | ok: [testbed-node-3] 2026-03-25 05:44:05.415011 | orchestrator | 2026-03-25 05:44:05.415025 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-25 05:44:05.415038 | orchestrator | Wednesday 25 March 2026 05:43:31 +0000 (0:00:01.485) 0:35:48.170 ******* 2026-03-25 05:44:05.415052 | orchestrator | skipping: [testbed-node-3] 2026-03-25 05:44:05.415065 | orchestrator | 2026-03-25 05:44:05.415078 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-25 05:44:05.415092 | orchestrator | Wednesday 25 March 2026 05:43:32 +0000 (0:00:01.141) 0:35:49.312 ******* 2026-03-25 05:44:05.415105 | orchestrator | skipping: [testbed-node-3] 2026-03-25 
05:44:05.415119 | orchestrator | 2026-03-25 05:44:05.415132 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-25 05:44:05.415145 | orchestrator | Wednesday 25 March 2026 05:43:33 +0000 (0:00:01.273) 0:35:50.585 ******* 2026-03-25 05:44:05.415159 | orchestrator | skipping: [testbed-node-3] 2026-03-25 05:44:05.415171 | orchestrator | 2026-03-25 05:44:05.415184 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-03-25 05:44:05.415198 | orchestrator | Wednesday 25 March 2026 05:43:34 +0000 (0:00:01.153) 0:35:51.739 ******* 2026-03-25 05:44:05.415212 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2026-03-25 05:44:05.415225 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-03-25 05:44:05.415238 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2026-03-25 05:44:05.415250 | orchestrator | 2026-03-25 05:44:05.415264 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-03-25 05:44:05.415308 | orchestrator | Wednesday 25 March 2026 05:43:36 +0000 (0:00:02.075) 0:35:53.814 ******* 2026-03-25 05:44:05.415322 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-03-25 05:44:05.415337 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-03-25 05:44:05.415350 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-03-25 05:44:05.415364 | orchestrator | skipping: [testbed-node-3] 2026-03-25 05:44:05.415376 | orchestrator | 2026-03-25 05:44:05.415390 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-03-25 05:44:05.415403 | orchestrator | Wednesday 25 March 2026 05:43:38 +0000 (0:00:01.243) 0:35:55.058 ******* 2026-03-25 05:44:05.415416 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3 2026-03-25 05:44:05.415429 | 
orchestrator | 2026-03-25 05:44:05.415442 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-25 05:44:05.415457 | orchestrator | Wednesday 25 March 2026 05:43:39 +0000 (0:00:01.155) 0:35:56.213 ******* 2026-03-25 05:44:05.415471 | orchestrator | skipping: [testbed-node-3] 2026-03-25 05:44:05.415485 | orchestrator | 2026-03-25 05:44:05.415499 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-03-25 05:44:05.415539 | orchestrator | Wednesday 25 March 2026 05:43:40 +0000 (0:00:01.178) 0:35:57.392 ******* 2026-03-25 05:44:05.415551 | orchestrator | skipping: [testbed-node-3] 2026-03-25 05:44:05.415563 | orchestrator | 2026-03-25 05:44:05.415593 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-03-25 05:44:05.415607 | orchestrator | Wednesday 25 March 2026 05:43:41 +0000 (0:00:01.198) 0:35:58.591 ******* 2026-03-25 05:44:05.415619 | orchestrator | skipping: [testbed-node-3] 2026-03-25 05:44:05.415631 | orchestrator | 2026-03-25 05:44:05.415643 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-25 05:44:05.415653 | orchestrator | Wednesday 25 March 2026 05:43:42 +0000 (0:00:01.163) 0:35:59.754 ******* 2026-03-25 05:44:05.415660 | orchestrator | ok: [testbed-node-3] 2026-03-25 05:44:05.415668 | orchestrator | 2026-03-25 05:44:05.415675 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-25 05:44:05.415682 | orchestrator | Wednesday 25 March 2026 05:43:43 +0000 (0:00:01.254) 0:36:01.009 ******* 2026-03-25 05:44:05.415689 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-25 05:44:05.415717 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-25 05:44:05.415724 | orchestrator | skipping: [testbed-node-3] 
=> (item=testbed-node-5)  2026-03-25 05:44:05.415732 | orchestrator | skipping: [testbed-node-3] 2026-03-25 05:44:05.415739 | orchestrator | 2026-03-25 05:44:05.415746 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-25 05:44:05.415753 | orchestrator | Wednesday 25 March 2026 05:43:45 +0000 (0:00:01.415) 0:36:02.424 ******* 2026-03-25 05:44:05.415761 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-25 05:44:05.415768 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-25 05:44:05.415775 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-25 05:44:05.415782 | orchestrator | skipping: [testbed-node-3] 2026-03-25 05:44:05.415789 | orchestrator | 2026-03-25 05:44:05.415796 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-25 05:44:05.415803 | orchestrator | Wednesday 25 March 2026 05:43:46 +0000 (0:00:01.410) 0:36:03.834 ******* 2026-03-25 05:44:05.415811 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-25 05:44:05.415818 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-25 05:44:05.415825 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-25 05:44:05.415832 | orchestrator | skipping: [testbed-node-3] 2026-03-25 05:44:05.415839 | orchestrator | 2026-03-25 05:44:05.415846 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-25 05:44:05.415854 | orchestrator | Wednesday 25 March 2026 05:43:48 +0000 (0:00:01.442) 0:36:05.277 ******* 2026-03-25 05:44:05.415871 | orchestrator | ok: [testbed-node-3] 2026-03-25 05:44:05.415878 | orchestrator | 2026-03-25 05:44:05.415885 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-25 05:44:05.415893 | orchestrator | Wednesday 25 March 2026 05:43:49 +0000 
(0:00:01.166) 0:36:06.443 ******* 2026-03-25 05:44:05.415900 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-03-25 05:44:05.415907 | orchestrator | 2026-03-25 05:44:05.415914 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-03-25 05:44:05.415922 | orchestrator | Wednesday 25 March 2026 05:43:50 +0000 (0:00:01.331) 0:36:07.774 ******* 2026-03-25 05:44:05.415929 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-25 05:44:05.415936 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-25 05:44:05.415943 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-25 05:44:05.415950 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-03-25 05:44:05.415957 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-25 05:44:05.415965 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-25 05:44:05.415972 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-25 05:44:05.415979 | orchestrator | 2026-03-25 05:44:05.415986 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-03-25 05:44:05.415993 | orchestrator | Wednesday 25 March 2026 05:43:53 +0000 (0:00:02.289) 0:36:10.064 ******* 2026-03-25 05:44:05.416001 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-25 05:44:05.416008 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-25 05:44:05.416015 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-25 05:44:05.416022 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-03-25 05:44:05.416029 | 
orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-25 05:44:05.416036 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-25 05:44:05.416043 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-25 05:44:05.416050 | orchestrator | 2026-03-25 05:44:05.416058 | orchestrator | TASK [Get osd numbers - non container] ***************************************** 2026-03-25 05:44:05.416065 | orchestrator | Wednesday 25 March 2026 05:43:56 +0000 (0:00:03.078) 0:36:13.143 ******* 2026-03-25 05:44:05.416072 | orchestrator | ok: [testbed-node-3] 2026-03-25 05:44:05.416079 | orchestrator | 2026-03-25 05:44:05.416086 | orchestrator | TASK [Set num_osds] ************************************************************ 2026-03-25 05:44:05.416094 | orchestrator | Wednesday 25 March 2026 05:43:57 +0000 (0:00:01.484) 0:36:14.627 ******* 2026-03-25 05:44:05.416101 | orchestrator | ok: [testbed-node-3] 2026-03-25 05:44:05.416108 | orchestrator | 2026-03-25 05:44:05.416115 | orchestrator | TASK [Set_fact container_exec_cmd_osd] ***************************************** 2026-03-25 05:44:05.416122 | orchestrator | Wednesday 25 March 2026 05:43:58 +0000 (0:00:01.158) 0:36:15.786 ******* 2026-03-25 05:44:05.416129 | orchestrator | ok: [testbed-node-3] 2026-03-25 05:44:05.416136 | orchestrator | 2026-03-25 05:44:05.416148 | orchestrator | TASK [Stop ceph osd] *********************************************************** 2026-03-25 05:44:05.416155 | orchestrator | Wednesday 25 March 2026 05:44:00 +0000 (0:00:01.311) 0:36:17.097 ******* 2026-03-25 05:44:05.416163 | orchestrator | changed: [testbed-node-3] => (item=0) 2026-03-25 05:44:05.416170 | orchestrator | changed: [testbed-node-3] => (item=5) 2026-03-25 05:44:05.416177 | orchestrator | 2026-03-25 05:44:05.416184 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] 
************************ 2026-03-25 05:44:05.416191 | orchestrator | Wednesday 25 March 2026 05:44:04 +0000 (0:00:04.144) 0:36:21.242 ******* 2026-03-25 05:44:05.416203 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3 2026-03-25 05:44:05.416211 | orchestrator | 2026-03-25 05:44:05.416218 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-25 05:44:05.416230 | orchestrator | Wednesday 25 March 2026 05:44:05 +0000 (0:00:01.175) 0:36:22.417 ******* 2026-03-25 05:44:57.037285 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3 2026-03-25 05:44:57.037402 | orchestrator | 2026-03-25 05:44:57.037418 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-25 05:44:57.037480 | orchestrator | Wednesday 25 March 2026 05:44:06 +0000 (0:00:01.229) 0:36:23.646 ******* 2026-03-25 05:44:57.037492 | orchestrator | skipping: [testbed-node-3] 2026-03-25 05:44:57.037505 | orchestrator | 2026-03-25 05:44:57.037517 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-25 05:44:57.037528 | orchestrator | Wednesday 25 March 2026 05:44:07 +0000 (0:00:01.172) 0:36:24.819 ******* 2026-03-25 05:44:57.037539 | orchestrator | ok: [testbed-node-3] 2026-03-25 05:44:57.037551 | orchestrator | 2026-03-25 05:44:57.037562 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-25 05:44:57.037572 | orchestrator | Wednesday 25 March 2026 05:44:09 +0000 (0:00:01.524) 0:36:26.344 ******* 2026-03-25 05:44:57.037583 | orchestrator | ok: [testbed-node-3] 2026-03-25 05:44:57.037594 | orchestrator | 2026-03-25 05:44:57.037604 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-25 05:44:57.037615 | orchestrator | Wednesday 25 March 2026 
05:44:10 +0000 (0:00:01.531) 0:36:27.875 ******* 2026-03-25 05:44:57.037626 | orchestrator | ok: [testbed-node-3] 2026-03-25 05:44:57.037637 | orchestrator | 2026-03-25 05:44:57.037647 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-25 05:44:57.037658 | orchestrator | Wednesday 25 March 2026 05:44:12 +0000 (0:00:01.512) 0:36:29.388 ******* 2026-03-25 05:44:57.037669 | orchestrator | skipping: [testbed-node-3] 2026-03-25 05:44:57.037680 | orchestrator | 2026-03-25 05:44:57.037690 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-25 05:44:57.037701 | orchestrator | Wednesday 25 March 2026 05:44:13 +0000 (0:00:01.223) 0:36:30.611 ******* 2026-03-25 05:44:57.037712 | orchestrator | skipping: [testbed-node-3] 2026-03-25 05:44:57.037723 | orchestrator | 2026-03-25 05:44:57.037733 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-25 05:44:57.037744 | orchestrator | Wednesday 25 March 2026 05:44:14 +0000 (0:00:01.130) 0:36:31.741 ******* 2026-03-25 05:44:57.037755 | orchestrator | skipping: [testbed-node-3] 2026-03-25 05:44:57.037765 | orchestrator | 2026-03-25 05:44:57.037776 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-25 05:44:57.037786 | orchestrator | Wednesday 25 March 2026 05:44:15 +0000 (0:00:01.138) 0:36:32.880 ******* 2026-03-25 05:44:57.037797 | orchestrator | ok: [testbed-node-3] 2026-03-25 05:44:57.037808 | orchestrator | 2026-03-25 05:44:57.037819 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-25 05:44:57.037832 | orchestrator | Wednesday 25 March 2026 05:44:17 +0000 (0:00:01.570) 0:36:34.451 ******* 2026-03-25 05:44:57.037845 | orchestrator | ok: [testbed-node-3] 2026-03-25 05:44:57.037857 | orchestrator | 2026-03-25 05:44:57.037868 | orchestrator | TASK [ceph-handler : 
Include check_socket_non_container.yml] ******************* 2026-03-25 05:44:57.037881 | orchestrator | Wednesday 25 March 2026 05:44:19 +0000 (0:00:01.614) 0:36:36.066 ******* 2026-03-25 05:44:57.037894 | orchestrator | skipping: [testbed-node-3] 2026-03-25 05:44:57.037907 | orchestrator | 2026-03-25 05:44:57.037919 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-25 05:44:57.037932 | orchestrator | Wednesday 25 March 2026 05:44:20 +0000 (0:00:01.147) 0:36:37.213 ******* 2026-03-25 05:44:57.037944 | orchestrator | skipping: [testbed-node-3] 2026-03-25 05:44:57.037957 | orchestrator | 2026-03-25 05:44:57.037969 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-25 05:44:57.038004 | orchestrator | Wednesday 25 March 2026 05:44:21 +0000 (0:00:01.177) 0:36:38.390 ******* 2026-03-25 05:44:57.038083 | orchestrator | ok: [testbed-node-3] 2026-03-25 05:44:57.038096 | orchestrator | 2026-03-25 05:44:57.038109 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-25 05:44:57.038121 | orchestrator | Wednesday 25 March 2026 05:44:22 +0000 (0:00:01.194) 0:36:39.585 ******* 2026-03-25 05:44:57.038134 | orchestrator | ok: [testbed-node-3] 2026-03-25 05:44:57.038146 | orchestrator | 2026-03-25 05:44:57.038192 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-25 05:44:57.038203 | orchestrator | Wednesday 25 March 2026 05:44:23 +0000 (0:00:01.179) 0:36:40.764 ******* 2026-03-25 05:44:57.038214 | orchestrator | ok: [testbed-node-3] 2026-03-25 05:44:57.038225 | orchestrator | 2026-03-25 05:44:57.038235 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-25 05:44:57.038246 | orchestrator | Wednesday 25 March 2026 05:44:24 +0000 (0:00:01.192) 0:36:41.957 ******* 2026-03-25 05:44:57.038256 | orchestrator | skipping: 
[testbed-node-3] 2026-03-25 05:44:57.038267 | orchestrator | 2026-03-25 05:44:57.038278 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-25 05:44:57.038288 | orchestrator | Wednesday 25 March 2026 05:44:26 +0000 (0:00:01.186) 0:36:43.144 ******* 2026-03-25 05:44:57.038299 | orchestrator | skipping: [testbed-node-3] 2026-03-25 05:44:57.038310 | orchestrator | 2026-03-25 05:44:57.038320 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-25 05:44:57.038331 | orchestrator | Wednesday 25 March 2026 05:44:27 +0000 (0:00:01.169) 0:36:44.314 ******* 2026-03-25 05:44:57.038356 | orchestrator | skipping: [testbed-node-3] 2026-03-25 05:44:57.038367 | orchestrator | 2026-03-25 05:44:57.038378 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-25 05:44:57.038388 | orchestrator | Wednesday 25 March 2026 05:44:28 +0000 (0:00:01.134) 0:36:45.448 ******* 2026-03-25 05:44:57.038399 | orchestrator | ok: [testbed-node-3] 2026-03-25 05:44:57.038410 | orchestrator | 2026-03-25 05:44:57.038420 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-25 05:44:57.038450 | orchestrator | Wednesday 25 March 2026 05:44:29 +0000 (0:00:01.210) 0:36:46.658 ******* 2026-03-25 05:44:57.038461 | orchestrator | ok: [testbed-node-3] 2026-03-25 05:44:57.038471 | orchestrator | 2026-03-25 05:44:57.038482 | orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-03-25 05:44:57.038493 | orchestrator | Wednesday 25 March 2026 05:44:30 +0000 (0:00:01.179) 0:36:47.838 ******* 2026-03-25 05:44:57.038503 | orchestrator | skipping: [testbed-node-3] 2026-03-25 05:44:57.038514 | orchestrator | 2026-03-25 05:44:57.038545 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-03-25 05:44:57.038557 | 
orchestrator | Wednesday 25 March 2026 05:44:31 +0000 (0:00:01.160) 0:36:48.999 ******* 2026-03-25 05:44:57.038568 | orchestrator | skipping: [testbed-node-3] 2026-03-25 05:44:57.038579 | orchestrator | 2026-03-25 05:44:57.038590 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-03-25 05:44:57.038601 | orchestrator | Wednesday 25 March 2026 05:44:33 +0000 (0:00:01.177) 0:36:50.177 ******* 2026-03-25 05:44:57.038611 | orchestrator | skipping: [testbed-node-3] 2026-03-25 05:44:57.038622 | orchestrator | 2026-03-25 05:44:57.038633 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-03-25 05:44:57.038643 | orchestrator | Wednesday 25 March 2026 05:44:34 +0000 (0:00:01.117) 0:36:51.295 ******* 2026-03-25 05:44:57.038654 | orchestrator | skipping: [testbed-node-3] 2026-03-25 05:44:57.038665 | orchestrator | 2026-03-25 05:44:57.038676 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-03-25 05:44:57.038686 | orchestrator | Wednesday 25 March 2026 05:44:35 +0000 (0:00:01.179) 0:36:52.474 ******* 2026-03-25 05:44:57.038697 | orchestrator | skipping: [testbed-node-3] 2026-03-25 05:44:57.038708 | orchestrator | 2026-03-25 05:44:57.038718 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-03-25 05:44:57.038739 | orchestrator | Wednesday 25 March 2026 05:44:36 +0000 (0:00:01.121) 0:36:53.596 ******* 2026-03-25 05:44:57.038750 | orchestrator | skipping: [testbed-node-3] 2026-03-25 05:44:57.038761 | orchestrator | 2026-03-25 05:44:57.038772 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-03-25 05:44:57.038782 | orchestrator | Wednesday 25 March 2026 05:44:37 +0000 (0:00:01.119) 0:36:54.716 ******* 2026-03-25 05:44:57.038793 | orchestrator | skipping: [testbed-node-3] 2026-03-25 05:44:57.038804 | orchestrator | 2026-03-25 
05:44:57.038815 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-03-25 05:44:57.038827 | orchestrator | Wednesday 25 March 2026 05:44:38 +0000 (0:00:01.150) 0:36:55.867 ******* 2026-03-25 05:44:57.038837 | orchestrator | skipping: [testbed-node-3] 2026-03-25 05:44:57.038848 | orchestrator | 2026-03-25 05:44:57.038859 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-03-25 05:44:57.038870 | orchestrator | Wednesday 25 March 2026 05:44:40 +0000 (0:00:01.340) 0:36:57.207 ******* 2026-03-25 05:44:57.038880 | orchestrator | skipping: [testbed-node-3] 2026-03-25 05:44:57.038891 | orchestrator | 2026-03-25 05:44:57.038902 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-03-25 05:44:57.038913 | orchestrator | Wednesday 25 March 2026 05:44:41 +0000 (0:00:01.181) 0:36:58.389 ******* 2026-03-25 05:44:57.038923 | orchestrator | skipping: [testbed-node-3] 2026-03-25 05:44:57.038934 | orchestrator | 2026-03-25 05:44:57.038945 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-03-25 05:44:57.038956 | orchestrator | Wednesday 25 March 2026 05:44:42 +0000 (0:00:01.151) 0:36:59.541 ******* 2026-03-25 05:44:57.038967 | orchestrator | skipping: [testbed-node-3] 2026-03-25 05:44:57.038977 | orchestrator | 2026-03-25 05:44:57.038988 | orchestrator | TASK [ceph-common : Include selinux.yml] *************************************** 2026-03-25 05:44:57.038999 | orchestrator | Wednesday 25 March 2026 05:44:43 +0000 (0:00:01.193) 0:37:00.734 ******* 2026-03-25 05:44:57.039010 | orchestrator | skipping: [testbed-node-3] 2026-03-25 05:44:57.039020 | orchestrator | 2026-03-25 05:44:57.039031 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-03-25 05:44:57.039042 | orchestrator | Wednesday 25 March 2026 05:44:44 +0000 
(0:00:01.213) 0:37:01.948 ******* 2026-03-25 05:44:57.039052 | orchestrator | ok: [testbed-node-3] 2026-03-25 05:44:57.039063 | orchestrator | 2026-03-25 05:44:57.039074 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-03-25 05:44:57.039085 | orchestrator | Wednesday 25 March 2026 05:44:46 +0000 (0:00:01.952) 0:37:03.901 ******* 2026-03-25 05:44:57.039096 | orchestrator | ok: [testbed-node-3] 2026-03-25 05:44:57.039106 | orchestrator | 2026-03-25 05:44:57.039117 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-03-25 05:44:57.039128 | orchestrator | Wednesday 25 March 2026 05:44:49 +0000 (0:00:02.310) 0:37:06.211 ******* 2026-03-25 05:44:57.039139 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3 2026-03-25 05:44:57.039150 | orchestrator | 2026-03-25 05:44:57.039161 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-03-25 05:44:57.039171 | orchestrator | Wednesday 25 March 2026 05:44:50 +0000 (0:00:01.175) 0:37:07.387 ******* 2026-03-25 05:44:57.039182 | orchestrator | skipping: [testbed-node-3] 2026-03-25 05:44:57.039193 | orchestrator | 2026-03-25 05:44:57.039204 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-03-25 05:44:57.039214 | orchestrator | Wednesday 25 March 2026 05:44:51 +0000 (0:00:01.143) 0:37:08.530 ******* 2026-03-25 05:44:57.039225 | orchestrator | skipping: [testbed-node-3] 2026-03-25 05:44:57.039236 | orchestrator | 2026-03-25 05:44:57.039247 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-03-25 05:44:57.039257 | orchestrator | Wednesday 25 March 2026 05:44:52 +0000 (0:00:01.130) 0:37:09.661 ******* 2026-03-25 05:44:57.039268 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-03-25 
05:44:57.039286 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-03-25 05:44:57.039297 | orchestrator | 2026-03-25 05:44:57.039308 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-03-25 05:44:57.039319 | orchestrator | Wednesday 25 March 2026 05:44:54 +0000 (0:00:01.780) 0:37:11.441 ******* 2026-03-25 05:44:57.039330 | orchestrator | ok: [testbed-node-3] 2026-03-25 05:44:57.039341 | orchestrator | 2026-03-25 05:44:57.039352 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-03-25 05:44:57.039362 | orchestrator | Wednesday 25 March 2026 05:44:55 +0000 (0:00:01.477) 0:37:12.919 ******* 2026-03-25 05:44:57.039373 | orchestrator | skipping: [testbed-node-3] 2026-03-25 05:44:57.039384 | orchestrator | 2026-03-25 05:44:57.039395 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-03-25 05:44:57.039411 | orchestrator | Wednesday 25 March 2026 05:44:57 +0000 (0:00:01.116) 0:37:14.035 ******* 2026-03-25 05:45:44.754633 | orchestrator | skipping: [testbed-node-3] 2026-03-25 05:45:44.754759 | orchestrator | 2026-03-25 05:45:44.754777 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-03-25 05:45:44.754832 | orchestrator | Wednesday 25 March 2026 05:44:58 +0000 (0:00:01.156) 0:37:15.192 ******* 2026-03-25 05:45:44.754852 | orchestrator | skipping: [testbed-node-3] 2026-03-25 05:45:44.754864 | orchestrator | 2026-03-25 05:45:44.754877 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-03-25 05:45:44.754889 | orchestrator | Wednesday 25 March 2026 05:44:59 +0000 (0:00:01.295) 0:37:16.488 ******* 2026-03-25 05:45:44.754901 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3 2026-03-25 05:45:44.754913 | orchestrator | 
2026-03-25 05:45:44.754924 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-03-25 05:45:44.754938 | orchestrator | Wednesday 25 March 2026 05:45:00 +0000 (0:00:01.140) 0:37:17.628 ******* 2026-03-25 05:45:44.754957 | orchestrator | ok: [testbed-node-3] 2026-03-25 05:45:44.754975 | orchestrator | 2026-03-25 05:45:44.754991 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-03-25 05:45:44.755040 | orchestrator | Wednesday 25 March 2026 05:45:02 +0000 (0:00:01.755) 0:37:19.384 ******* 2026-03-25 05:45:44.755058 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-25 05:45:44.755077 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-25 05:45:44.755098 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)  2026-03-25 05:45:44.755110 | orchestrator | skipping: [testbed-node-3] 2026-03-25 05:45:44.755121 | orchestrator | 2026-03-25 05:45:44.755132 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-03-25 05:45:44.755142 | orchestrator | Wednesday 25 March 2026 05:45:03 +0000 (0:00:01.180) 0:37:20.564 ******* 2026-03-25 05:45:44.755153 | orchestrator | skipping: [testbed-node-3] 2026-03-25 05:45:44.755164 | orchestrator | 2026-03-25 05:45:44.755175 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-03-25 05:45:44.755187 | orchestrator | Wednesday 25 March 2026 05:45:04 +0000 (0:00:01.158) 0:37:21.722 ******* 2026-03-25 05:45:44.755200 | orchestrator | skipping: [testbed-node-3] 2026-03-25 05:45:44.755213 | orchestrator | 2026-03-25 05:45:44.755226 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-03-25 05:45:44.755238 | orchestrator | Wednesday 25 March 2026 05:45:05 +0000 
(0:00:01.196) 0:37:22.919 ******* 2026-03-25 05:45:44.755251 | orchestrator | skipping: [testbed-node-3] 2026-03-25 05:45:44.755263 | orchestrator | 2026-03-25 05:45:44.755275 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-03-25 05:45:44.755287 | orchestrator | Wednesday 25 March 2026 05:45:07 +0000 (0:00:01.188) 0:37:24.108 ******* 2026-03-25 05:45:44.755300 | orchestrator | skipping: [testbed-node-3] 2026-03-25 05:45:44.755312 | orchestrator | 2026-03-25 05:45:44.755347 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-03-25 05:45:44.755400 | orchestrator | Wednesday 25 March 2026 05:45:08 +0000 (0:00:01.153) 0:37:25.261 ******* 2026-03-25 05:45:44.755413 | orchestrator | skipping: [testbed-node-3] 2026-03-25 05:45:44.755424 | orchestrator | 2026-03-25 05:45:44.755435 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-03-25 05:45:44.755445 | orchestrator | Wednesday 25 March 2026 05:45:09 +0000 (0:00:01.125) 0:37:26.387 ******* 2026-03-25 05:45:44.755456 | orchestrator | ok: [testbed-node-3] 2026-03-25 05:45:44.755467 | orchestrator | 2026-03-25 05:45:44.755478 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-03-25 05:45:44.755488 | orchestrator | Wednesday 25 March 2026 05:45:11 +0000 (0:00:02.549) 0:37:28.937 ******* 2026-03-25 05:45:44.755499 | orchestrator | ok: [testbed-node-3] 2026-03-25 05:45:44.755510 | orchestrator | 2026-03-25 05:45:44.755521 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-03-25 05:45:44.755531 | orchestrator | Wednesday 25 March 2026 05:45:13 +0000 (0:00:01.166) 0:37:30.104 ******* 2026-03-25 05:45:44.755542 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3 2026-03-25 05:45:44.755553 | orchestrator | 2026-03-25 
05:45:44.755563 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-03-25 05:45:44.755574 | orchestrator | Wednesday 25 March 2026 05:45:14 +0000 (0:00:01.190) 0:37:31.295 ******* 2026-03-25 05:45:44.755585 | orchestrator | skipping: [testbed-node-3] 2026-03-25 05:45:44.755595 | orchestrator | 2026-03-25 05:45:44.755606 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-03-25 05:45:44.755617 | orchestrator | Wednesday 25 March 2026 05:45:15 +0000 (0:00:01.341) 0:37:32.637 ******* 2026-03-25 05:45:44.755628 | orchestrator | skipping: [testbed-node-3] 2026-03-25 05:45:44.755639 | orchestrator | 2026-03-25 05:45:44.755650 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-03-25 05:45:44.755660 | orchestrator | Wednesday 25 March 2026 05:45:16 +0000 (0:00:01.191) 0:37:33.828 ******* 2026-03-25 05:45:44.755671 | orchestrator | skipping: [testbed-node-3] 2026-03-25 05:45:44.755689 | orchestrator | 2026-03-25 05:45:44.755700 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-03-25 05:45:44.755722 | orchestrator | Wednesday 25 March 2026 05:45:18 +0000 (0:00:01.194) 0:37:35.023 ******* 2026-03-25 05:45:44.755733 | orchestrator | skipping: [testbed-node-3] 2026-03-25 05:45:44.755743 | orchestrator | 2026-03-25 05:45:44.755754 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-03-25 05:45:44.755765 | orchestrator | Wednesday 25 March 2026 05:45:19 +0000 (0:00:01.154) 0:37:36.177 ******* 2026-03-25 05:45:44.755776 | orchestrator | skipping: [testbed-node-3] 2026-03-25 05:45:44.755786 | orchestrator | 2026-03-25 05:45:44.755797 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-03-25 05:45:44.755808 | orchestrator | Wednesday 25 March 2026 05:45:20 +0000 (0:00:01.156) 
0:37:37.333 ******* 2026-03-25 05:45:44.755818 | orchestrator | skipping: [testbed-node-3] 2026-03-25 05:45:44.755829 | orchestrator | 2026-03-25 05:45:44.755858 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-03-25 05:45:44.755870 | orchestrator | Wednesday 25 March 2026 05:45:21 +0000 (0:00:01.164) 0:37:38.498 ******* 2026-03-25 05:45:44.755881 | orchestrator | skipping: [testbed-node-3] 2026-03-25 05:45:44.755891 | orchestrator | 2026-03-25 05:45:44.755902 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-03-25 05:45:44.755913 | orchestrator | Wednesday 25 March 2026 05:45:22 +0000 (0:00:01.186) 0:37:39.685 ******* 2026-03-25 05:45:44.755924 | orchestrator | skipping: [testbed-node-3] 2026-03-25 05:45:44.755935 | orchestrator | 2026-03-25 05:45:44.755945 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-03-25 05:45:44.755956 | orchestrator | Wednesday 25 March 2026 05:45:23 +0000 (0:00:01.162) 0:37:40.847 ******* 2026-03-25 05:45:44.755967 | orchestrator | ok: [testbed-node-3] 2026-03-25 05:45:44.755985 | orchestrator | 2026-03-25 05:45:44.755996 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-03-25 05:45:44.756007 | orchestrator | Wednesday 25 March 2026 05:45:24 +0000 (0:00:01.139) 0:37:41.987 ******* 2026-03-25 05:45:44.756018 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3 2026-03-25 05:45:44.756029 | orchestrator | 2026-03-25 05:45:44.756040 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-03-25 05:45:44.756051 | orchestrator | Wednesday 25 March 2026 05:45:26 +0000 (0:00:01.130) 0:37:43.117 ******* 2026-03-25 05:45:44.756062 | orchestrator | ok: [testbed-node-3] => (item=/etc/ceph) 2026-03-25 05:45:44.756073 | orchestrator | ok: 
[testbed-node-3] => (item=/var/lib/ceph/) 2026-03-25 05:45:44.756084 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mon) 2026-03-25 05:45:44.756095 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd) 2026-03-25 05:45:44.756105 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mds) 2026-03-25 05:45:44.756116 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/tmp) 2026-03-25 05:45:44.756127 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/crash) 2026-03-25 05:45:44.756137 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/radosgw) 2026-03-25 05:45:44.756148 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw) 2026-03-25 05:45:44.756159 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr) 2026-03-25 05:45:44.756170 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds) 2026-03-25 05:45:44.756181 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd) 2026-03-25 05:45:44.756191 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd) 2026-03-25 05:45:44.756202 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-03-25 05:45:44.756213 | orchestrator | ok: [testbed-node-3] => (item=/var/run/ceph) 2026-03-25 05:45:44.756224 | orchestrator | ok: [testbed-node-3] => (item=/var/log/ceph) 2026-03-25 05:45:44.756235 | orchestrator | 2026-03-25 05:45:44.756246 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-03-25 05:45:44.756256 | orchestrator | Wednesday 25 March 2026 05:45:32 +0000 (0:00:06.560) 0:37:49.678 ******* 2026-03-25 05:45:44.756267 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3 2026-03-25 05:45:44.756278 | orchestrator | 2026-03-25 05:45:44.756289 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] 
***************** 2026-03-25 05:45:44.756299 | orchestrator | Wednesday 25 March 2026 05:45:34 +0000 (0:00:01.657) 0:37:51.335 ******* 2026-03-25 05:45:44.756310 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-03-25 05:45:44.756322 | orchestrator | 2026-03-25 05:45:44.756333 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2026-03-25 05:45:44.756344 | orchestrator | Wednesday 25 March 2026 05:45:35 +0000 (0:00:01.524) 0:37:52.860 ******* 2026-03-25 05:45:44.756388 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-03-25 05:45:44.756402 | orchestrator | 2026-03-25 05:45:44.756413 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-03-25 05:45:44.756424 | orchestrator | Wednesday 25 March 2026 05:45:37 +0000 (0:00:01.980) 0:37:54.841 ******* 2026-03-25 05:45:44.756434 | orchestrator | skipping: [testbed-node-3] 2026-03-25 05:45:44.756445 | orchestrator | 2026-03-25 05:45:44.756456 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-03-25 05:45:44.756466 | orchestrator | Wednesday 25 March 2026 05:45:38 +0000 (0:00:01.124) 0:37:55.966 ******* 2026-03-25 05:45:44.756477 | orchestrator | skipping: [testbed-node-3] 2026-03-25 05:45:44.756488 | orchestrator | 2026-03-25 05:45:44.756499 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-03-25 05:45:44.756517 | orchestrator | Wednesday 25 March 2026 05:45:40 +0000 (0:00:01.143) 0:37:57.110 ******* 2026-03-25 05:45:44.756534 | orchestrator | skipping: [testbed-node-3] 2026-03-25 05:45:44.756545 | orchestrator | 2026-03-25 05:45:44.756556 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 
2026-03-25 05:45:44.756567 | orchestrator | Wednesday 25 March 2026 05:45:41 +0000 (0:00:01.175) 0:37:58.285 ******* 2026-03-25 05:45:44.756577 | orchestrator | skipping: [testbed-node-3] 2026-03-25 05:45:44.756588 | orchestrator | 2026-03-25 05:45:44.756599 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-03-25 05:45:44.756609 | orchestrator | Wednesday 25 March 2026 05:45:42 +0000 (0:00:01.196) 0:37:59.482 ******* 2026-03-25 05:45:44.756620 | orchestrator | skipping: [testbed-node-3] 2026-03-25 05:45:44.756631 | orchestrator | 2026-03-25 05:45:44.756642 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-03-25 05:45:44.756653 | orchestrator | Wednesday 25 March 2026 05:45:43 +0000 (0:00:01.132) 0:38:00.614 ******* 2026-03-25 05:45:44.756664 | orchestrator | skipping: [testbed-node-3] 2026-03-25 05:45:44.756674 | orchestrator | 2026-03-25 05:45:44.756691 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-03-25 05:46:36.218333 | orchestrator | Wednesday 25 March 2026 05:45:44 +0000 (0:00:01.145) 0:38:01.760 ******* 2026-03-25 05:46:36.218464 | orchestrator | skipping: [testbed-node-3] 2026-03-25 05:46:36.218486 | orchestrator | 2026-03-25 05:46:36.218498 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-03-25 05:46:36.218509 | orchestrator | Wednesday 25 March 2026 05:45:45 +0000 (0:00:01.131) 0:38:02.891 ******* 2026-03-25 05:46:36.218518 | orchestrator | skipping: [testbed-node-3] 2026-03-25 05:46:36.218526 | orchestrator | 2026-03-25 05:46:36.218536 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-03-25 05:46:36.218545 | orchestrator | Wednesday 25 March 2026 05:45:46 +0000 (0:00:01.110) 0:38:04.001 ******* 
2026-03-25 05:46:36.218553 | orchestrator | skipping: [testbed-node-3] 2026-03-25 05:46:36.218562 | orchestrator | 2026-03-25 05:46:36.218571 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-03-25 05:46:36.218579 | orchestrator | Wednesday 25 March 2026 05:45:48 +0000 (0:00:01.165) 0:38:05.167 ******* 2026-03-25 05:46:36.218588 | orchestrator | skipping: [testbed-node-3] 2026-03-25 05:46:36.218596 | orchestrator | 2026-03-25 05:46:36.218605 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-03-25 05:46:36.218614 | orchestrator | Wednesday 25 March 2026 05:45:49 +0000 (0:00:01.155) 0:38:06.323 ******* 2026-03-25 05:46:36.218622 | orchestrator | ok: [testbed-node-3] 2026-03-25 05:46:36.218632 | orchestrator | 2026-03-25 05:46:36.218640 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-03-25 05:46:36.218649 | orchestrator | Wednesday 25 March 2026 05:45:50 +0000 (0:00:01.211) 0:38:07.534 ******* 2026-03-25 05:46:36.218658 | orchestrator | changed: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2026-03-25 05:46:36.218667 | orchestrator | 2026-03-25 05:46:36.218675 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-03-25 05:46:36.218684 | orchestrator | Wednesday 25 March 2026 05:45:55 +0000 (0:00:04.484) 0:38:12.019 ******* 2026-03-25 05:46:36.218694 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-03-25 05:46:36.218704 | orchestrator | 2026-03-25 05:46:36.218713 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-03-25 05:46:36.218721 | orchestrator | Wednesday 25 March 2026 05:45:56 +0000 (0:00:01.195) 0:38:13.214 ******* 2026-03-25 05:46:36.218732 | orchestrator | changed: [testbed-node-3 -> 
testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}]) 2026-03-25 05:46:36.218770 | orchestrator | changed: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}]) 2026-03-25 05:46:36.218781 | orchestrator | 2026-03-25 05:46:36.218789 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-03-25 05:46:36.218798 | orchestrator | Wednesday 25 March 2026 05:46:04 +0000 (0:00:07.850) 0:38:21.065 ******* 2026-03-25 05:46:36.218807 | orchestrator | skipping: [testbed-node-3] 2026-03-25 05:46:36.218816 | orchestrator | 2026-03-25 05:46:36.218824 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-03-25 05:46:36.218833 | orchestrator | Wednesday 25 March 2026 05:46:05 +0000 (0:00:01.136) 0:38:22.201 ******* 2026-03-25 05:46:36.218841 | orchestrator | skipping: [testbed-node-3] 2026-03-25 05:46:36.218851 | orchestrator | 2026-03-25 05:46:36.218861 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-25 05:46:36.218871 | orchestrator | Wednesday 25 March 2026 05:46:06 +0000 (0:00:01.171) 0:38:23.373 ******* 2026-03-25 05:46:36.218881 | orchestrator | skipping: [testbed-node-3] 2026-03-25 05:46:36.218890 | orchestrator | 2026-03-25 05:46:36.218900 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-03-25 
05:46:36.218910 | orchestrator | Wednesday 25 March 2026 05:46:07 +0000 (0:00:01.177) 0:38:24.551 ******* 2026-03-25 05:46:36.218920 | orchestrator | skipping: [testbed-node-3] 2026-03-25 05:46:36.218929 | orchestrator | 2026-03-25 05:46:36.218937 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-03-25 05:46:36.218960 | orchestrator | Wednesday 25 March 2026 05:46:08 +0000 (0:00:01.173) 0:38:25.725 ******* 2026-03-25 05:46:36.218969 | orchestrator | skipping: [testbed-node-3] 2026-03-25 05:46:36.218977 | orchestrator | 2026-03-25 05:46:36.218986 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-25 05:46:36.218994 | orchestrator | Wednesday 25 March 2026 05:46:09 +0000 (0:00:01.156) 0:38:26.882 ******* 2026-03-25 05:46:36.219003 | orchestrator | ok: [testbed-node-3] 2026-03-25 05:46:36.219011 | orchestrator | 2026-03-25 05:46:36.219020 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-25 05:46:36.219028 | orchestrator | Wednesday 25 March 2026 05:46:11 +0000 (0:00:01.280) 0:38:28.162 ******* 2026-03-25 05:46:36.219036 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-25 05:46:36.219045 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-25 05:46:36.219054 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-25 05:46:36.219062 | orchestrator | skipping: [testbed-node-3] 2026-03-25 05:46:36.219071 | orchestrator | 2026-03-25 05:46:36.219079 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-25 05:46:36.219104 | orchestrator | Wednesday 25 March 2026 05:46:13 +0000 (0:00:01.939) 0:38:30.102 ******* 2026-03-25 05:46:36.219113 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-25 05:46:36.219121 | orchestrator | skipping: [testbed-node-3] => 
(item=testbed-node-4)  2026-03-25 05:46:36.219130 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-25 05:46:36.219138 | orchestrator | skipping: [testbed-node-3] 2026-03-25 05:46:36.219147 | orchestrator | 2026-03-25 05:46:36.219155 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-25 05:46:36.219164 | orchestrator | Wednesday 25 March 2026 05:46:14 +0000 (0:00:01.796) 0:38:31.899 ******* 2026-03-25 05:46:36.219172 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-25 05:46:36.219181 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-25 05:46:36.219197 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-25 05:46:36.219206 | orchestrator | skipping: [testbed-node-3] 2026-03-25 05:46:36.219214 | orchestrator | 2026-03-25 05:46:36.219223 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-25 05:46:36.219232 | orchestrator | Wednesday 25 March 2026 05:46:16 +0000 (0:00:01.944) 0:38:33.843 ******* 2026-03-25 05:46:36.219240 | orchestrator | ok: [testbed-node-3] 2026-03-25 05:46:36.219249 | orchestrator | 2026-03-25 05:46:36.219257 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-25 05:46:36.219266 | orchestrator | Wednesday 25 March 2026 05:46:17 +0000 (0:00:01.156) 0:38:34.999 ******* 2026-03-25 05:46:36.219274 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-03-25 05:46:36.219283 | orchestrator | 2026-03-25 05:46:36.219310 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-03-25 05:46:36.219319 | orchestrator | Wednesday 25 March 2026 05:46:19 +0000 (0:00:01.373) 0:38:36.373 ******* 2026-03-25 05:46:36.219328 | orchestrator | ok: [testbed-node-3] 2026-03-25 05:46:36.219337 | orchestrator | 2026-03-25 05:46:36.219345 | orchestrator | TASK 
[ceph-osd : Set_fact add_osd] ********************************************* 2026-03-25 05:46:36.219354 | orchestrator | Wednesday 25 March 2026 05:46:21 +0000 (0:00:01.746) 0:38:38.120 ******* 2026-03-25 05:46:36.219362 | orchestrator | ok: [testbed-node-3] 2026-03-25 05:46:36.219371 | orchestrator | 2026-03-25 05:46:36.219380 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] ********************************** 2026-03-25 05:46:36.219388 | orchestrator | Wednesday 25 March 2026 05:46:22 +0000 (0:00:01.143) 0:38:39.264 ******* 2026-03-25 05:46:36.219397 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-25 05:46:36.219406 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-25 05:46:36.219415 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-25 05:46:36.219423 | orchestrator | 2026-03-25 05:46:36.219432 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ****************************** 2026-03-25 05:46:36.219440 | orchestrator | Wednesday 25 March 2026 05:46:23 +0000 (0:00:01.664) 0:38:40.928 ******* 2026-03-25 05:46:36.219449 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3 2026-03-25 05:46:36.219457 | orchestrator | 2026-03-25 05:46:36.219466 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] ********************************** 2026-03-25 05:46:36.219474 | orchestrator | Wednesday 25 March 2026 05:46:25 +0000 (0:00:01.543) 0:38:42.471 ******* 2026-03-25 05:46:36.219483 | orchestrator | skipping: [testbed-node-3] 2026-03-25 05:46:36.219492 | orchestrator | 2026-03-25 05:46:36.219500 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] ********************************* 2026-03-25 05:46:36.219509 | orchestrator | Wednesday 25 March 2026 05:46:26 +0000 (0:00:01.155) 0:38:43.627 ******* 2026-03-25 05:46:36.219517 | 
orchestrator | skipping: [testbed-node-3] 2026-03-25 05:46:36.219526 | orchestrator | 2026-03-25 05:46:36.219535 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] ******************************* 2026-03-25 05:46:36.219543 | orchestrator | Wednesday 25 March 2026 05:46:27 +0000 (0:00:01.133) 0:38:44.761 ******* 2026-03-25 05:46:36.219552 | orchestrator | ok: [testbed-node-3] 2026-03-25 05:46:36.219561 | orchestrator | 2026-03-25 05:46:36.219569 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] ********************************** 2026-03-25 05:46:36.219578 | orchestrator | Wednesday 25 March 2026 05:46:29 +0000 (0:00:01.499) 0:38:46.260 ******* 2026-03-25 05:46:36.219586 | orchestrator | ok: [testbed-node-3] 2026-03-25 05:46:36.219595 | orchestrator | 2026-03-25 05:46:36.219603 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ******************************** 2026-03-25 05:46:36.219612 | orchestrator | Wednesday 25 March 2026 05:46:30 +0000 (0:00:01.226) 0:38:47.487 ******* 2026-03-25 05:46:36.219621 | orchestrator | ok: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-03-25 05:46:36.219629 | orchestrator | ok: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-03-25 05:46:36.219648 | orchestrator | ok: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-03-25 05:46:36.219657 | orchestrator | ok: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-03-25 05:46:36.219666 | orchestrator | ok: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-03-25 05:46:36.219674 | orchestrator | 2026-03-25 05:46:36.219683 | orchestrator | TASK [ceph-osd : Install dependencies] ***************************************** 2026-03-25 05:46:36.219691 | orchestrator | Wednesday 25 March 2026 05:46:33 +0000 (0:00:03.040) 0:38:50.527 ******* 2026-03-25 05:46:36.219700 | orchestrator | skipping: [testbed-node-3] 
2026-03-25 05:46:36.219709 | orchestrator | 2026-03-25 05:46:36.219717 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] ************************************* 2026-03-25 05:46:36.219726 | orchestrator | Wednesday 25 March 2026 05:46:34 +0000 (0:00:01.179) 0:38:51.707 ******* 2026-03-25 05:46:36.219734 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3 2026-03-25 05:46:36.219743 | orchestrator | 2026-03-25 05:46:36.219752 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] ********************* 2026-03-25 05:47:44.782770 | orchestrator | Wednesday 25 March 2026 05:46:36 +0000 (0:00:01.517) 0:38:53.225 ******* 2026-03-25 05:47:44.782905 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/) 2026-03-25 05:47:44.782919 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/) 2026-03-25 05:47:44.782927 | orchestrator | 2026-03-25 05:47:44.782934 | orchestrator | TASK [ceph-osd : Get keys from monitors] *************************************** 2026-03-25 05:47:44.782940 | orchestrator | Wednesday 25 March 2026 05:46:37 +0000 (0:00:01.781) 0:38:55.006 ******* 2026-03-25 05:47:44.782949 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-25 05:47:44.782966 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-03-25 05:47:44.782978 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-25 05:47:44.782992 | orchestrator | 2026-03-25 05:47:44.783007 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] *********************************** 2026-03-25 05:47:44.783022 | orchestrator | Wednesday 25 March 2026 05:46:41 +0000 (0:00:03.119) 0:38:58.126 ******* 2026-03-25 05:47:44.783037 | orchestrator | ok: [testbed-node-3] => (item=None) 2026-03-25 05:47:44.783051 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-03-25 05:47:44.783066 | orchestrator | ok: [testbed-node-3] 
2026-03-25 05:47:44.783082 | orchestrator | 
2026-03-25 05:47:44.783097 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************
2026-03-25 05:47:44.783112 | orchestrator | Wednesday 25 March 2026 05:46:43 +0000 (0:00:02.022) 0:39:00.149 *******
2026-03-25 05:47:44.783126 | orchestrator | skipping: [testbed-node-3]
2026-03-25 05:47:44.783141 | orchestrator | 
2026-03-25 05:47:44.783156 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ******************************
2026-03-25 05:47:44.783171 | orchestrator | Wednesday 25 March 2026 05:46:44 +0000 (0:00:01.199) 0:39:01.348 *******
2026-03-25 05:47:44.783180 | orchestrator | skipping: [testbed-node-3]
2026-03-25 05:47:44.783192 | orchestrator | 
2026-03-25 05:47:44.783203 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************
2026-03-25 05:47:44.783236 | orchestrator | Wednesday 25 March 2026 05:46:45 +0000 (0:00:01.120) 0:39:02.470 *******
2026-03-25 05:47:44.783244 | orchestrator | skipping: [testbed-node-3]
2026-03-25 05:47:44.783250 | orchestrator | 
2026-03-25 05:47:44.783261 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] *********************************
2026-03-25 05:47:44.783274 | orchestrator | Wednesday 25 March 2026 05:46:46 +0000 (0:00:01.108) 0:39:03.578 *******
2026-03-25 05:47:44.783285 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3
2026-03-25 05:47:44.783297 | orchestrator | 
2026-03-25 05:47:44.783309 | orchestrator | TASK [ceph-osd : Get osd ids] **************************************************
2026-03-25 05:47:44.783320 | orchestrator | Wednesday 25 March 2026 05:46:48 +0000 (0:00:01.540) 0:39:05.119 *******
2026-03-25 05:47:44.783363 | orchestrator | ok: [testbed-node-3]
2026-03-25 05:47:44.783376 | orchestrator | 
2026-03-25 05:47:44.783387 | orchestrator | TASK [ceph-osd : Collect osd ids] **********************************************
2026-03-25 05:47:44.783398 | orchestrator | Wednesday 25 March 2026 05:46:49 +0000 (0:00:01.513) 0:39:06.632 *******
2026-03-25 05:47:44.783409 | orchestrator | ok: [testbed-node-3]
2026-03-25 05:47:44.783421 | orchestrator | 
2026-03-25 05:47:44.783432 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************
2026-03-25 05:47:44.783442 | orchestrator | Wednesday 25 March 2026 05:46:53 +0000 (0:00:03.817) 0:39:10.450 *******
2026-03-25 05:47:44.783453 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3
2026-03-25 05:47:44.783464 | orchestrator | 
2026-03-25 05:47:44.783474 | orchestrator | TASK [ceph-osd : Generate systemd unit file] ***********************************
2026-03-25 05:47:44.783485 | orchestrator | Wednesday 25 March 2026 05:46:54 +0000 (0:00:01.473) 0:39:11.924 *******
2026-03-25 05:47:44.783491 | orchestrator | ok: [testbed-node-3]
2026-03-25 05:47:44.783497 | orchestrator | 
2026-03-25 05:47:44.783502 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************
2026-03-25 05:47:44.783508 | orchestrator | Wednesday 25 March 2026 05:46:56 +0000 (0:00:01.994) 0:39:13.918 *******
2026-03-25 05:47:44.783513 | orchestrator | ok: [testbed-node-3]
2026-03-25 05:47:44.783524 | orchestrator | 
2026-03-25 05:47:44.783535 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] ***************************************
2026-03-25 05:47:44.783545 | orchestrator | Wednesday 25 March 2026 05:46:58 +0000 (0:00:01.951) 0:39:15.869 *******
2026-03-25 05:47:44.783556 | orchestrator | ok: [testbed-node-3]
2026-03-25 05:47:44.783567 | orchestrator | 
2026-03-25 05:47:44.783578 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] *************
2026-03-25 05:47:44.783590 | orchestrator | Wednesday 25 March 2026 05:47:01 +0000 (0:00:02.200) 0:39:18.070 *******
2026-03-25 05:47:44.783602 | orchestrator | skipping: [testbed-node-3]
2026-03-25 05:47:44.783613 | orchestrator | 
2026-03-25 05:47:44.783625 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] ***********************
2026-03-25 05:47:44.783635 | orchestrator | Wednesday 25 March 2026 05:47:02 +0000 (0:00:01.206) 0:39:19.276 *******
2026-03-25 05:47:44.783663 | orchestrator | skipping: [testbed-node-3]
2026-03-25 05:47:44.783675 | orchestrator | 
2026-03-25 05:47:44.783688 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] *********
2026-03-25 05:47:44.783699 | orchestrator | Wednesday 25 March 2026 05:47:03 +0000 (0:00:01.245) 0:39:20.522 *******
2026-03-25 05:47:44.783710 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-03-25 05:47:44.783720 | orchestrator | ok: [testbed-node-3] => (item=5)
2026-03-25 05:47:44.783730 | orchestrator | 
2026-03-25 05:47:44.783740 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] *****************
2026-03-25 05:47:44.783751 | orchestrator | Wednesday 25 March 2026 05:47:05 +0000 (0:00:01.817) 0:39:22.340 *******
2026-03-25 05:47:44.783762 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-03-25 05:47:44.783772 | orchestrator | ok: [testbed-node-3] => (item=5)
2026-03-25 05:47:44.783784 | orchestrator | 
2026-03-25 05:47:44.783794 | orchestrator | TASK [ceph-osd : Systemd start osd] ********************************************
2026-03-25 05:47:44.783805 | orchestrator | Wednesday 25 March 2026 05:47:08 +0000 (0:00:02.969) 0:39:25.309 *******
2026-03-25 05:47:44.783816 | orchestrator | changed: [testbed-node-3] => (item=0)
2026-03-25 05:47:44.783850 | orchestrator | changed: [testbed-node-3] => (item=5)
2026-03-25 05:47:44.783863 | orchestrator | 
2026-03-25 05:47:44.783874 | orchestrator | TASK [ceph-osd : Unset noup flag] **********************************************
2026-03-25 05:47:44.783883 | orchestrator | Wednesday 25 March 2026 05:47:13 +0000 (0:00:04.727) 0:39:30.037 *******
2026-03-25 05:47:44.783895 | orchestrator | skipping: [testbed-node-3]
2026-03-25 05:47:44.783905 | orchestrator | 
2026-03-25 05:47:44.783915 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************
2026-03-25 05:47:44.783926 | orchestrator | Wednesday 25 March 2026 05:47:14 +0000 (0:00:01.224) 0:39:31.261 *******
2026-03-25 05:47:44.783936 | orchestrator | skipping: [testbed-node-3]
2026-03-25 05:47:44.783956 | orchestrator | 
2026-03-25 05:47:44.783969 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] **************************************
2026-03-25 05:47:44.783981 | orchestrator | Wednesday 25 March 2026 05:47:15 +0000 (0:00:01.231) 0:39:32.492 *******
2026-03-25 05:47:44.783991 | orchestrator | skipping: [testbed-node-3]
2026-03-25 05:47:44.784002 | orchestrator | 
2026-03-25 05:47:44.784013 | orchestrator | TASK [Scan ceph-disk osds with ceph-volume if deploying nautilus] **************
2026-03-25 05:47:44.784024 | orchestrator | Wednesday 25 March 2026 05:47:17 +0000 (0:00:01.816) 0:39:34.308 *******
2026-03-25 05:47:44.784035 | orchestrator | skipping: [testbed-node-3]
2026-03-25 05:47:44.784047 | orchestrator | 
2026-03-25 05:47:44.784057 | orchestrator | TASK [Activate scanned ceph-disk osds and migrate to ceph-volume if deploying nautilus] ***
2026-03-25 05:47:44.784068 | orchestrator | Wednesday 25 March 2026 05:47:18 +0000 (0:00:01.123) 0:39:35.432 *******
2026-03-25 05:47:44.784080 | orchestrator | skipping: [testbed-node-3]
2026-03-25 05:47:44.784091 | orchestrator | 
2026-03-25 05:47:44.784102 | orchestrator | TASK [Waiting for clean pgs...] ************************************************
2026-03-25 05:47:44.784114 | orchestrator | Wednesday 25 March 2026 05:47:19 +0000 (0:00:01.206) 0:39:36.638 *******
2026-03-25 05:47:44.784126 | orchestrator | FAILED - RETRYING: [testbed-node-3 -> testbed-node-0]: Waiting for clean pgs... (600 retries left).
2026-03-25 05:47:44.784139 | orchestrator | FAILED - RETRYING: [testbed-node-3 -> testbed-node-0]: Waiting for clean pgs... (599 retries left).
2026-03-25 05:47:44.784152 | orchestrator | FAILED - RETRYING: [testbed-node-3 -> testbed-node-0]: Waiting for clean pgs... (598 retries left).
2026-03-25 05:47:44.784163 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-03-25 05:47:44.784175 | orchestrator | 
2026-03-25 05:47:44.784186 | orchestrator | PLAY [Upgrade ceph osds cluster] ***********************************************
2026-03-25 05:47:44.784198 | orchestrator | 
2026-03-25 05:47:44.784209 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-03-25 05:47:44.784253 | orchestrator | Wednesday 25 March 2026 05:47:30 +0000 (0:00:10.932) 0:39:47.570 *******
2026-03-25 05:47:44.784266 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-4
2026-03-25 05:47:44.784276 | orchestrator | 
2026-03-25 05:47:44.784287 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-03-25 05:47:44.784295 | orchestrator | Wednesday 25 March 2026 05:47:31 +0000 (0:00:01.180) 0:39:48.751 *******
2026-03-25 05:47:44.784304 | orchestrator | ok: [testbed-node-4]
2026-03-25 05:47:44.784314 | orchestrator | 
2026-03-25 05:47:44.784323 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-03-25 05:47:44.784334 | orchestrator | Wednesday 25 March 2026 05:47:33 +0000 (0:00:01.438) 0:39:50.190 *******
2026-03-25 05:47:44.784346 | orchestrator | ok: [testbed-node-4]
2026-03-25 05:47:44.784357 | orchestrator | 
2026-03-25 05:47:44.784368 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-03-25 05:47:44.784378 | orchestrator | Wednesday 25 March 2026 05:47:34 +0000 (0:00:01.193) 0:39:51.383 *******
2026-03-25 05:47:44.784389 | orchestrator | ok: [testbed-node-4]
2026-03-25 05:47:44.784399 | orchestrator | 
2026-03-25 05:47:44.784410 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-03-25 05:47:44.784420 | orchestrator | Wednesday 25 March 2026 05:47:35 +0000 (0:00:01.507) 0:39:52.891 *******
2026-03-25 05:47:44.784429 | orchestrator | ok: [testbed-node-4]
2026-03-25 05:47:44.784438 | orchestrator | 
2026-03-25 05:47:44.784448 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-03-25 05:47:44.784460 | orchestrator | Wednesday 25 March 2026 05:47:37 +0000 (0:00:01.134) 0:39:54.026 *******
2026-03-25 05:47:44.784469 | orchestrator | ok: [testbed-node-4]
2026-03-25 05:47:44.784479 | orchestrator | 
2026-03-25 05:47:44.784489 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-03-25 05:47:44.784495 | orchestrator | Wednesday 25 March 2026 05:47:38 +0000 (0:00:01.143) 0:39:55.169 *******
2026-03-25 05:47:44.784500 | orchestrator | ok: [testbed-node-4]
2026-03-25 05:47:44.784513 | orchestrator | 
2026-03-25 05:47:44.784519 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-03-25 05:47:44.784525 | orchestrator | Wednesday 25 March 2026 05:47:39 +0000 (0:00:01.202) 0:39:56.372 *******
2026-03-25 05:47:44.784531 | orchestrator | skipping: [testbed-node-4]
2026-03-25 05:47:44.784540 | orchestrator | 
2026-03-25 05:47:44.784558 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-03-25 05:47:44.784571 | orchestrator | Wednesday 25 March 2026 05:47:40 +0000 (0:00:01.206) 0:39:57.579 *******
2026-03-25 05:47:44.784583 | orchestrator | ok: [testbed-node-4]
2026-03-25 05:47:44.784595 | orchestrator | 
2026-03-25 05:47:44.784604 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-03-25 05:47:44.784615 | orchestrator | Wednesday 25 March 2026 05:47:41 +0000 (0:00:01.133) 0:39:58.713 *******
2026-03-25 05:47:44.784626 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-25 05:47:44.784637 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-25 05:47:44.784647 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-25 05:47:44.784659 | orchestrator | 
2026-03-25 05:47:44.784670 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-03-25 05:47:44.784682 | orchestrator | Wednesday 25 March 2026 05:47:43 +0000 (0:00:01.811) 0:40:00.524 *******
2026-03-25 05:47:44.784704 | orchestrator | ok: [testbed-node-4]
2026-03-25 05:48:09.837346 | orchestrator | 
2026-03-25 05:48:09.837469 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-03-25 05:48:09.837487 | orchestrator | Wednesday 25 March 2026 05:47:44 +0000 (0:00:01.265) 0:40:01.790 *******
2026-03-25 05:48:09.837500 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-25 05:48:09.837512 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-25 05:48:09.837523 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-25 05:48:09.837534 | orchestrator | 
2026-03-25 05:48:09.837545 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-03-25 05:48:09.837556 | orchestrator | Wednesday 25 March 2026 05:47:47 +0000 (0:00:02.921) 0:40:04.712 *******
2026-03-25 05:48:09.837568 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0) 
2026-03-25 05:48:09.837579 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1) 
2026-03-25 05:48:09.837590 | orchestrator | skipping: [testbed-node-4]
=> (item=testbed-node-2)  2026-03-25 05:48:09.837601 | orchestrator | skipping: [testbed-node-4] 2026-03-25 05:48:09.837613 | orchestrator | 2026-03-25 05:48:09.837624 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-03-25 05:48:09.837635 | orchestrator | Wednesday 25 March 2026 05:47:49 +0000 (0:00:01.442) 0:40:06.154 ******* 2026-03-25 05:48:09.837647 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-03-25 05:48:09.837661 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-03-25 05:48:09.837672 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-03-25 05:48:09.837684 | orchestrator | skipping: [testbed-node-4] 2026-03-25 05:48:09.837695 | orchestrator | 2026-03-25 05:48:09.837706 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-03-25 05:48:09.837718 | orchestrator | Wednesday 25 March 2026 05:47:50 +0000 (0:00:01.648) 0:40:07.803 ******* 2026-03-25 05:48:09.837755 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 
'ansible_loop_var': 'item'})  2026-03-25 05:48:09.837771 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-25 05:48:09.837783 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-25 05:48:09.837794 | orchestrator | skipping: [testbed-node-4] 2026-03-25 05:48:09.837805 | orchestrator | 2026-03-25 05:48:09.837816 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-03-25 05:48:09.837843 | orchestrator | Wednesday 25 March 2026 05:47:51 +0000 (0:00:01.180) 0:40:08.983 ******* 2026-03-25 05:48:09.837876 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': 'f2f4f0f2e000', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-25 05:47:45.290181', 'end': '2026-03-25 05:47:45.343385', 'delta': '0:00:00.053204', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['f2f4f0f2e000'], 'stderr_lines': [], 'failed': 
False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-03-25 05:48:09.837891 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': '04618a84c691', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-25 05:47:45.879623', 'end': '2026-03-25 05:47:45.933122', 'delta': '0:00:00.053499', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['04618a84c691'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-03-25 05:48:09.837903 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': 'da72f46e99c2', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-25 05:47:46.437272', 'end': '2026-03-25 05:47:46.489718', 'delta': '0:00:00.052446', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['da72f46e99c2'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-03-25 05:48:09.837923 | orchestrator | 2026-03-25 05:48:09.837935 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-03-25 05:48:09.837946 | orchestrator | Wednesday 25 March 2026 05:47:53 +0000 (0:00:01.185) 0:40:10.169 ******* 2026-03-25 05:48:09.837957 | 
orchestrator | ok: [testbed-node-4] 2026-03-25 05:48:09.837968 | orchestrator | 2026-03-25 05:48:09.837979 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-03-25 05:48:09.837990 | orchestrator | Wednesday 25 March 2026 05:47:54 +0000 (0:00:01.253) 0:40:11.423 ******* 2026-03-25 05:48:09.838001 | orchestrator | skipping: [testbed-node-4] 2026-03-25 05:48:09.838012 | orchestrator | 2026-03-25 05:48:09.838092 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-03-25 05:48:09.838103 | orchestrator | Wednesday 25 March 2026 05:47:55 +0000 (0:00:01.350) 0:40:12.773 ******* 2026-03-25 05:48:09.838114 | orchestrator | ok: [testbed-node-4] 2026-03-25 05:48:09.838125 | orchestrator | 2026-03-25 05:48:09.838135 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-03-25 05:48:09.838146 | orchestrator | Wednesday 25 March 2026 05:47:56 +0000 (0:00:01.144) 0:40:13.918 ******* 2026-03-25 05:48:09.838156 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-03-25 05:48:09.838167 | orchestrator | 2026-03-25 05:48:09.838178 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-25 05:48:09.838212 | orchestrator | Wednesday 25 March 2026 05:48:00 +0000 (0:00:03.446) 0:40:17.365 ******* 2026-03-25 05:48:09.838224 | orchestrator | ok: [testbed-node-4] 2026-03-25 05:48:09.838235 | orchestrator | 2026-03-25 05:48:09.838246 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-03-25 05:48:09.838256 | orchestrator | Wednesday 25 March 2026 05:48:01 +0000 (0:00:01.154) 0:40:18.520 ******* 2026-03-25 05:48:09.838267 | orchestrator | skipping: [testbed-node-4] 2026-03-25 05:48:09.838278 | orchestrator | 2026-03-25 05:48:09.838290 | orchestrator | TASK [ceph-facts : Generate cluster fsid] 
************************************** 2026-03-25 05:48:09.838308 | orchestrator | Wednesday 25 March 2026 05:48:02 +0000 (0:00:01.192) 0:40:19.712 ******* 2026-03-25 05:48:09.838326 | orchestrator | skipping: [testbed-node-4] 2026-03-25 05:48:09.838343 | orchestrator | 2026-03-25 05:48:09.838358 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-25 05:48:09.838373 | orchestrator | Wednesday 25 March 2026 05:48:03 +0000 (0:00:01.258) 0:40:20.971 ******* 2026-03-25 05:48:09.838390 | orchestrator | skipping: [testbed-node-4] 2026-03-25 05:48:09.838404 | orchestrator | 2026-03-25 05:48:09.838422 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-03-25 05:48:09.838446 | orchestrator | Wednesday 25 March 2026 05:48:05 +0000 (0:00:01.167) 0:40:22.139 ******* 2026-03-25 05:48:09.838458 | orchestrator | skipping: [testbed-node-4] 2026-03-25 05:48:09.838469 | orchestrator | 2026-03-25 05:48:09.838480 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-03-25 05:48:09.838490 | orchestrator | Wednesday 25 March 2026 05:48:06 +0000 (0:00:01.167) 0:40:23.306 ******* 2026-03-25 05:48:09.838501 | orchestrator | ok: [testbed-node-4] 2026-03-25 05:48:09.838512 | orchestrator | 2026-03-25 05:48:09.838523 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-03-25 05:48:09.838534 | orchestrator | Wednesday 25 March 2026 05:48:07 +0000 (0:00:01.202) 0:40:24.509 ******* 2026-03-25 05:48:09.838544 | orchestrator | skipping: [testbed-node-4] 2026-03-25 05:48:09.838555 | orchestrator | 2026-03-25 05:48:09.838566 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-03-25 05:48:09.838577 | orchestrator | Wednesday 25 March 2026 05:48:08 +0000 (0:00:01.138) 0:40:25.647 ******* 2026-03-25 05:48:09.838587 | orchestrator | ok: [testbed-node-4] 
2026-03-25 05:48:09.838598 | orchestrator | 2026-03-25 05:48:09.838609 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-03-25 05:48:09.838628 | orchestrator | Wednesday 25 March 2026 05:48:09 +0000 (0:00:01.193) 0:40:26.841 ******* 2026-03-25 05:48:12.480949 | orchestrator | skipping: [testbed-node-4] 2026-03-25 05:48:12.481091 | orchestrator | 2026-03-25 05:48:12.481118 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-03-25 05:48:12.481139 | orchestrator | Wednesday 25 March 2026 05:48:11 +0000 (0:00:01.223) 0:40:28.064 ******* 2026-03-25 05:48:12.481160 | orchestrator | ok: [testbed-node-4] 2026-03-25 05:48:12.481180 | orchestrator | 2026-03-25 05:48:12.481229 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-03-25 05:48:12.481250 | orchestrator | Wednesday 25 March 2026 05:48:12 +0000 (0:00:01.168) 0:40:29.233 ******* 2026-03-25 05:48:12.481265 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-25 05:48:12.481347 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--fa1f2bca--96f4--5f59--9dac--c3efdd146138-osd--block--fa1f2bca--96f4--5f59--9dac--c3efdd146138', 'dm-uuid-LVM-qi80GQE6Tcg1H1Qaou1HQKIw0Y18K2MMiRtObCOmMljlX3NyraHv57elKkc4U5Oq'], 'uuids': ['1a1bfadf-e219-47e2-8705-0963963507ec'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '37f05188', 'removable': '0', 
'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['iRtObC-OmMl-jlX3-Nyra-Hv57-elKk-c4U5Oq']}})  2026-03-25 05:48:12.481362 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3e1f7d9f-c106-4693-b0da-d762a5de4a11', 'scsi-SQEMU_QEMU_HARDDISK_3e1f7d9f-c106-4693-b0da-d762a5de4a11'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '3e1f7d9f', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-03-25 05:48:12.481375 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-CIqKvA-lt1d-4qQz-KNts-krwk-yQ0u-1PHslV', 'scsi-0QEMU_QEMU_HARDDISK_10d736b4-dcf8-42aa-aae6-a1381d72468f', 'scsi-SQEMU_QEMU_HARDDISK_10d736b4-dcf8-42aa-aae6-a1381d72468f'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '10d736b4', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--82366886--ea97--5dba--b5cd--187414e0593f-osd--block--82366886--ea97--5dba--b5cd--187414e0593f']}})  2026-03-25 05:48:12.481403 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-25 05:48:12.481415 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-25 05:48:12.481462 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-25-01-43-06-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-03-25 05:48:12.481478 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 
'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-25 05:48:12.481492 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-X0sqLU-d6id-Xl2r-npkf-AOrM-ye3X-xtdnqp', 'dm-uuid-CRYPT-LUKS2-d0a28742b6dc46aab152442a6244f51b-X0sqLU-d6id-Xl2r-npkf-AOrM-ye3X-xtdnqp'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-03-25 05:48:12.481505 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-25 05:48:12.481519 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--82366886--ea97--5dba--b5cd--187414e0593f-osd--block--82366886--ea97--5dba--b5cd--187414e0593f', 'dm-uuid-LVM-1B6VDGPSmmjj7HLdTGtTln0UtIEd11ZxX0sqLUd6idXl2rnpkfAOrMye3Xxtdnqp'], 'uuids': ['d0a28742-b6dc-46aa-b152-442a6244f51b'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '10d736b4', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['X0sqLU-d6id-Xl2r-npkf-AOrM-ye3X-xtdnqp']}})  2026-03-25 05:48:12.481532 | orchestrator | skipping: 
[testbed-node-4] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-d5kG3K-9osj-2aIh-xjKb-72Hm-d5Wn-f2zH7s', 'scsi-0QEMU_QEMU_HARDDISK_37f05188-2a00-44e2-a0b8-7549f9da5347', 'scsi-SQEMU_QEMU_HARDDISK_37f05188-2a00-44e2-a0b8-7549f9da5347'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '37f05188', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--fa1f2bca--96f4--5f59--9dac--c3efdd146138-osd--block--fa1f2bca--96f4--5f59--9dac--c3efdd146138']}})  2026-03-25 05:48:12.481552 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-25 05:48:12.481587 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6cb51c54-ae34-41ee-aa7a-55f1cdeeb529', 'scsi-SQEMU_QEMU_HARDDISK_6cb51c54-ae34-41ee-aa7a-55f1cdeeb529'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '6cb51c54', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6cb51c54-ae34-41ee-aa7a-55f1cdeeb529-part16', 'scsi-SQEMU_QEMU_HARDDISK_6cb51c54-ae34-41ee-aa7a-55f1cdeeb529-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': 
'227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6cb51c54-ae34-41ee-aa7a-55f1cdeeb529-part14', 'scsi-SQEMU_QEMU_HARDDISK_6cb51c54-ae34-41ee-aa7a-55f1cdeeb529-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6cb51c54-ae34-41ee-aa7a-55f1cdeeb529-part15', 'scsi-SQEMU_QEMU_HARDDISK_6cb51c54-ae34-41ee-aa7a-55f1cdeeb529-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6cb51c54-ae34-41ee-aa7a-55f1cdeeb529-part1', 'scsi-SQEMU_QEMU_HARDDISK_6cb51c54-ae34-41ee-aa7a-55f1cdeeb529-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-03-25 05:48:13.940045 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-25 05:48:13.940168 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-25 05:48:13.940229 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-iRtObC-OmMl-jlX3-Nyra-Hv57-elKk-c4U5Oq', 'dm-uuid-CRYPT-LUKS2-1a1bfadfe21947e287050963963507ec-iRtObC-OmMl-jlX3-Nyra-Hv57-elKk-c4U5Oq'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-03-25 05:48:13.940247 | orchestrator | skipping: [testbed-node-4] 2026-03-25 05:48:13.940276 | orchestrator | 2026-03-25 05:48:13.940301 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-03-25 05:48:13.940332 | orchestrator | Wednesday 25 March 2026 05:48:13 +0000 (0:00:01.471) 0:40:30.704 ******* 2026-03-25 05:48:13.940367 | orchestrator | skipping: [testbed-node-4] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 05:48:13.940380 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--fa1f2bca--96f4--5f59--9dac--c3efdd146138-osd--block--fa1f2bca--96f4--5f59--9dac--c3efdd146138', 'dm-uuid-LVM-qi80GQE6Tcg1H1Qaou1HQKIw0Y18K2MMiRtObCOmMljlX3NyraHv57elKkc4U5Oq'], 'uuids': ['1a1bfadf-e219-47e2-8705-0963963507ec'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '37f05188', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['iRtObC-OmMl-jlX3-Nyra-Hv57-elKk-c4U5Oq']}}, 'ansible_loop_var': 'item'})  2026-03-25 05:48:13.940394 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3e1f7d9f-c106-4693-b0da-d762a5de4a11', 'scsi-SQEMU_QEMU_HARDDISK_3e1f7d9f-c106-4693-b0da-d762a5de4a11'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 
'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '3e1f7d9f', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 05:48:13.940425 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-CIqKvA-lt1d-4qQz-KNts-krwk-yQ0u-1PHslV', 'scsi-0QEMU_QEMU_HARDDISK_10d736b4-dcf8-42aa-aae6-a1381d72468f', 'scsi-SQEMU_QEMU_HARDDISK_10d736b4-dcf8-42aa-aae6-a1381d72468f'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '10d736b4', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--82366886--ea97--5dba--b5cd--187414e0593f-osd--block--82366886--ea97--5dba--b5cd--187414e0593f']}}, 'ansible_loop_var': 'item'})  2026-03-25 05:48:13.940441 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 05:48:13.940465 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 05:48:13.940477 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-25-01-43-06-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 05:48:13.940489 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 05:48:13.940508 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-X0sqLU-d6id-Xl2r-npkf-AOrM-ye3X-xtdnqp', 'dm-uuid-CRYPT-LUKS2-d0a28742b6dc46aab152442a6244f51b-X0sqLU-d6id-Xl2r-npkf-AOrM-ye3X-xtdnqp'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 05:48:19.406383 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 05:48:19.406533 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--82366886--ea97--5dba--b5cd--187414e0593f-osd--block--82366886--ea97--5dba--b5cd--187414e0593f', 'dm-uuid-LVM-1B6VDGPSmmjj7HLdTGtTln0UtIEd11ZxX0sqLUd6idXl2rnpkfAOrMye3Xxtdnqp'], 'uuids': ['d0a28742-b6dc-46aa-b152-442a6244f51b'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '10d736b4', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['X0sqLU-d6id-Xl2r-npkf-AOrM-ye3X-xtdnqp']}}, 'ansible_loop_var': 'item'})  2026-03-25 05:48:19.406587 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-d5kG3K-9osj-2aIh-xjKb-72Hm-d5Wn-f2zH7s', 'scsi-0QEMU_QEMU_HARDDISK_37f05188-2a00-44e2-a0b8-7549f9da5347', 'scsi-SQEMU_QEMU_HARDDISK_37f05188-2a00-44e2-a0b8-7549f9da5347'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '37f05188', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--fa1f2bca--96f4--5f59--9dac--c3efdd146138-osd--block--fa1f2bca--96f4--5f59--9dac--c3efdd146138']}}, 'ansible_loop_var': 'item'})  2026-03-25 05:48:19.406606 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 05:48:19.406640 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6cb51c54-ae34-41ee-aa7a-55f1cdeeb529', 'scsi-SQEMU_QEMU_HARDDISK_6cb51c54-ae34-41ee-aa7a-55f1cdeeb529'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '6cb51c54', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6cb51c54-ae34-41ee-aa7a-55f1cdeeb529-part16', 'scsi-SQEMU_QEMU_HARDDISK_6cb51c54-ae34-41ee-aa7a-55f1cdeeb529-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_6cb51c54-ae34-41ee-aa7a-55f1cdeeb529-part14', 'scsi-SQEMU_QEMU_HARDDISK_6cb51c54-ae34-41ee-aa7a-55f1cdeeb529-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6cb51c54-ae34-41ee-aa7a-55f1cdeeb529-part15', 'scsi-SQEMU_QEMU_HARDDISK_6cb51c54-ae34-41ee-aa7a-55f1cdeeb529-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6cb51c54-ae34-41ee-aa7a-55f1cdeeb529-part1', 'scsi-SQEMU_QEMU_HARDDISK_6cb51c54-ae34-41ee-aa7a-55f1cdeeb529-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 05:48:19.406669 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 05:48:19.406682 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 05:48:19.406694 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-iRtObC-OmMl-jlX3-Nyra-Hv57-elKk-c4U5Oq', 'dm-uuid-CRYPT-LUKS2-1a1bfadfe21947e287050963963507ec-iRtObC-OmMl-jlX3-Nyra-Hv57-elKk-c4U5Oq'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-25 05:48:19.406706 | orchestrator | skipping: [testbed-node-4]
2026-03-25 05:48:19.406719 | orchestrator |
2026-03-25 05:48:19.406731 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-03-25 05:48:19.406744 | orchestrator | Wednesday 25 March 2026 05:48:15 +0000 (0:00:01.485) 0:40:32.189 *******
2026-03-25 05:48:19.406755 | orchestrator | ok: [testbed-node-4]
2026-03-25 05:48:19.406767 | orchestrator |
2026-03-25 05:48:19.406778 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-03-25 05:48:19.406788 | orchestrator | Wednesday 25 March 2026 05:48:16 +0000 (0:00:01.539) 0:40:33.729 *******
2026-03-25 05:48:19.406799 | orchestrator | ok: [testbed-node-4]
2026-03-25 05:48:19.406810 | orchestrator |
2026-03-25 05:48:19.406821 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-03-25 05:48:19.406831 | orchestrator | Wednesday 25 March 2026 05:48:17 +0000 (0:00:01.124) 0:40:34.854 *******
2026-03-25 05:48:19.406842 | orchestrator | ok: [testbed-node-4]
2026-03-25 05:48:19.406853 | orchestrator |
2026-03-25 05:48:19.406864 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-03-25 05:48:19.406881 | orchestrator | Wednesday 25 March 2026 05:48:19 +0000 (0:00:01.561) 0:40:36.416 *******
2026-03-25 05:49:01.094580 | orchestrator | skipping: [testbed-node-4]
2026-03-25 05:49:01.094737 | orchestrator |
2026-03-25 05:49:01.094754 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-03-25 05:49:01.094768 | orchestrator | Wednesday 25 March 2026 05:48:20 +0000 (0:00:01.216) 0:40:37.632 *******
2026-03-25 05:49:01.094779 | orchestrator | skipping: [testbed-node-4]
2026-03-25 05:49:01.094791 | orchestrator |
2026-03-25 05:49:01.094803 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-03-25 05:49:01.094844 | orchestrator | Wednesday 25 March 2026 05:48:21 +0000 (0:00:01.227) 0:40:38.860 *******
2026-03-25 05:49:01.094856 | orchestrator | skipping: [testbed-node-4]
2026-03-25 05:49:01.094866 | orchestrator |
2026-03-25 05:49:01.094878 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-03-25 05:49:01.094890 | orchestrator | Wednesday 25 March 2026 05:48:23 +0000 (0:00:01.159) 0:40:40.021 *******
2026-03-25 05:49:01.094901 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-03-25 05:49:01.094913 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-03-25 05:49:01.094923 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2026-03-25 05:49:01.094934 | orchestrator |
2026-03-25 05:49:01.094944 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-03-25 05:49:01.094955 | orchestrator | Wednesday 25 March 2026 05:48:24 +0000 (0:00:01.794) 0:40:41.815 *******
2026-03-25 05:49:01.094966 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-03-25 05:49:01.094978 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-03-25 05:49:01.094989 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-03-25 05:49:01.095006 | orchestrator | skipping: [testbed-node-4]
2026-03-25 05:49:01.095024 | orchestrator |
2026-03-25 05:49:01.095043 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-03-25 05:49:01.095060 | orchestrator | Wednesday 25 March 2026 05:48:26 +0000 (0:00:01.289) 0:40:43.105 *******
2026-03-25 05:49:01.095101 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-4
2026-03-25 05:49:01.095122 | orchestrator |
2026-03-25 05:49:01.095172 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-03-25 05:49:01.095196 | orchestrator | Wednesday 25 March 2026 05:48:27 +0000 (0:00:01.145) 0:40:44.250 *******
2026-03-25 05:49:01.095215 | orchestrator | skipping: [testbed-node-4]
2026-03-25 05:49:01.095233 | orchestrator |
2026-03-25 05:49:01.095253 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-03-25 05:49:01.095274 | orchestrator | Wednesday 25 March 2026 05:48:28 +0000 (0:00:01.169) 0:40:45.419 *******
2026-03-25 05:49:01.095293 | orchestrator | skipping: [testbed-node-4]
2026-03-25 05:49:01.095310 | orchestrator |
2026-03-25 05:49:01.095324 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-03-25 05:49:01.095337 | orchestrator | Wednesday 25 March 2026 05:48:29 +0000 (0:00:01.181) 0:40:46.600 *******
2026-03-25 05:49:01.095350 | orchestrator | skipping: [testbed-node-4]
2026-03-25 05:49:01.095362 | orchestrator |
2026-03-25 05:49:01.095375 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-03-25 05:49:01.095388 | orchestrator | Wednesday 25 March 2026 05:48:30 +0000 (0:00:01.127) 0:40:47.728 *******
2026-03-25 05:49:01.095401 | orchestrator | ok: [testbed-node-4]
2026-03-25 05:49:01.095414 | orchestrator |
2026-03-25 05:49:01.095427 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-03-25 05:49:01.095438 | orchestrator | Wednesday 25 March 2026 05:48:31 +0000 (0:00:01.239) 0:40:48.968 *******
2026-03-25 05:49:01.095449 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-03-25 05:49:01.095460 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-03-25 05:49:01.095471 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-03-25 05:49:01.095481 | orchestrator | skipping: [testbed-node-4]
2026-03-25 05:49:01.095492 | orchestrator |
2026-03-25 05:49:01.095503 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-03-25 05:49:01.095513 | orchestrator | Wednesday 25 March 2026 05:48:33 +0000 (0:00:01.421) 0:40:50.390 *******
2026-03-25 05:49:01.095524 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-03-25 05:49:01.095535 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-03-25 05:49:01.095545 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-03-25 05:49:01.095568 | orchestrator | skipping: [testbed-node-4]
2026-03-25 05:49:01.095579 | orchestrator |
2026-03-25 05:49:01.095590 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-03-25 05:49:01.095601 | orchestrator | Wednesday 25 March 2026 05:48:34 +0000 (0:00:01.371) 0:40:51.762 *******
2026-03-25 05:49:01.095612 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-03-25 05:49:01.095622 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-03-25 05:49:01.095633 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-03-25 05:49:01.095644 | orchestrator | skipping: [testbed-node-4]
2026-03-25 05:49:01.095655 | orchestrator |
2026-03-25 05:49:01.095666 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-03-25 05:49:01.095677 | orchestrator | Wednesday 25 March 2026 05:48:36 +0000 (0:00:01.428) 0:40:53.190 *******
2026-03-25 05:49:01.095688 | orchestrator | ok: [testbed-node-4]
2026-03-25 05:49:01.095698 | orchestrator |
2026-03-25 05:49:01.095709 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-03-25 05:49:01.095720 | orchestrator | Wednesday 25 March 2026 05:48:37 +0000 (0:00:01.139) 0:40:54.330 *******
2026-03-25 05:49:01.095731 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-03-25 05:49:01.095741 | orchestrator |
2026-03-25 05:49:01.095752 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-03-25 05:49:01.095763 | orchestrator | Wednesday 25 March 2026 05:48:38 +0000 (0:00:01.381) 0:40:55.712 *******
2026-03-25 05:49:01.095796 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-25 05:49:01.095808 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-25 05:49:01.095819 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-25 05:49:01.095830 | orchestrator | ok: [testbed-node-4 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-03-25 05:49:01.095841 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-4)
2026-03-25 05:49:01.095851 | orchestrator | ok: [testbed-node-4 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-03-25 05:49:01.095862 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-03-25 05:49:01.095872 | orchestrator |
2026-03-25 05:49:01.095883 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-03-25 05:49:01.095894 | orchestrator | Wednesday 25 March 2026 05:48:40 +0000 (0:00:01.817) 0:40:57.529 *******
2026-03-25 05:49:01.095905 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-25 05:49:01.095915 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-25 05:49:01.095926 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-25 05:49:01.095937 | orchestrator | ok: [testbed-node-4 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-03-25 05:49:01.095948 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-4)
2026-03-25 05:49:01.095958 | orchestrator | ok: [testbed-node-4 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-03-25 05:49:01.095969 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-03-25 05:49:01.095980 | orchestrator |
2026-03-25 05:49:01.095990 | orchestrator | TASK [Get osd numbers - non container] *****************************************
2026-03-25 05:49:01.096008 | orchestrator | Wednesday 25 March 2026 05:48:42 +0000 (0:00:02.303) 0:40:59.833 *******
2026-03-25 05:49:01.096020 | orchestrator | ok: [testbed-node-4]
2026-03-25 05:49:01.096031 | orchestrator |
2026-03-25 05:49:01.096041 | orchestrator | TASK [Set num_osds] ************************************************************
2026-03-25 05:49:01.096052 | orchestrator | Wednesday 25 March 2026 05:48:43 +0000 (0:00:01.137) 0:41:00.971 *******
2026-03-25 05:49:01.096063 | orchestrator | ok: [testbed-node-4]
2026-03-25 05:49:01.096074 | orchestrator |
2026-03-25 05:49:01.096092 | orchestrator | TASK [Set_fact container_exec_cmd_osd] *****************************************
2026-03-25 05:49:01.096102 | orchestrator | Wednesday 25 March 2026 05:48:44 +0000 (0:00:00.817) 0:41:01.788 *******
2026-03-25 05:49:01.096113 | orchestrator | ok: [testbed-node-4]
2026-03-25 05:49:01.096124 | orchestrator |
2026-03-25 05:49:01.096135 | orchestrator | TASK [Stop ceph osd] ***********************************************************
2026-03-25 05:49:01.096179 | orchestrator | Wednesday 25 March 2026 05:48:45 +0000 (0:00:00.927) 0:41:02.716 *******
2026-03-25 05:49:01.096191 | orchestrator | changed: [testbed-node-4] => (item=1)
2026-03-25 05:49:01.096202 | orchestrator | changed: [testbed-node-4] => (item=4)
2026-03-25 05:49:01.096213 | orchestrator |
2026-03-25 05:49:01.096224 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-03-25 05:49:01.096235 | orchestrator | Wednesday 25 March 2026 05:48:49 +0000 (0:00:03.776) 0:41:06.493 *******
2026-03-25 05:49:01.096254 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-4
2026-03-25 05:49:01.096273 | orchestrator |
2026-03-25 05:49:01.096292 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-03-25 05:49:01.096309 | orchestrator | Wednesday 25 March 2026 05:48:50 +0000 (0:00:01.263) 0:41:07.756 *******
2026-03-25 05:49:01.096328 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-4
2026-03-25 05:49:01.096345 | orchestrator |
2026-03-25 05:49:01.096363 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-03-25 05:49:01.096416 | orchestrator | Wednesday 25 March 2026 05:48:51 +0000 (0:00:01.141) 0:41:08.898 *******
2026-03-25 05:49:01.096437 | orchestrator | skipping: [testbed-node-4]
2026-03-25 05:49:01.096455 | orchestrator |
2026-03-25 05:49:01.096472 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-03-25 05:49:01.096483 | orchestrator | Wednesday 25 March 2026 05:48:53 +0000 (0:00:01.158) 0:41:10.056 *******
2026-03-25 05:49:01.096494 | orchestrator | ok: [testbed-node-4]
2026-03-25 05:49:01.096504 | orchestrator |
2026-03-25 05:49:01.096515 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-03-25 05:49:01.096533 | orchestrator | Wednesday 25 March 2026 05:48:54 +0000 (0:00:01.502) 0:41:11.559 *******
2026-03-25 05:49:01.096550 | orchestrator | ok: [testbed-node-4]
2026-03-25 05:49:01.096569 | orchestrator |
2026-03-25 05:49:01.096588 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-03-25 05:49:01.096607 | orchestrator | Wednesday 25 March 2026 05:48:56 +0000 (0:00:01.560) 0:41:13.119 *******
2026-03-25 05:49:01.096627 | orchestrator | ok: [testbed-node-4]
2026-03-25 05:49:01.096646 | orchestrator |
2026-03-25 05:49:01.096666 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-03-25 05:49:01.096680 | orchestrator | Wednesday 25 March 2026 05:48:57 +0000 (0:00:01.570) 0:41:14.689 *******
2026-03-25 05:49:01.096690 | orchestrator | skipping: [testbed-node-4]
2026-03-25 05:49:01.096701 | orchestrator |
2026-03-25 05:49:01.096712 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-03-25 05:49:01.096722 | orchestrator | Wednesday 25 March 2026 05:48:58 +0000 (0:00:01.134) 0:41:15.824 *******
2026-03-25 05:49:01.096733 | orchestrator | skipping: [testbed-node-4]
2026-03-25 05:49:01.096744 | orchestrator |
2026-03-25 05:49:01.096754 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-03-25 05:49:01.096765 | orchestrator | Wednesday 25 March 2026 05:48:59 +0000 (0:00:01.153) 0:41:16.977 *******
2026-03-25 05:49:01.096776 | orchestrator | skipping: [testbed-node-4]
2026-03-25 05:49:01.096786 | orchestrator |
2026-03-25 05:49:01.096807 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-03-25 05:49:41.550526 | orchestrator | Wednesday 25 March 2026 05:49:01 +0000 (0:00:01.119) 0:41:18.097 *******
2026-03-25 05:49:41.550648 | orchestrator | ok: [testbed-node-4]
2026-03-25 05:49:41.550665 | orchestrator |
2026-03-25 05:49:41.550678 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-03-25 05:49:41.550715 | orchestrator | Wednesday 25 March 2026 05:49:02 +0000 (0:00:01.543) 0:41:19.640 *******
2026-03-25 05:49:41.550726 | orchestrator | ok: [testbed-node-4]
2026-03-25 05:49:41.550737 | orchestrator |
2026-03-25 05:49:41.550749 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-03-25 05:49:41.550759 | orchestrator | Wednesday 25 March 2026 05:49:04 +0000 (0:00:01.562) 0:41:21.203 *******
2026-03-25 05:49:41.550770 | orchestrator | skipping: [testbed-node-4]
2026-03-25 05:49:41.550782 | orchestrator |
2026-03-25 05:49:41.550793 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-03-25 05:49:41.550803 | orchestrator | Wednesday 25 March 2026 05:49:04 +0000 (0:00:00.808) 0:41:22.011 *******
2026-03-25 05:49:41.550814 | orchestrator | skipping: [testbed-node-4]
2026-03-25 05:49:41.550825 | orchestrator |
2026-03-25 05:49:41.550835 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-03-25 05:49:41.550846 | orchestrator | Wednesday 25 March 2026 05:49:05 +0000 (0:00:00.786) 0:41:22.797 *******
2026-03-25 05:49:41.550856 | orchestrator | ok: [testbed-node-4]
2026-03-25 05:49:41.550867 | orchestrator |
2026-03-25 05:49:41.550877 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-03-25 05:49:41.550888 | orchestrator | Wednesday 25 March 2026 05:49:06 +0000 (0:00:00.807) 0:41:23.605 *******
2026-03-25 05:49:41.550898 | orchestrator | ok: [testbed-node-4]
2026-03-25 05:49:41.550909 | orchestrator |
2026-03-25 05:49:41.550919 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-03-25 05:49:41.550930 | orchestrator | Wednesday 25 March 2026 05:49:07 +0000 (0:00:00.798) 0:41:24.404 *******
2026-03-25 05:49:41.550941 | orchestrator | ok: [testbed-node-4]
2026-03-25 05:49:41.550951 | orchestrator |
2026-03-25 05:49:41.550961 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-03-25 05:49:41.550986 | orchestrator | Wednesday 25 March 2026 05:49:08 +0000 (0:00:00.829) 0:41:25.234 *******
2026-03-25 05:49:41.550998 | orchestrator | skipping: [testbed-node-4]
2026-03-25 05:49:41.551008 | orchestrator |
2026-03-25 05:49:41.551019 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-03-25 05:49:41.551030 | orchestrator | Wednesday 25 March 2026 05:49:09 +0000 (0:00:00.785) 0:41:26.019 *******
2026-03-25 05:49:41.551041 | orchestrator | skipping: [testbed-node-4]
2026-03-25 05:49:41.551053 | orchestrator |
2026-03-25 05:49:41.551066 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-03-25 05:49:41.551078 | orchestrator | Wednesday 25 March 2026 05:49:09 +0000 (0:00:00.780) 0:41:26.800 *******
2026-03-25 05:49:41.551090 | orchestrator | skipping: [testbed-node-4]
2026-03-25 05:49:41.551102 | orchestrator |
2026-03-25 05:49:41.551143 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-03-25 05:49:41.551156 | orchestrator | Wednesday 25 March 2026 05:49:10 +0000 (0:00:00.821) 0:41:27.622 *******
2026-03-25 05:49:41.551167 | orchestrator | ok: [testbed-node-4]
2026-03-25 05:49:41.551179 | orchestrator |
2026-03-25 05:49:41.551191 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-03-25 05:49:41.551203 | orchestrator | Wednesday 25 March 2026 05:49:11 +0000 (0:00:00.819) 0:41:28.441 *******
2026-03-25 05:49:41.551215 | orchestrator | ok: [testbed-node-4]
2026-03-25 05:49:41.551227 | orchestrator |
2026-03-25 05:49:41.551239 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-03-25 05:49:41.551251 | orchestrator | Wednesday 25 March 2026 05:49:12 +0000 (0:00:00.848) 0:41:29.290 *******
2026-03-25 05:49:41.551263 | orchestrator | skipping: [testbed-node-4]
2026-03-25 05:49:41.551276 | orchestrator |
2026-03-25 05:49:41.551287 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-03-25 05:49:41.551298 | orchestrator | Wednesday 25 March 2026 05:49:13 +0000 (0:00:00.790) 0:41:30.081 *******
2026-03-25 05:49:41.551308 | orchestrator | skipping: [testbed-node-4]
2026-03-25 05:49:41.551319 | orchestrator |
2026-03-25 05:49:41.551329 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-03-25 05:49:41.551348 | orchestrator | Wednesday 25 March 2026 05:49:13 +0000 (0:00:00.811) 0:41:30.892 *******
2026-03-25 05:49:41.551359 | orchestrator | skipping: [testbed-node-4]
2026-03-25 05:49:41.551369 | orchestrator |
2026-03-25 05:49:41.551380 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-03-25 05:49:41.551390 | orchestrator | Wednesday 25 March 2026 05:49:14 +0000 (0:00:00.771) 0:41:31.664 *******
2026-03-25 05:49:41.551401 | orchestrator | skipping: [testbed-node-4]
2026-03-25 05:49:41.551412 | orchestrator |
2026-03-25 05:49:41.551422 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-03-25 05:49:41.551433 | orchestrator | Wednesday 25 March 2026 05:49:15 +0000 (0:00:00.863) 0:41:32.527 *******
2026-03-25 05:49:41.551443 | orchestrator | skipping: [testbed-node-4]
2026-03-25 05:49:41.551454 | orchestrator |
2026-03-25 05:49:41.551464 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-03-25 05:49:41.551475 | orchestrator | Wednesday 25 March 2026 05:49:16 +0000 (0:00:00.797) 0:41:33.325 *******
2026-03-25 05:49:41.551485 | orchestrator | skipping: [testbed-node-4]
2026-03-25 05:49:41.551496 | orchestrator |
2026-03-25 05:49:41.551506 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-03-25 05:49:41.551539 | orchestrator | Wednesday 25 March 2026 05:49:17 +0000 (0:00:00.788) 0:41:34.113 *******
2026-03-25 05:49:41.551551 | orchestrator | skipping: [testbed-node-4]
2026-03-25 05:49:41.551561 |
orchestrator | 2026-03-25 05:49:41.551572 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-03-25 05:49:41.551583 | orchestrator | Wednesday 25 March 2026 05:49:17 +0000 (0:00:00.775) 0:41:34.889 ******* 2026-03-25 05:49:41.551594 | orchestrator | skipping: [testbed-node-4] 2026-03-25 05:49:41.551604 | orchestrator | 2026-03-25 05:49:41.551615 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-03-25 05:49:41.551626 | orchestrator | Wednesday 25 March 2026 05:49:18 +0000 (0:00:00.749) 0:41:35.639 ******* 2026-03-25 05:49:41.551654 | orchestrator | skipping: [testbed-node-4] 2026-03-25 05:49:41.551666 | orchestrator | 2026-03-25 05:49:41.551676 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-03-25 05:49:41.551687 | orchestrator | Wednesday 25 March 2026 05:49:19 +0000 (0:00:00.821) 0:41:36.461 ******* 2026-03-25 05:49:41.551698 | orchestrator | skipping: [testbed-node-4] 2026-03-25 05:49:41.551709 | orchestrator | 2026-03-25 05:49:41.551719 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-03-25 05:49:41.551730 | orchestrator | Wednesday 25 March 2026 05:49:20 +0000 (0:00:00.798) 0:41:37.259 ******* 2026-03-25 05:49:41.551741 | orchestrator | skipping: [testbed-node-4] 2026-03-25 05:49:41.551751 | orchestrator | 2026-03-25 05:49:41.551762 | orchestrator | TASK [ceph-common : Include selinux.yml] *************************************** 2026-03-25 05:49:41.551773 | orchestrator | Wednesday 25 March 2026 05:49:21 +0000 (0:00:00.802) 0:41:38.061 ******* 2026-03-25 05:49:41.551783 | orchestrator | skipping: [testbed-node-4] 2026-03-25 05:49:41.551796 | orchestrator | 2026-03-25 05:49:41.551814 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-03-25 05:49:41.551833 | orchestrator | Wednesday 25 
March 2026 05:49:21 +0000 (0:00:00.804) 0:41:38.865 ******* 2026-03-25 05:49:41.551850 | orchestrator | ok: [testbed-node-4] 2026-03-25 05:49:41.551868 | orchestrator | 2026-03-25 05:49:41.551887 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-03-25 05:49:41.551906 | orchestrator | Wednesday 25 March 2026 05:49:23 +0000 (0:00:01.669) 0:41:40.535 ******* 2026-03-25 05:49:41.551924 | orchestrator | ok: [testbed-node-4] 2026-03-25 05:49:41.551943 | orchestrator | 2026-03-25 05:49:41.551955 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-03-25 05:49:41.551967 | orchestrator | Wednesday 25 March 2026 05:49:25 +0000 (0:00:01.855) 0:41:42.390 ******* 2026-03-25 05:49:41.551986 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-4 2026-03-25 05:49:41.552004 | orchestrator | 2026-03-25 05:49:41.552022 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-03-25 05:49:41.552065 | orchestrator | Wednesday 25 March 2026 05:49:26 +0000 (0:00:01.350) 0:41:43.741 ******* 2026-03-25 05:49:41.552078 | orchestrator | skipping: [testbed-node-4] 2026-03-25 05:49:41.552089 | orchestrator | 2026-03-25 05:49:41.552100 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-03-25 05:49:41.552130 | orchestrator | Wednesday 25 March 2026 05:49:27 +0000 (0:00:01.152) 0:41:44.894 ******* 2026-03-25 05:49:41.552141 | orchestrator | skipping: [testbed-node-4] 2026-03-25 05:49:41.552151 | orchestrator | 2026-03-25 05:49:41.552162 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-03-25 05:49:41.552173 | orchestrator | Wednesday 25 March 2026 05:49:29 +0000 (0:00:01.145) 0:41:46.039 ******* 2026-03-25 05:49:41.552184 | orchestrator | ok: [testbed-node-4] => 
(item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-03-25 05:49:41.552195 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-03-25 05:49:41.552205 | orchestrator | 2026-03-25 05:49:41.552216 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-03-25 05:49:41.552226 | orchestrator | Wednesday 25 March 2026 05:49:30 +0000 (0:00:01.831) 0:41:47.870 ******* 2026-03-25 05:49:41.552237 | orchestrator | ok: [testbed-node-4] 2026-03-25 05:49:41.552248 | orchestrator | 2026-03-25 05:49:41.552258 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-03-25 05:49:41.552269 | orchestrator | Wednesday 25 March 2026 05:49:32 +0000 (0:00:01.467) 0:41:49.338 ******* 2026-03-25 05:49:41.552279 | orchestrator | skipping: [testbed-node-4] 2026-03-25 05:49:41.552290 | orchestrator | 2026-03-25 05:49:41.552301 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-03-25 05:49:41.552311 | orchestrator | Wednesday 25 March 2026 05:49:33 +0000 (0:00:01.161) 0:41:50.500 ******* 2026-03-25 05:49:41.552322 | orchestrator | skipping: [testbed-node-4] 2026-03-25 05:49:41.552332 | orchestrator | 2026-03-25 05:49:41.552343 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-03-25 05:49:41.552353 | orchestrator | Wednesday 25 March 2026 05:49:34 +0000 (0:00:00.832) 0:41:51.332 ******* 2026-03-25 05:49:41.552364 | orchestrator | skipping: [testbed-node-4] 2026-03-25 05:49:41.552375 | orchestrator | 2026-03-25 05:49:41.552385 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-03-25 05:49:41.552396 | orchestrator | Wednesday 25 March 2026 05:49:35 +0000 (0:00:00.798) 0:41:52.131 ******* 2026-03-25 05:49:41.552406 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for 
testbed-node-4 2026-03-25 05:49:41.552417 | orchestrator | 2026-03-25 05:49:41.552427 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-03-25 05:49:41.552438 | orchestrator | Wednesday 25 March 2026 05:49:36 +0000 (0:00:01.100) 0:41:53.232 ******* 2026-03-25 05:49:41.552448 | orchestrator | ok: [testbed-node-4] 2026-03-25 05:49:41.552459 | orchestrator | 2026-03-25 05:49:41.552470 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-03-25 05:49:41.552480 | orchestrator | Wednesday 25 March 2026 05:49:37 +0000 (0:00:01.695) 0:41:54.927 ******* 2026-03-25 05:49:41.552491 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-25 05:49:41.552501 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-25 05:49:41.552512 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2026-03-25 05:49:41.552522 | orchestrator | skipping: [testbed-node-4] 2026-03-25 05:49:41.552533 | orchestrator | 2026-03-25 05:49:41.552543 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-03-25 05:49:41.552554 | orchestrator | Wednesday 25 March 2026 05:49:39 +0000 (0:00:01.200) 0:41:56.128 ******* 2026-03-25 05:49:41.552565 | orchestrator | skipping: [testbed-node-4] 2026-03-25 05:49:41.552575 | orchestrator | 2026-03-25 05:49:41.552586 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-03-25 05:49:41.552604 | orchestrator | Wednesday 25 March 2026 05:49:40 +0000 (0:00:01.174) 0:41:57.303 ******* 2026-03-25 05:49:41.552624 | orchestrator | skipping: [testbed-node-4] 2026-03-25 05:50:24.249506 | orchestrator | 2026-03-25 05:50:24.249640 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-03-25 05:50:24.249657 | 
orchestrator | Wednesday 25 March 2026 05:49:41 +0000 (0:00:01.255) 0:41:58.558 ******* 2026-03-25 05:50:24.249669 | orchestrator | skipping: [testbed-node-4] 2026-03-25 05:50:24.249682 | orchestrator | 2026-03-25 05:50:24.249693 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-03-25 05:50:24.249704 | orchestrator | Wednesday 25 March 2026 05:49:42 +0000 (0:00:01.179) 0:41:59.737 ******* 2026-03-25 05:50:24.249715 | orchestrator | skipping: [testbed-node-4] 2026-03-25 05:50:24.249726 | orchestrator | 2026-03-25 05:50:24.249737 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-03-25 05:50:24.249748 | orchestrator | Wednesday 25 March 2026 05:49:43 +0000 (0:00:01.222) 0:42:00.960 ******* 2026-03-25 05:50:24.249759 | orchestrator | skipping: [testbed-node-4] 2026-03-25 05:50:24.249770 | orchestrator | 2026-03-25 05:50:24.249781 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-03-25 05:50:24.249791 | orchestrator | Wednesday 25 March 2026 05:49:44 +0000 (0:00:00.781) 0:42:01.741 ******* 2026-03-25 05:50:24.249802 | orchestrator | ok: [testbed-node-4] 2026-03-25 05:50:24.249814 | orchestrator | 2026-03-25 05:50:24.249825 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-03-25 05:50:24.249836 | orchestrator | Wednesday 25 March 2026 05:49:46 +0000 (0:00:02.113) 0:42:03.854 ******* 2026-03-25 05:50:24.249846 | orchestrator | ok: [testbed-node-4] 2026-03-25 05:50:24.249857 | orchestrator | 2026-03-25 05:50:24.249868 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-03-25 05:50:24.249879 | orchestrator | Wednesday 25 March 2026 05:49:47 +0000 (0:00:00.793) 0:42:04.648 ******* 2026-03-25 05:50:24.249889 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-4 
2026-03-25 05:50:24.249900 | orchestrator | 2026-03-25 05:50:24.249911 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-03-25 05:50:24.249938 | orchestrator | Wednesday 25 March 2026 05:49:48 +0000 (0:00:01.133) 0:42:05.782 ******* 2026-03-25 05:50:24.249949 | orchestrator | skipping: [testbed-node-4] 2026-03-25 05:50:24.249960 | orchestrator | 2026-03-25 05:50:24.249970 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-03-25 05:50:24.249981 | orchestrator | Wednesday 25 March 2026 05:49:49 +0000 (0:00:01.178) 0:42:06.961 ******* 2026-03-25 05:50:24.249992 | orchestrator | skipping: [testbed-node-4] 2026-03-25 05:50:24.250003 | orchestrator | 2026-03-25 05:50:24.250013 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-03-25 05:50:24.250110 | orchestrator | Wednesday 25 March 2026 05:49:51 +0000 (0:00:01.179) 0:42:08.140 ******* 2026-03-25 05:50:24.250123 | orchestrator | skipping: [testbed-node-4] 2026-03-25 05:50:24.250136 | orchestrator | 2026-03-25 05:50:24.250148 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-03-25 05:50:24.250160 | orchestrator | Wednesday 25 March 2026 05:49:52 +0000 (0:00:01.215) 0:42:09.356 ******* 2026-03-25 05:50:24.250172 | orchestrator | skipping: [testbed-node-4] 2026-03-25 05:50:24.250184 | orchestrator | 2026-03-25 05:50:24.250197 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-03-25 05:50:24.250210 | orchestrator | Wednesday 25 March 2026 05:49:53 +0000 (0:00:01.145) 0:42:10.501 ******* 2026-03-25 05:50:24.250222 | orchestrator | skipping: [testbed-node-4] 2026-03-25 05:50:24.250234 | orchestrator | 2026-03-25 05:50:24.250246 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-03-25 05:50:24.250259 | orchestrator | 
Wednesday 25 March 2026 05:49:54 +0000 (0:00:01.160) 0:42:11.661 ******* 2026-03-25 05:50:24.250285 | orchestrator | skipping: [testbed-node-4] 2026-03-25 05:50:24.250308 | orchestrator | 2026-03-25 05:50:24.250346 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-03-25 05:50:24.250359 | orchestrator | Wednesday 25 March 2026 05:49:55 +0000 (0:00:01.191) 0:42:12.853 ******* 2026-03-25 05:50:24.250370 | orchestrator | skipping: [testbed-node-4] 2026-03-25 05:50:24.250383 | orchestrator | 2026-03-25 05:50:24.250395 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-03-25 05:50:24.250407 | orchestrator | Wednesday 25 March 2026 05:49:57 +0000 (0:00:01.188) 0:42:14.041 ******* 2026-03-25 05:50:24.250418 | orchestrator | skipping: [testbed-node-4] 2026-03-25 05:50:24.250428 | orchestrator | 2026-03-25 05:50:24.250439 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-03-25 05:50:24.250449 | orchestrator | Wednesday 25 March 2026 05:49:58 +0000 (0:00:01.142) 0:42:15.184 ******* 2026-03-25 05:50:24.250460 | orchestrator | ok: [testbed-node-4] 2026-03-25 05:50:24.250470 | orchestrator | 2026-03-25 05:50:24.250481 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-03-25 05:50:24.250491 | orchestrator | Wednesday 25 March 2026 05:49:59 +0000 (0:00:00.858) 0:42:16.042 ******* 2026-03-25 05:50:24.250502 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-4 2026-03-25 05:50:24.250513 | orchestrator | 2026-03-25 05:50:24.250523 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-03-25 05:50:24.250534 | orchestrator | Wednesday 25 March 2026 05:50:00 +0000 (0:00:01.111) 0:42:17.154 ******* 2026-03-25 05:50:24.250545 | orchestrator | ok: [testbed-node-4] => 
(item=/etc/ceph) 2026-03-25 05:50:24.250556 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/) 2026-03-25 05:50:24.250566 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/mon) 2026-03-25 05:50:24.250577 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd) 2026-03-25 05:50:24.250587 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/mds) 2026-03-25 05:50:24.250598 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2026-03-25 05:50:24.250608 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/crash) 2026-03-25 05:50:24.250618 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 2026-03-25 05:50:24.250629 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2026-03-25 05:50:24.250659 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2026-03-25 05:50:24.250670 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2026-03-25 05:50:24.250681 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2026-03-25 05:50:24.250691 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2026-03-25 05:50:24.250702 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-03-25 05:50:24.250713 | orchestrator | ok: [testbed-node-4] => (item=/var/run/ceph) 2026-03-25 05:50:24.250723 | orchestrator | ok: [testbed-node-4] => (item=/var/log/ceph) 2026-03-25 05:50:24.250734 | orchestrator | 2026-03-25 05:50:24.250745 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-03-25 05:50:24.250755 | orchestrator | Wednesday 25 March 2026 05:50:06 +0000 (0:00:06.251) 0:42:23.405 ******* 2026-03-25 05:50:24.250766 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-4 2026-03-25 05:50:24.250776 | orchestrator | 2026-03-25 05:50:24.250787 | orchestrator | TASK 
[ceph-config : Create rados gateway instance directories] ***************** 2026-03-25 05:50:24.250798 | orchestrator | Wednesday 25 March 2026 05:50:07 +0000 (0:00:01.126) 0:42:24.532 ******* 2026-03-25 05:50:24.250809 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-03-25 05:50:24.250821 | orchestrator | 2026-03-25 05:50:24.250831 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2026-03-25 05:50:24.250842 | orchestrator | Wednesday 25 March 2026 05:50:08 +0000 (0:00:01.463) 0:42:25.995 ******* 2026-03-25 05:50:24.250852 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-03-25 05:50:24.250872 | orchestrator | 2026-03-25 05:50:24.250882 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-03-25 05:50:24.250899 | orchestrator | Wednesday 25 March 2026 05:50:10 +0000 (0:00:01.621) 0:42:27.616 ******* 2026-03-25 05:50:24.250911 | orchestrator | skipping: [testbed-node-4] 2026-03-25 05:50:24.250921 | orchestrator | 2026-03-25 05:50:24.250932 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-03-25 05:50:24.250942 | orchestrator | Wednesday 25 March 2026 05:50:11 +0000 (0:00:00.761) 0:42:28.378 ******* 2026-03-25 05:50:24.250953 | orchestrator | skipping: [testbed-node-4] 2026-03-25 05:50:24.250963 | orchestrator | 2026-03-25 05:50:24.250974 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-03-25 05:50:24.250985 | orchestrator | Wednesday 25 March 2026 05:50:12 +0000 (0:00:00.761) 0:42:29.139 ******* 2026-03-25 05:50:24.250995 | orchestrator | skipping: [testbed-node-4] 2026-03-25 05:50:24.251006 | orchestrator | 2026-03-25 05:50:24.251016 | orchestrator | TASK [ceph-config : 
Set_fact rejected_devices] ********************************* 2026-03-25 05:50:24.251027 | orchestrator | Wednesday 25 March 2026 05:50:12 +0000 (0:00:00.819) 0:42:29.959 ******* 2026-03-25 05:50:24.251037 | orchestrator | skipping: [testbed-node-4] 2026-03-25 05:50:24.251048 | orchestrator | 2026-03-25 05:50:24.251059 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-03-25 05:50:24.251069 | orchestrator | Wednesday 25 March 2026 05:50:13 +0000 (0:00:00.809) 0:42:30.769 ******* 2026-03-25 05:50:24.251106 | orchestrator | skipping: [testbed-node-4] 2026-03-25 05:50:24.251116 | orchestrator | 2026-03-25 05:50:24.251127 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-03-25 05:50:24.251138 | orchestrator | Wednesday 25 March 2026 05:50:14 +0000 (0:00:00.823) 0:42:31.592 ******* 2026-03-25 05:50:24.251148 | orchestrator | skipping: [testbed-node-4] 2026-03-25 05:50:24.251159 | orchestrator | 2026-03-25 05:50:24.251169 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-03-25 05:50:24.251180 | orchestrator | Wednesday 25 March 2026 05:50:15 +0000 (0:00:00.787) 0:42:32.380 ******* 2026-03-25 05:50:24.251190 | orchestrator | skipping: [testbed-node-4] 2026-03-25 05:50:24.251201 | orchestrator | 2026-03-25 05:50:24.251211 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-03-25 05:50:24.251222 | orchestrator | Wednesday 25 March 2026 05:50:16 +0000 (0:00:00.770) 0:42:33.150 ******* 2026-03-25 05:50:24.251232 | orchestrator | skipping: [testbed-node-4] 2026-03-25 05:50:24.251243 | orchestrator | 2026-03-25 05:50:24.251253 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-03-25 05:50:24.251264 | orchestrator | Wednesday 25 
March 2026 05:50:16 +0000 (0:00:00.804) 0:42:33.955 ******* 2026-03-25 05:50:24.251274 | orchestrator | skipping: [testbed-node-4] 2026-03-25 05:50:24.251285 | orchestrator | 2026-03-25 05:50:24.251295 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-03-25 05:50:24.251306 | orchestrator | Wednesday 25 March 2026 05:50:17 +0000 (0:00:00.780) 0:42:34.736 ******* 2026-03-25 05:50:24.251316 | orchestrator | skipping: [testbed-node-4] 2026-03-25 05:50:24.251326 | orchestrator | 2026-03-25 05:50:24.251337 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-03-25 05:50:24.251347 | orchestrator | Wednesday 25 March 2026 05:50:18 +0000 (0:00:00.770) 0:42:35.506 ******* 2026-03-25 05:50:24.251358 | orchestrator | ok: [testbed-node-4] 2026-03-25 05:50:24.251368 | orchestrator | 2026-03-25 05:50:24.251379 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-03-25 05:50:24.251389 | orchestrator | Wednesday 25 March 2026 05:50:19 +0000 (0:00:00.884) 0:42:36.391 ******* 2026-03-25 05:50:24.251400 | orchestrator | changed: [testbed-node-4 -> testbed-node-2(192.168.16.12)] 2026-03-25 05:50:24.251410 | orchestrator | 2026-03-25 05:50:24.251428 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-03-25 05:50:24.251438 | orchestrator | Wednesday 25 March 2026 05:50:23 +0000 (0:00:04.030) 0:42:40.422 ******* 2026-03-25 05:50:24.251455 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-03-25 05:51:05.592211 | orchestrator | 2026-03-25 05:51:05.592328 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-03-25 05:51:05.592345 | orchestrator | Wednesday 25 March 2026 05:50:24 +0000 (0:00:00.836) 0:42:41.258 ******* 2026-03-25 05:51:05.592360 | 
orchestrator | changed: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}]) 2026-03-25 05:51:05.592374 | orchestrator | changed: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}]) 2026-03-25 05:51:05.592387 | orchestrator | 2026-03-25 05:51:05.592398 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-03-25 05:51:05.592409 | orchestrator | Wednesday 25 March 2026 05:50:31 +0000 (0:00:07.347) 0:42:48.606 ******* 2026-03-25 05:51:05.592483 | orchestrator | skipping: [testbed-node-4] 2026-03-25 05:51:05.592497 | orchestrator | 2026-03-25 05:51:05.592508 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-03-25 05:51:05.592519 | orchestrator | Wednesday 25 March 2026 05:50:32 +0000 (0:00:00.817) 0:42:49.423 ******* 2026-03-25 05:51:05.592530 | orchestrator | skipping: [testbed-node-4] 2026-03-25 05:51:05.592541 | orchestrator | 2026-03-25 05:51:05.592584 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-25 05:51:05.592598 | orchestrator | Wednesday 25 March 2026 05:50:33 +0000 (0:00:00.869) 0:42:50.292 ******* 2026-03-25 05:51:05.592609 | orchestrator | skipping: [testbed-node-4] 2026-03-25 05:51:05.592620 | orchestrator | 2026-03-25 05:51:05.592631 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to 
radosgw_address_block ipv4] **** 2026-03-25 05:51:05.592642 | orchestrator | Wednesday 25 March 2026 05:50:34 +0000 (0:00:00.808) 0:42:51.100 ******* 2026-03-25 05:51:05.592652 | orchestrator | skipping: [testbed-node-4] 2026-03-25 05:51:05.592664 | orchestrator | 2026-03-25 05:51:05.592683 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-03-25 05:51:05.592702 | orchestrator | Wednesday 25 March 2026 05:50:34 +0000 (0:00:00.836) 0:42:51.937 ******* 2026-03-25 05:51:05.592717 | orchestrator | skipping: [testbed-node-4] 2026-03-25 05:51:05.592729 | orchestrator | 2026-03-25 05:51:05.592742 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-25 05:51:05.592756 | orchestrator | Wednesday 25 March 2026 05:50:35 +0000 (0:00:00.841) 0:42:52.779 ******* 2026-03-25 05:51:05.592768 | orchestrator | ok: [testbed-node-4] 2026-03-25 05:51:05.592782 | orchestrator | 2026-03-25 05:51:05.592794 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-25 05:51:05.592807 | orchestrator | Wednesday 25 March 2026 05:50:36 +0000 (0:00:00.888) 0:42:53.668 ******* 2026-03-25 05:51:05.592820 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-03-25 05:51:05.592833 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-03-25 05:51:05.592846 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-03-25 05:51:05.592858 | orchestrator | skipping: [testbed-node-4] 2026-03-25 05:51:05.592870 | orchestrator | 2026-03-25 05:51:05.592883 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-25 05:51:05.592895 | orchestrator | Wednesday 25 March 2026 05:50:37 +0000 (0:00:01.120) 0:42:54.789 ******* 2026-03-25 05:51:05.592929 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-03-25 05:51:05.592942 | 
orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-03-25 05:51:05.592955 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-03-25 05:51:05.592967 | orchestrator | skipping: [testbed-node-4] 2026-03-25 05:51:05.592980 | orchestrator | 2026-03-25 05:51:05.592992 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-25 05:51:05.593005 | orchestrator | Wednesday 25 March 2026 05:50:38 +0000 (0:00:01.091) 0:42:55.880 ******* 2026-03-25 05:51:05.593017 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-03-25 05:51:05.593030 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-03-25 05:51:05.593100 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-03-25 05:51:05.593113 | orchestrator | skipping: [testbed-node-4] 2026-03-25 05:51:05.593124 | orchestrator | 2026-03-25 05:51:05.593135 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-25 05:51:05.593146 | orchestrator | Wednesday 25 March 2026 05:50:39 +0000 (0:00:01.117) 0:42:56.997 ******* 2026-03-25 05:51:05.593157 | orchestrator | ok: [testbed-node-4] 2026-03-25 05:51:05.593168 | orchestrator | 2026-03-25 05:51:05.593178 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-25 05:51:05.593189 | orchestrator | Wednesday 25 March 2026 05:50:40 +0000 (0:00:00.827) 0:42:57.825 ******* 2026-03-25 05:51:05.593200 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-03-25 05:51:05.593211 | orchestrator | 2026-03-25 05:51:05.593222 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-03-25 05:51:05.593233 | orchestrator | Wednesday 25 March 2026 05:50:41 +0000 (0:00:01.021) 0:42:58.846 ******* 2026-03-25 05:51:05.593244 | orchestrator | ok: [testbed-node-4] 2026-03-25 05:51:05.593255 | orchestrator | 
2026-03-25 05:51:05.593266 | orchestrator | TASK [ceph-osd : Set_fact add_osd] ********************************************* 2026-03-25 05:51:05.593277 | orchestrator | Wednesday 25 March 2026 05:50:43 +0000 (0:00:01.524) 0:43:00.371 ******* 2026-03-25 05:51:05.593288 | orchestrator | ok: [testbed-node-4] 2026-03-25 05:51:05.593299 | orchestrator | 2026-03-25 05:51:05.593329 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] ********************************** 2026-03-25 05:51:05.593340 | orchestrator | Wednesday 25 March 2026 05:50:44 +0000 (0:00:00.841) 0:43:01.213 ******* 2026-03-25 05:51:05.593351 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-25 05:51:05.593363 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-25 05:51:05.593374 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-25 05:51:05.593385 | orchestrator | 2026-03-25 05:51:05.593396 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ****************************** 2026-03-25 05:51:05.593407 | orchestrator | Wednesday 25 March 2026 05:50:45 +0000 (0:00:01.402) 0:43:02.615 ******* 2026-03-25 05:51:05.593417 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-4 2026-03-25 05:51:05.593428 | orchestrator | 2026-03-25 05:51:05.593439 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] ********************************** 2026-03-25 05:51:05.593450 | orchestrator | Wednesday 25 March 2026 05:50:46 +0000 (0:00:01.148) 0:43:03.764 ******* 2026-03-25 05:51:05.593461 | orchestrator | skipping: [testbed-node-4] 2026-03-25 05:51:05.593472 | orchestrator | 2026-03-25 05:51:05.593483 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] ********************************* 2026-03-25 05:51:05.593494 | orchestrator | Wednesday 25 March 2026 05:50:47 +0000 (0:00:01.153) 
0:43:04.918 ******* 2026-03-25 05:51:05.593505 | orchestrator | skipping: [testbed-node-4] 2026-03-25 05:51:05.593515 | orchestrator | 2026-03-25 05:51:05.593526 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] ******************************* 2026-03-25 05:51:05.593537 | orchestrator | Wednesday 25 March 2026 05:50:49 +0000 (0:00:01.166) 0:43:06.084 ******* 2026-03-25 05:51:05.593557 | orchestrator | ok: [testbed-node-4] 2026-03-25 05:51:05.593568 | orchestrator | 2026-03-25 05:51:05.593585 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] ********************************** 2026-03-25 05:51:05.593596 | orchestrator | Wednesday 25 March 2026 05:50:50 +0000 (0:00:01.419) 0:43:07.503 ******* 2026-03-25 05:51:05.593607 | orchestrator | ok: [testbed-node-4] 2026-03-25 05:51:05.593618 | orchestrator | 2026-03-25 05:51:05.593629 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ******************************** 2026-03-25 05:51:05.593639 | orchestrator | Wednesday 25 March 2026 05:50:51 +0000 (0:00:01.140) 0:43:08.644 ******* 2026-03-25 05:51:05.593650 | orchestrator | ok: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-03-25 05:51:05.593662 | orchestrator | ok: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-03-25 05:51:05.593673 | orchestrator | ok: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-03-25 05:51:05.593684 | orchestrator | ok: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-03-25 05:51:05.593695 | orchestrator | ok: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-03-25 05:51:05.593705 | orchestrator | 2026-03-25 05:51:05.593716 | orchestrator | TASK [ceph-osd : Install dependencies] ***************************************** 2026-03-25 05:51:05.593727 | orchestrator | Wednesday 25 March 2026 05:50:54 +0000 (0:00:02.502) 0:43:11.147 ******* 2026-03-25 
05:51:05.593738 | orchestrator | skipping: [testbed-node-4] 2026-03-25 05:51:05.593749 | orchestrator | 2026-03-25 05:51:05.593759 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] ************************************* 2026-03-25 05:51:05.593770 | orchestrator | Wednesday 25 March 2026 05:50:54 +0000 (0:00:00.768) 0:43:11.915 ******* 2026-03-25 05:51:05.593781 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-4 2026-03-25 05:51:05.593792 | orchestrator | 2026-03-25 05:51:05.593803 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] ********************* 2026-03-25 05:51:05.593813 | orchestrator | Wednesday 25 March 2026 05:50:56 +0000 (0:00:01.186) 0:43:13.102 ******* 2026-03-25 05:51:05.593824 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/) 2026-03-25 05:51:05.593835 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/) 2026-03-25 05:51:05.593846 | orchestrator | 2026-03-25 05:51:05.593857 | orchestrator | TASK [ceph-osd : Get keys from monitors] *************************************** 2026-03-25 05:51:05.593868 | orchestrator | Wednesday 25 March 2026 05:50:57 +0000 (0:00:01.789) 0:43:14.892 ******* 2026-03-25 05:51:05.593879 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-25 05:51:05.593890 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-03-25 05:51:05.593901 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-25 05:51:05.593912 | orchestrator | 2026-03-25 05:51:05.593922 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] *********************************** 2026-03-25 05:51:05.593933 | orchestrator | Wednesday 25 March 2026 05:51:01 +0000 (0:00:03.506) 0:43:18.398 ******* 2026-03-25 05:51:05.593944 | orchestrator | ok: [testbed-node-4] => (item=None) 2026-03-25 05:51:05.593955 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-03-25 
05:51:05.593966 | orchestrator | ok: [testbed-node-4] 2026-03-25 05:51:05.593977 | orchestrator | 2026-03-25 05:51:05.593988 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************ 2026-03-25 05:51:05.593998 | orchestrator | Wednesday 25 March 2026 05:51:03 +0000 (0:00:01.690) 0:43:20.088 ******* 2026-03-25 05:51:05.594009 | orchestrator | skipping: [testbed-node-4] 2026-03-25 05:51:05.594122 | orchestrator | 2026-03-25 05:51:05.594143 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ****************************** 2026-03-25 05:51:05.594155 | orchestrator | Wednesday 25 March 2026 05:51:04 +0000 (0:00:00.928) 0:43:21.021 ******* 2026-03-25 05:51:05.594165 | orchestrator | skipping: [testbed-node-4] 2026-03-25 05:51:05.594176 | orchestrator | 2026-03-25 05:51:05.594187 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2026-03-25 05:51:05.594212 | orchestrator | Wednesday 25 March 2026 05:51:04 +0000 (0:00:00.767) 0:43:21.789 ******* 2026-03-25 05:51:05.594223 | orchestrator | skipping: [testbed-node-4] 2026-03-25 05:51:05.594234 | orchestrator | 2026-03-25 05:51:05.594252 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2026-03-25 05:52:09.869231 | orchestrator | Wednesday 25 March 2026 05:51:05 +0000 (0:00:00.806) 0:43:22.595 ******* 2026-03-25 05:52:09.869350 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-4 2026-03-25 05:52:09.869367 | orchestrator | 2026-03-25 05:52:09.869380 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2026-03-25 05:52:09.869391 | orchestrator | Wednesday 25 March 2026 05:51:06 +0000 (0:00:01.110) 0:43:23.705 ******* 2026-03-25 05:52:09.869402 | orchestrator | ok: [testbed-node-4] 2026-03-25 05:52:09.869414 | orchestrator | 2026-03-25 05:52:09.869425 | orchestrator | TASK [ceph-osd : Collect osd 
ids] ********************************************** 2026-03-25 05:52:09.869437 | orchestrator | Wednesday 25 March 2026 05:51:08 +0000 (0:00:01.547) 0:43:25.253 ******* 2026-03-25 05:52:09.869448 | orchestrator | ok: [testbed-node-4] 2026-03-25 05:52:09.869459 | orchestrator | 2026-03-25 05:52:09.869470 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************ 2026-03-25 05:52:09.869481 | orchestrator | Wednesday 25 March 2026 05:51:11 +0000 (0:00:03.528) 0:43:28.782 ******* 2026-03-25 05:52:09.869492 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-4 2026-03-25 05:52:09.869503 | orchestrator | 2026-03-25 05:52:09.869514 | orchestrator | TASK [ceph-osd : Generate systemd unit file] *********************************** 2026-03-25 05:52:09.869524 | orchestrator | Wednesday 25 March 2026 05:51:12 +0000 (0:00:01.105) 0:43:29.888 ******* 2026-03-25 05:52:09.869535 | orchestrator | ok: [testbed-node-4] 2026-03-25 05:52:09.869546 | orchestrator | 2026-03-25 05:52:09.869557 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2026-03-25 05:52:09.869567 | orchestrator | Wednesday 25 March 2026 05:51:14 +0000 (0:00:01.965) 0:43:31.853 ******* 2026-03-25 05:52:09.869578 | orchestrator | ok: [testbed-node-4] 2026-03-25 05:52:09.869589 | orchestrator | 2026-03-25 05:52:09.869600 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] *************************************** 2026-03-25 05:52:09.869626 | orchestrator | Wednesday 25 March 2026 05:51:16 +0000 (0:00:01.927) 0:43:33.781 ******* 2026-03-25 05:52:09.869638 | orchestrator | ok: [testbed-node-4] 2026-03-25 05:52:09.869649 | orchestrator | 2026-03-25 05:52:09.869660 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2026-03-25 05:52:09.869671 | orchestrator | Wednesday 25 March 2026 05:51:19 +0000 (0:00:02.246) 0:43:36.028 ******* 2026-03-25 
05:52:09.869682 | orchestrator | skipping: [testbed-node-4] 2026-03-25 05:52:09.869694 | orchestrator | 2026-03-25 05:52:09.869704 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] *********************** 2026-03-25 05:52:09.869715 | orchestrator | Wednesday 25 March 2026 05:51:20 +0000 (0:00:01.188) 0:43:37.217 ******* 2026-03-25 05:52:09.869726 | orchestrator | skipping: [testbed-node-4] 2026-03-25 05:52:09.869737 | orchestrator | 2026-03-25 05:52:09.869748 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2026-03-25 05:52:09.869759 | orchestrator | Wednesday 25 March 2026 05:51:21 +0000 (0:00:01.151) 0:43:38.369 ******* 2026-03-25 05:52:09.869772 | orchestrator | ok: [testbed-node-4] => (item=4) 2026-03-25 05:52:09.869785 | orchestrator | ok: [testbed-node-4] => (item=1) 2026-03-25 05:52:09.869797 | orchestrator | 2026-03-25 05:52:09.869810 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2026-03-25 05:52:09.869822 | orchestrator | Wednesday 25 March 2026 05:51:23 +0000 (0:00:01.889) 0:43:40.259 ******* 2026-03-25 05:52:09.869835 | orchestrator | ok: [testbed-node-4] => (item=4) 2026-03-25 05:52:09.869848 | orchestrator | ok: [testbed-node-4] => (item=1) 2026-03-25 05:52:09.869860 | orchestrator | 2026-03-25 05:52:09.869874 | orchestrator | TASK [ceph-osd : Systemd start osd] ******************************************** 2026-03-25 05:52:09.869912 | orchestrator | Wednesday 25 March 2026 05:51:26 +0000 (0:00:02.913) 0:43:43.173 ******* 2026-03-25 05:52:09.869924 | orchestrator | changed: [testbed-node-4] => (item=4) 2026-03-25 05:52:09.869938 | orchestrator | changed: [testbed-node-4] => (item=1) 2026-03-25 05:52:09.869950 | orchestrator | 2026-03-25 05:52:09.869963 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2026-03-25 05:52:09.869976 | orchestrator | Wednesday 25 March 2026 05:51:30 +0000 (0:00:04.166) 
0:43:47.340 ******* 2026-03-25 05:52:09.869989 | orchestrator | skipping: [testbed-node-4] 2026-03-25 05:52:09.870087 | orchestrator | 2026-03-25 05:52:09.870103 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************ 2026-03-25 05:52:09.870116 | orchestrator | Wednesday 25 March 2026 05:51:31 +0000 (0:00:00.946) 0:43:48.286 ******* 2026-03-25 05:52:09.870128 | orchestrator | skipping: [testbed-node-4] 2026-03-25 05:52:09.870141 | orchestrator | 2026-03-25 05:52:09.870153 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2026-03-25 05:52:09.870164 | orchestrator | Wednesday 25 March 2026 05:51:32 +0000 (0:00:00.897) 0:43:49.184 ******* 2026-03-25 05:52:09.870175 | orchestrator | skipping: [testbed-node-4] 2026-03-25 05:52:09.870186 | orchestrator | 2026-03-25 05:52:09.870197 | orchestrator | TASK [Scan ceph-disk osds with ceph-volume if deploying nautilus] ************** 2026-03-25 05:52:09.870207 | orchestrator | Wednesday 25 March 2026 05:51:33 +0000 (0:00:00.923) 0:43:50.108 ******* 2026-03-25 05:52:09.870218 | orchestrator | skipping: [testbed-node-4] 2026-03-25 05:52:09.870229 | orchestrator | 2026-03-25 05:52:09.870240 | orchestrator | TASK [Activate scanned ceph-disk osds and migrate to ceph-volume if deploying nautilus] *** 2026-03-25 05:52:09.870250 | orchestrator | Wednesday 25 March 2026 05:51:33 +0000 (0:00:00.819) 0:43:50.928 ******* 2026-03-25 05:52:09.870261 | orchestrator | skipping: [testbed-node-4] 2026-03-25 05:52:09.870272 | orchestrator | 2026-03-25 05:52:09.870283 | orchestrator | TASK [Waiting for clean pgs...] ************************************************ 2026-03-25 05:52:09.870293 | orchestrator | Wednesday 25 March 2026 05:51:34 +0000 (0:00:00.796) 0:43:51.725 ******* 2026-03-25 05:52:09.870304 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for clean pgs... (600 retries left). 
2026-03-25 05:52:09.870316 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for clean pgs... (599 retries left). 2026-03-25 05:52:09.870327 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for clean pgs... (598 retries left). 2026-03-25 05:52:09.870358 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for clean pgs... (597 retries left). 2026-03-25 05:52:09.870369 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-03-25 05:52:09.870380 | orchestrator | 2026-03-25 05:52:09.870391 | orchestrator | PLAY [Upgrade ceph osds cluster] *********************************************** 2026-03-25 05:52:09.870402 | orchestrator | 2026-03-25 05:52:09.870413 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-03-25 05:52:09.870424 | orchestrator | Wednesday 25 March 2026 05:51:48 +0000 (0:00:14.204) 0:44:05.930 ******* 2026-03-25 05:52:09.870435 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-5 2026-03-25 05:52:09.870446 | orchestrator | 2026-03-25 05:52:09.870457 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-03-25 05:52:09.870467 | orchestrator | Wednesday 25 March 2026 05:51:50 +0000 (0:00:01.232) 0:44:07.162 ******* 2026-03-25 05:52:09.870478 | orchestrator | ok: [testbed-node-5] 2026-03-25 05:52:09.870490 | orchestrator | 2026-03-25 05:52:09.870500 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-03-25 05:52:09.870511 | orchestrator | Wednesday 25 March 2026 05:51:51 +0000 (0:00:01.568) 0:44:08.730 ******* 2026-03-25 05:52:09.870522 | orchestrator | ok: [testbed-node-5] 2026-03-25 05:52:09.870533 | orchestrator | 2026-03-25 05:52:09.870544 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-03-25 05:52:09.870555 | 
orchestrator | Wednesday 25 March 2026 05:51:52 +0000 (0:00:01.147) 0:44:09.878 ******* 2026-03-25 05:52:09.870575 | orchestrator | ok: [testbed-node-5] 2026-03-25 05:52:09.870586 | orchestrator | 2026-03-25 05:52:09.870597 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-03-25 05:52:09.870608 | orchestrator | Wednesday 25 March 2026 05:51:54 +0000 (0:00:01.480) 0:44:11.358 ******* 2026-03-25 05:52:09.870619 | orchestrator | ok: [testbed-node-5] 2026-03-25 05:52:09.870630 | orchestrator | 2026-03-25 05:52:09.870647 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-03-25 05:52:09.870658 | orchestrator | Wednesday 25 March 2026 05:51:55 +0000 (0:00:01.176) 0:44:12.535 ******* 2026-03-25 05:52:09.870669 | orchestrator | ok: [testbed-node-5] 2026-03-25 05:52:09.870680 | orchestrator | 2026-03-25 05:52:09.870690 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-03-25 05:52:09.870701 | orchestrator | Wednesday 25 March 2026 05:51:56 +0000 (0:00:01.166) 0:44:13.701 ******* 2026-03-25 05:52:09.870711 | orchestrator | ok: [testbed-node-5] 2026-03-25 05:52:09.870722 | orchestrator | 2026-03-25 05:52:09.870733 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-03-25 05:52:09.870744 | orchestrator | Wednesday 25 March 2026 05:51:57 +0000 (0:00:01.147) 0:44:14.848 ******* 2026-03-25 05:52:09.870754 | orchestrator | skipping: [testbed-node-5] 2026-03-25 05:52:09.870765 | orchestrator | 2026-03-25 05:52:09.870776 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-03-25 05:52:09.870787 | orchestrator | Wednesday 25 March 2026 05:51:58 +0000 (0:00:01.134) 0:44:15.983 ******* 2026-03-25 05:52:09.870798 | orchestrator | ok: [testbed-node-5] 2026-03-25 05:52:09.870809 | orchestrator | 2026-03-25 05:52:09.870819 | 
orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-03-25 05:52:09.870830 | orchestrator | Wednesday 25 March 2026 05:52:00 +0000 (0:00:01.134) 0:44:17.118 ******* 2026-03-25 05:52:09.870841 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-25 05:52:09.870851 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-25 05:52:09.870862 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-25 05:52:09.870873 | orchestrator | 2026-03-25 05:52:09.870883 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-03-25 05:52:09.870894 | orchestrator | Wednesday 25 March 2026 05:52:02 +0000 (0:00:02.093) 0:44:19.212 ******* 2026-03-25 05:52:09.870905 | orchestrator | ok: [testbed-node-5] 2026-03-25 05:52:09.870915 | orchestrator | 2026-03-25 05:52:09.870926 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-03-25 05:52:09.870937 | orchestrator | Wednesday 25 March 2026 05:52:03 +0000 (0:00:01.255) 0:44:20.467 ******* 2026-03-25 05:52:09.870948 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-25 05:52:09.870958 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-25 05:52:09.870969 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-25 05:52:09.870979 | orchestrator | 2026-03-25 05:52:09.870990 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-03-25 05:52:09.871018 | orchestrator | Wednesday 25 March 2026 05:52:06 +0000 (0:00:03.285) 0:44:23.752 ******* 2026-03-25 05:52:09.871030 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-03-25 05:52:09.871041 | 
orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-03-25 05:52:09.871052 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-03-25 05:52:09.871063 | orchestrator | skipping: [testbed-node-5] 2026-03-25 05:52:09.871073 | orchestrator | 2026-03-25 05:52:09.871084 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-03-25 05:52:09.871095 | orchestrator | Wednesday 25 March 2026 05:52:08 +0000 (0:00:01.478) 0:44:25.231 ******* 2026-03-25 05:52:09.871108 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-03-25 05:52:09.871135 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-03-25 05:52:30.218876 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-03-25 05:52:30.219042 | orchestrator | skipping: [testbed-node-5] 2026-03-25 05:52:30.219062 | orchestrator | 2026-03-25 05:52:30.219075 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-03-25 05:52:30.219087 | orchestrator | Wednesday 25 March 2026 05:52:09 +0000 (0:00:01.642) 0:44:26.874 ******* 2026-03-25 05:52:30.219101 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-25 05:52:30.219135 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-25 05:52:30.219146 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-25 05:52:30.219158 | orchestrator | skipping: [testbed-node-5] 2026-03-25 05:52:30.219169 | orchestrator | 2026-03-25 05:52:30.219180 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-03-25 05:52:30.219192 | orchestrator | Wednesday 25 March 2026 05:52:11 +0000 (0:00:01.184) 0:44:28.059 ******* 2026-03-25 05:52:30.219204 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': 'f2f4f0f2e000', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-25 05:52:04.379124', 'end': '2026-03-25 05:52:04.417172', 'delta': '0:00:00.038048', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 
'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['f2f4f0f2e000'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-03-25 05:52:30.219219 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': '04618a84c691', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-25 05:52:04.950040', 'end': '2026-03-25 05:52:05.007749', 'delta': '0:00:00.057709', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['04618a84c691'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-03-25 05:52:30.219272 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': 'da72f46e99c2', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-25 05:52:05.522212', 'end': '2026-03-25 05:52:05.564735', 'delta': '0:00:00.042523', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['da72f46e99c2'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-03-25 05:52:30.219285 | orchestrator | 2026-03-25 05:52:30.219296 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 
2026-03-25 05:52:30.219307 | orchestrator | Wednesday 25 March 2026 05:52:12 +0000 (0:00:01.237) 0:44:29.296 ******* 2026-03-25 05:52:30.219318 | orchestrator | ok: [testbed-node-5] 2026-03-25 05:52:30.219330 | orchestrator | 2026-03-25 05:52:30.219341 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-03-25 05:52:30.219352 | orchestrator | Wednesday 25 March 2026 05:52:13 +0000 (0:00:01.436) 0:44:30.733 ******* 2026-03-25 05:52:30.219362 | orchestrator | skipping: [testbed-node-5] 2026-03-25 05:52:30.219373 | orchestrator | 2026-03-25 05:52:30.219384 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-03-25 05:52:30.219395 | orchestrator | Wednesday 25 March 2026 05:52:14 +0000 (0:00:01.279) 0:44:32.012 ******* 2026-03-25 05:52:30.219405 | orchestrator | ok: [testbed-node-5] 2026-03-25 05:52:30.219418 | orchestrator | 2026-03-25 05:52:30.219430 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-03-25 05:52:30.219442 | orchestrator | Wednesday 25 March 2026 05:52:16 +0000 (0:00:01.142) 0:44:33.155 ******* 2026-03-25 05:52:30.219454 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-03-25 05:52:30.219466 | orchestrator | 2026-03-25 05:52:30.219478 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-25 05:52:30.219491 | orchestrator | Wednesday 25 March 2026 05:52:18 +0000 (0:00:01.966) 0:44:35.122 ******* 2026-03-25 05:52:30.219503 | orchestrator | ok: [testbed-node-5] 2026-03-25 05:52:30.219515 | orchestrator | 2026-03-25 05:52:30.219528 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-03-25 05:52:30.219546 | orchestrator | Wednesday 25 March 2026 05:52:19 +0000 (0:00:01.131) 0:44:36.253 ******* 2026-03-25 05:52:30.219558 | orchestrator | skipping: [testbed-node-5] 2026-03-25 
05:52:30.219674 | orchestrator | 2026-03-25 05:52:30.219689 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-03-25 05:52:30.219703 | orchestrator | Wednesday 25 March 2026 05:52:20 +0000 (0:00:01.103) 0:44:37.357 ******* 2026-03-25 05:52:30.219715 | orchestrator | skipping: [testbed-node-5] 2026-03-25 05:52:30.219727 | orchestrator | 2026-03-25 05:52:30.219740 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-25 05:52:30.219752 | orchestrator | Wednesday 25 March 2026 05:52:21 +0000 (0:00:01.216) 0:44:38.574 ******* 2026-03-25 05:52:30.219765 | orchestrator | skipping: [testbed-node-5] 2026-03-25 05:52:30.219777 | orchestrator | 2026-03-25 05:52:30.219787 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-03-25 05:52:30.219798 | orchestrator | Wednesday 25 March 2026 05:52:22 +0000 (0:00:01.142) 0:44:39.717 ******* 2026-03-25 05:52:30.219809 | orchestrator | skipping: [testbed-node-5] 2026-03-25 05:52:30.219820 | orchestrator | 2026-03-25 05:52:30.219831 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-03-25 05:52:30.219842 | orchestrator | Wednesday 25 March 2026 05:52:23 +0000 (0:00:01.137) 0:44:40.854 ******* 2026-03-25 05:52:30.219863 | orchestrator | ok: [testbed-node-5] 2026-03-25 05:52:30.219874 | orchestrator | 2026-03-25 05:52:30.219885 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-03-25 05:52:30.219896 | orchestrator | Wednesday 25 March 2026 05:52:25 +0000 (0:00:01.314) 0:44:42.168 ******* 2026-03-25 05:52:30.219907 | orchestrator | skipping: [testbed-node-5] 2026-03-25 05:52:30.219918 | orchestrator | 2026-03-25 05:52:30.219928 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-03-25 05:52:30.219939 | orchestrator | Wednesday 25 March 
2026 05:52:26 +0000 (0:00:01.218) 0:44:43.387 ******* 2026-03-25 05:52:30.219950 | orchestrator | ok: [testbed-node-5] 2026-03-25 05:52:30.219961 | orchestrator | 2026-03-25 05:52:30.219972 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-03-25 05:52:30.219982 | orchestrator | Wednesday 25 March 2026 05:52:27 +0000 (0:00:01.243) 0:44:44.630 ******* 2026-03-25 05:52:30.220027 | orchestrator | skipping: [testbed-node-5] 2026-03-25 05:52:30.220040 | orchestrator | 2026-03-25 05:52:30.220051 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-03-25 05:52:30.220063 | orchestrator | Wednesday 25 March 2026 05:52:28 +0000 (0:00:01.149) 0:44:45.780 ******* 2026-03-25 05:52:30.220073 | orchestrator | ok: [testbed-node-5] 2026-03-25 05:52:30.220084 | orchestrator | 2026-03-25 05:52:30.220095 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-03-25 05:52:30.220105 | orchestrator | Wednesday 25 March 2026 05:52:29 +0000 (0:00:01.203) 0:44:46.983 ******* 2026-03-25 05:52:30.220117 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-25 05:52:30.220139 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--8ec576d5--4336--523a--896e--5358117b2269-osd--block--8ec576d5--4336--523a--896e--5358117b2269', 'dm-uuid-LVM-AjTepPC9YBwKeu38Jf1R7NGMBGxHD64b1bYlOV1jbrUHbIYS3hAMWkKb5QrnOpnI'], 'uuids': ['e67f6cc7-d6f8-4138-9e65-f811c858cad0'], 'labels': [], 'masters': ['dm-3']}, 
'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'fd5367dc', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['1bYlOV-1jbr-UHbI-YS3h-AMWk-Kb5Q-rnOpnI']}})  2026-03-25 05:52:30.223310 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_82545a3e-e213-461e-98f1-90cf18f03519', 'scsi-SQEMU_QEMU_HARDDISK_82545a3e-e213-461e-98f1-90cf18f03519'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '82545a3e', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-03-25 05:52:30.223346 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-to62r3-CyRH-TR4y-N8rR-DKBC-8SUV-NrvEkE', 'scsi-0QEMU_QEMU_HARDDISK_04cbe055-706b-4644-9107-d77d79be5a29', 'scsi-SQEMU_QEMU_HARDDISK_04cbe055-706b-4644-9107-d77d79be5a29'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '04cbe055', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--f303e98e--56ea--50bc--9e1c--3ccda4672060-osd--block--f303e98e--56ea--50bc--9e1c--3ccda4672060']}})  2026-03-25 05:52:30.223370 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-25 05:52:30.223383 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-25 05:52:30.223395 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-25-01-43-03-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-03-25 05:52:30.223407 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 
'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-25 05:52:30.223419 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-UiFeyH-JNag-Huqx-rmYC-APg3-v2oc-gFP63X', 'dm-uuid-CRYPT-LUKS2-306c9f3fcb174ac6ad8e271da2bf30e2-UiFeyH-JNag-Huqx-rmYC-APg3-v2oc-gFP63X'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-03-25 05:52:30.223440 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-25 05:52:30.223462 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--f303e98e--56ea--50bc--9e1c--3ccda4672060-osd--block--f303e98e--56ea--50bc--9e1c--3ccda4672060', 'dm-uuid-LVM-UU9fet4LjPs1QLROYR3DS61lWfbcudTJUiFeyHJNagHuqxrmYCAPg3v2ocgFP63X'], 'uuids': ['306c9f3f-cb17-4ac6-ad8e-271da2bf30e2'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '04cbe055', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['UiFeyH-JNag-Huqx-rmYC-APg3-v2oc-gFP63X']}})  2026-03-25 05:52:30.223489 | orchestrator | skipping: 
[testbed-node-5] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-FUT1Bq-riIG-e3wV-m2Zc-DHH8-HB53-ximoP3', 'scsi-0QEMU_QEMU_HARDDISK_fd5367dc-993e-4d7d-b2a6-757e2a17e9b7', 'scsi-SQEMU_QEMU_HARDDISK_fd5367dc-993e-4d7d-b2a6-757e2a17e9b7'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'fd5367dc', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--8ec576d5--4336--523a--896e--5358117b2269-osd--block--8ec576d5--4336--523a--896e--5358117b2269']}})  2026-03-25 05:52:30.223522 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-25 05:52:30.223568 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0ceb4511-88da-4cb0-8dd1-61d4a7cc2ad2', 'scsi-SQEMU_QEMU_HARDDISK_0ceb4511-88da-4cb0-8dd1-61d4a7cc2ad2'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '0ceb4511', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0ceb4511-88da-4cb0-8dd1-61d4a7cc2ad2-part16', 'scsi-SQEMU_QEMU_HARDDISK_0ceb4511-88da-4cb0-8dd1-61d4a7cc2ad2-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': 
'227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0ceb4511-88da-4cb0-8dd1-61d4a7cc2ad2-part14', 'scsi-SQEMU_QEMU_HARDDISK_0ceb4511-88da-4cb0-8dd1-61d4a7cc2ad2-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0ceb4511-88da-4cb0-8dd1-61d4a7cc2ad2-part15', 'scsi-SQEMU_QEMU_HARDDISK_0ceb4511-88da-4cb0-8dd1-61d4a7cc2ad2-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0ceb4511-88da-4cb0-8dd1-61d4a7cc2ad2-part1', 'scsi-SQEMU_QEMU_HARDDISK_0ceb4511-88da-4cb0-8dd1-61d4a7cc2ad2-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-03-25 05:52:31.627074 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-25 05:52:31.627179 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-25 05:52:31.627237 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-1bYlOV-1jbr-UHbI-YS3h-AMWk-Kb5Q-rnOpnI', 'dm-uuid-CRYPT-LUKS2-e67f6cc7d6f841389e65f811c858cad0-1bYlOV-1jbr-UHbI-YS3h-AMWk-Kb5Q-rnOpnI'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-03-25 05:52:31.627254 | orchestrator | skipping: [testbed-node-5] 2026-03-25 05:52:31.627267 | orchestrator | 2026-03-25 05:52:31.627279 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-03-25 05:52:31.627291 | orchestrator | Wednesday 25 March 2026 05:52:31 +0000 (0:00:01.416) 0:44:48.400 ******* 2026-03-25 05:52:31.627303 | orchestrator | skipping: [testbed-node-5] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 05:52:31.627316 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--8ec576d5--4336--523a--896e--5358117b2269-osd--block--8ec576d5--4336--523a--896e--5358117b2269', 'dm-uuid-LVM-AjTepPC9YBwKeu38Jf1R7NGMBGxHD64b1bYlOV1jbrUHbIYS3hAMWkKb5QrnOpnI'], 'uuids': ['e67f6cc7-d6f8-4138-9e65-f811c858cad0'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'fd5367dc', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['1bYlOV-1jbr-UHbI-YS3h-AMWk-Kb5Q-rnOpnI']}}, 'ansible_loop_var': 'item'})  2026-03-25 05:52:31.627330 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_82545a3e-e213-461e-98f1-90cf18f03519', 'scsi-SQEMU_QEMU_HARDDISK_82545a3e-e213-461e-98f1-90cf18f03519'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 
'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '82545a3e', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 05:52:31.627361 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-to62r3-CyRH-TR4y-N8rR-DKBC-8SUV-NrvEkE', 'scsi-0QEMU_QEMU_HARDDISK_04cbe055-706b-4644-9107-d77d79be5a29', 'scsi-SQEMU_QEMU_HARDDISK_04cbe055-706b-4644-9107-d77d79be5a29'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '04cbe055', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--f303e98e--56ea--50bc--9e1c--3ccda4672060-osd--block--f303e98e--56ea--50bc--9e1c--3ccda4672060']}}, 'ansible_loop_var': 'item'})  2026-03-25 05:52:31.627390 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 05:52:31.627402 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 05:52:31.627415 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-25-01-43-03-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 05:52:31.627426 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 05:52:31.627445 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-UiFeyH-JNag-Huqx-rmYC-APg3-v2oc-gFP63X', 'dm-uuid-CRYPT-LUKS2-306c9f3fcb174ac6ad8e271da2bf30e2-UiFeyH-JNag-Huqx-rmYC-APg3-v2oc-gFP63X'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 05:52:37.011757 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 05:52:37.011913 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--f303e98e--56ea--50bc--9e1c--3ccda4672060-osd--block--f303e98e--56ea--50bc--9e1c--3ccda4672060', 'dm-uuid-LVM-UU9fet4LjPs1QLROYR3DS61lWfbcudTJUiFeyHJNagHuqxrmYCAPg3v2ocgFP63X'], 'uuids': ['306c9f3f-cb17-4ac6-ad8e-271da2bf30e2'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '04cbe055', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['UiFeyH-JNag-Huqx-rmYC-APg3-v2oc-gFP63X']}}, 'ansible_loop_var': 'item'})  2026-03-25 05:52:37.011932 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-FUT1Bq-riIG-e3wV-m2Zc-DHH8-HB53-ximoP3', 'scsi-0QEMU_QEMU_HARDDISK_fd5367dc-993e-4d7d-b2a6-757e2a17e9b7', 'scsi-SQEMU_QEMU_HARDDISK_fd5367dc-993e-4d7d-b2a6-757e2a17e9b7'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'fd5367dc', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--8ec576d5--4336--523a--896e--5358117b2269-osd--block--8ec576d5--4336--523a--896e--5358117b2269']}}, 'ansible_loop_var': 'item'})  2026-03-25 05:52:37.011947 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 05:52:37.012043 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0ceb4511-88da-4cb0-8dd1-61d4a7cc2ad2', 'scsi-SQEMU_QEMU_HARDDISK_0ceb4511-88da-4cb0-8dd1-61d4a7cc2ad2'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '0ceb4511', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0ceb4511-88da-4cb0-8dd1-61d4a7cc2ad2-part16', 'scsi-SQEMU_QEMU_HARDDISK_0ceb4511-88da-4cb0-8dd1-61d4a7cc2ad2-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_0ceb4511-88da-4cb0-8dd1-61d4a7cc2ad2-part14', 'scsi-SQEMU_QEMU_HARDDISK_0ceb4511-88da-4cb0-8dd1-61d4a7cc2ad2-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0ceb4511-88da-4cb0-8dd1-61d4a7cc2ad2-part15', 'scsi-SQEMU_QEMU_HARDDISK_0ceb4511-88da-4cb0-8dd1-61d4a7cc2ad2-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0ceb4511-88da-4cb0-8dd1-61d4a7cc2ad2-part1', 'scsi-SQEMU_QEMU_HARDDISK_0ceb4511-88da-4cb0-8dd1-61d4a7cc2ad2-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 05:52:37.012070 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 05:52:37.012082 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 05:52:37.012094 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-1bYlOV-1jbr-UHbI-YS3h-AMWk-Kb5Q-rnOpnI', 'dm-uuid-CRYPT-LUKS2-e67f6cc7d6f841389e65f811c858cad0-1bYlOV-1jbr-UHbI-YS3h-AMWk-Kb5Q-rnOpnI'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 05:52:37.012107 | orchestrator | skipping: [testbed-node-5] 2026-03-25 05:52:37.012120 | orchestrator | 2026-03-25 05:52:37.012132 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-03-25 05:52:37.012144 | orchestrator | Wednesday 25 March 2026 05:52:32 +0000 (0:00:01.410) 0:44:49.811 ******* 2026-03-25 05:52:37.012155 | orchestrator | ok: [testbed-node-5] 2026-03-25 05:52:37.012168 | orchestrator | 2026-03-25 05:52:37.012180 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-03-25 05:52:37.012190 | orchestrator | Wednesday 25 March 2026 05:52:34 +0000 (0:00:01.516) 0:44:51.327 ******* 2026-03-25 05:52:37.012201 | orchestrator | ok: [testbed-node-5] 2026-03-25 05:52:37.012212 | orchestrator | 2026-03-25 05:52:37.012229 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-25 05:52:37.012240 | orchestrator | Wednesday 25 March 2026 05:52:35 +0000 (0:00:01.184) 0:44:52.512 ******* 2026-03-25 05:52:37.012251 | orchestrator | ok: [testbed-node-5] 2026-03-25 05:52:37.012262 | orchestrator | 2026-03-25 05:52:37.012273 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-25 05:52:37.012291 | orchestrator | Wednesday 25 March 2026 05:52:37 +0000 (0:00:01.505) 0:44:54.018 ******* 2026-03-25 05:53:21.185461 | orchestrator | skipping: [testbed-node-5] 2026-03-25 05:53:21.185575 | orchestrator | 2026-03-25 05:53:21.185591 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-25 05:53:21.185604 | orchestrator | Wednesday 25 March 2026 05:52:38 +0000 (0:00:01.154) 0:44:55.172 ******* 2026-03-25 05:53:21.185614 | orchestrator | skipping: [testbed-node-5] 2026-03-25 
05:53:21.185624 | orchestrator | 2026-03-25 05:53:21.185635 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-25 05:53:21.185645 | orchestrator | Wednesday 25 March 2026 05:52:39 +0000 (0:00:01.243) 0:44:56.416 ******* 2026-03-25 05:53:21.185654 | orchestrator | skipping: [testbed-node-5] 2026-03-25 05:53:21.185664 | orchestrator | 2026-03-25 05:53:21.185674 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-03-25 05:53:21.185683 | orchestrator | Wednesday 25 March 2026 05:52:40 +0000 (0:00:01.171) 0:44:57.587 ******* 2026-03-25 05:53:21.185694 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-03-25 05:53:21.185704 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-03-25 05:53:21.185714 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-03-25 05:53:21.185724 | orchestrator | 2026-03-25 05:53:21.185750 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-03-25 05:53:21.185760 | orchestrator | Wednesday 25 March 2026 05:52:42 +0000 (0:00:02.184) 0:44:59.772 ******* 2026-03-25 05:53:21.185770 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-03-25 05:53:21.185780 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-03-25 05:53:21.185790 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-03-25 05:53:21.185800 | orchestrator | skipping: [testbed-node-5] 2026-03-25 05:53:21.185810 | orchestrator | 2026-03-25 05:53:21.185819 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-03-25 05:53:21.185829 | orchestrator | Wednesday 25 March 2026 05:52:43 +0000 (0:00:01.169) 0:45:00.942 ******* 2026-03-25 05:53:21.185839 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-5 2026-03-25 05:53:21.185850 | 
orchestrator | 2026-03-25 05:53:21.185860 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-25 05:53:21.185872 | orchestrator | Wednesday 25 March 2026 05:52:45 +0000 (0:00:01.135) 0:45:02.077 ******* 2026-03-25 05:53:21.185881 | orchestrator | skipping: [testbed-node-5] 2026-03-25 05:53:21.185891 | orchestrator | 2026-03-25 05:53:21.185901 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-03-25 05:53:21.185911 | orchestrator | Wednesday 25 March 2026 05:52:46 +0000 (0:00:01.234) 0:45:03.312 ******* 2026-03-25 05:53:21.185921 | orchestrator | skipping: [testbed-node-5] 2026-03-25 05:53:21.185931 | orchestrator | 2026-03-25 05:53:21.185941 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-03-25 05:53:21.185950 | orchestrator | Wednesday 25 March 2026 05:52:47 +0000 (0:00:01.184) 0:45:04.497 ******* 2026-03-25 05:53:21.185960 | orchestrator | skipping: [testbed-node-5] 2026-03-25 05:53:21.185991 | orchestrator | 2026-03-25 05:53:21.186003 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-25 05:53:21.186014 | orchestrator | Wednesday 25 March 2026 05:52:48 +0000 (0:00:01.198) 0:45:05.696 ******* 2026-03-25 05:53:21.186082 | orchestrator | ok: [testbed-node-5] 2026-03-25 05:53:21.186093 | orchestrator | 2026-03-25 05:53:21.186104 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-25 05:53:21.186141 | orchestrator | Wednesday 25 March 2026 05:52:49 +0000 (0:00:01.298) 0:45:06.994 ******* 2026-03-25 05:53:21.186185 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-03-25 05:53:21.186197 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-03-25 05:53:21.186208 | orchestrator | skipping: [testbed-node-5] 
=> (item=testbed-node-5)  2026-03-25 05:53:21.186219 | orchestrator | skipping: [testbed-node-5] 2026-03-25 05:53:21.186230 | orchestrator | 2026-03-25 05:53:21.186241 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-25 05:53:21.186251 | orchestrator | Wednesday 25 March 2026 05:52:51 +0000 (0:00:01.423) 0:45:08.418 ******* 2026-03-25 05:53:21.186262 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-03-25 05:53:21.186273 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-03-25 05:53:21.186284 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-03-25 05:53:21.186295 | orchestrator | skipping: [testbed-node-5] 2026-03-25 05:53:21.186305 | orchestrator | 2026-03-25 05:53:21.186316 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-25 05:53:21.186327 | orchestrator | Wednesday 25 March 2026 05:52:52 +0000 (0:00:01.458) 0:45:09.876 ******* 2026-03-25 05:53:21.186337 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-03-25 05:53:21.186347 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-03-25 05:53:21.186356 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-03-25 05:53:21.186366 | orchestrator | skipping: [testbed-node-5] 2026-03-25 05:53:21.186375 | orchestrator | 2026-03-25 05:53:21.186385 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-25 05:53:21.186394 | orchestrator | Wednesday 25 March 2026 05:52:54 +0000 (0:00:01.454) 0:45:11.331 ******* 2026-03-25 05:53:21.186404 | orchestrator | ok: [testbed-node-5] 2026-03-25 05:53:21.186413 | orchestrator | 2026-03-25 05:53:21.186423 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-25 05:53:21.186433 | orchestrator | Wednesday 25 March 2026 05:52:55 +0000 
(0:00:01.146) 0:45:12.478 *******
2026-03-25 05:53:21.186442 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-03-25 05:53:21.186452 | orchestrator |
2026-03-25 05:53:21.186461 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-03-25 05:53:21.186472 | orchestrator | Wednesday 25 March 2026 05:52:57 +0000 (0:00:01.747) 0:45:14.225 *******
2026-03-25 05:53:21.186500 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-25 05:53:21.186511 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-25 05:53:21.186520 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-25 05:53:21.186530 | orchestrator | ok: [testbed-node-5 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-03-25 05:53:21.186539 | orchestrator | ok: [testbed-node-5 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-03-25 05:53:21.186549 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-5)
2026-03-25 05:53:21.186558 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-03-25 05:53:21.186568 | orchestrator |
2026-03-25 05:53:21.186578 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-03-25 05:53:21.186587 | orchestrator | Wednesday 25 March 2026 05:52:59 +0000 (0:00:02.262) 0:45:16.488 *******
2026-03-25 05:53:21.186597 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-25 05:53:21.186612 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-25 05:53:21.186622 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-25 05:53:21.186631 | orchestrator | ok: [testbed-node-5 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-03-25 05:53:21.186648 | orchestrator | ok: [testbed-node-5 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-03-25 05:53:21.186658 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-5)
2026-03-25 05:53:21.186668 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-03-25 05:53:21.186677 | orchestrator |
2026-03-25 05:53:21.186687 | orchestrator | TASK [Get osd numbers - non container] *****************************************
2026-03-25 05:53:21.186696 | orchestrator | Wednesday 25 March 2026 05:53:01 +0000 (0:00:02.281) 0:45:18.770 *******
2026-03-25 05:53:21.186706 | orchestrator | ok: [testbed-node-5]
2026-03-25 05:53:21.186715 | orchestrator |
2026-03-25 05:53:21.186725 | orchestrator | TASK [Set num_osds] ************************************************************
2026-03-25 05:53:21.186735 | orchestrator | Wednesday 25 March 2026 05:53:02 +0000 (0:00:01.134) 0:45:19.905 *******
2026-03-25 05:53:21.186744 | orchestrator | ok: [testbed-node-5]
2026-03-25 05:53:21.186754 | orchestrator |
2026-03-25 05:53:21.186763 | orchestrator | TASK [Set_fact container_exec_cmd_osd] *****************************************
2026-03-25 05:53:21.186773 | orchestrator | Wednesday 25 March 2026 05:53:03 +0000 (0:00:00.814) 0:45:20.720 *******
2026-03-25 05:53:21.186782 | orchestrator | ok: [testbed-node-5]
2026-03-25 05:53:21.186792 | orchestrator |
2026-03-25 05:53:21.186801 | orchestrator | TASK [Stop ceph osd] ***********************************************************
2026-03-25 05:53:21.186811 | orchestrator | Wednesday 25 March 2026 05:53:04 +0000 (0:00:00.894) 0:45:21.614 *******
2026-03-25 05:53:21.186820 | orchestrator | changed: [testbed-node-5] => (item=2)
2026-03-25 05:53:21.186830 | orchestrator | changed: [testbed-node-5] => (item=3)
2026-03-25 05:53:21.186839 | orchestrator |
2026-03-25 05:53:21.186849 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-03-25 05:53:21.186859 | orchestrator | Wednesday 25 March 2026 05:53:09 +0000 (0:00:04.861) 0:45:26.476 *******
2026-03-25 05:53:21.186868 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-5
2026-03-25 05:53:21.186878 | orchestrator |
2026-03-25 05:53:21.186888 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-03-25 05:53:21.186897 | orchestrator | Wednesday 25 March 2026 05:53:10 +0000 (0:00:01.148) 0:45:27.625 *******
2026-03-25 05:53:21.186907 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-5
2026-03-25 05:53:21.186916 | orchestrator |
2026-03-25 05:53:21.186926 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-03-25 05:53:21.186936 | orchestrator | Wednesday 25 March 2026 05:53:11 +0000 (0:00:01.193) 0:45:28.819 *******
2026-03-25 05:53:21.186945 | orchestrator | skipping: [testbed-node-5]
2026-03-25 05:53:21.186954 | orchestrator |
2026-03-25 05:53:21.186991 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-03-25 05:53:21.187002 | orchestrator | Wednesday 25 March 2026 05:53:12 +0000 (0:00:01.168) 0:45:29.987 *******
2026-03-25 05:53:21.187011 | orchestrator | ok: [testbed-node-5]
2026-03-25 05:53:21.187021 | orchestrator |
2026-03-25 05:53:21.187031 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-03-25 05:53:21.187041 | orchestrator | Wednesday 25 March 2026 05:53:14 +0000 (0:00:01.538) 0:45:31.526 *******
2026-03-25 05:53:21.187050 | orchestrator | ok: [testbed-node-5]
2026-03-25 05:53:21.187060 | orchestrator |
2026-03-25 05:53:21.187069 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-03-25 05:53:21.187079 | orchestrator | Wednesday 25 March 2026 05:53:16 +0000 (0:00:01.580) 0:45:33.107 *******
2026-03-25 05:53:21.187089 | orchestrator | ok: [testbed-node-5]
2026-03-25 05:53:21.187098 | orchestrator |
2026-03-25 05:53:21.187108 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-03-25 05:53:21.187117 | orchestrator | Wednesday 25 March 2026 05:53:17 +0000 (0:00:01.585) 0:45:34.693 *******
2026-03-25 05:53:21.187127 | orchestrator | skipping: [testbed-node-5]
2026-03-25 05:53:21.187136 | orchestrator |
2026-03-25 05:53:21.187146 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-03-25 05:53:21.187162 | orchestrator | Wednesday 25 March 2026 05:53:18 +0000 (0:00:01.136) 0:45:35.830 *******
2026-03-25 05:53:21.187171 | orchestrator | skipping: [testbed-node-5]
2026-03-25 05:53:21.187181 | orchestrator |
2026-03-25 05:53:21.187190 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-03-25 05:53:21.187200 | orchestrator | Wednesday 25 March 2026 05:53:19 +0000 (0:00:01.173) 0:45:37.004 *******
2026-03-25 05:53:21.187210 | orchestrator | skipping: [testbed-node-5]
2026-03-25 05:53:21.187219 | orchestrator |
2026-03-25 05:53:21.187235 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-03-25 05:54:01.571897 | orchestrator | Wednesday 25 March 2026 05:53:21 +0000 (0:00:01.186) 0:45:38.190 *******
2026-03-25 05:54:01.572062 | orchestrator | ok: [testbed-node-5]
2026-03-25 05:54:01.572087 | orchestrator |
2026-03-25 05:54:01.572106 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-03-25 05:54:01.572123 | orchestrator | Wednesday 25 March 2026 05:53:22 +0000 (0:00:01.564) 0:45:39.755 *******
2026-03-25 05:54:01.572140 | orchestrator | ok: [testbed-node-5]
2026-03-25 05:54:01.572157 | orchestrator |
2026-03-25 05:54:01.572175 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-03-25 05:54:01.572194 | orchestrator | Wednesday 25 March 2026 05:53:24 +0000 (0:00:01.530) 0:45:41.286 *******
2026-03-25 05:54:01.572213 | orchestrator | skipping: [testbed-node-5]
2026-03-25 05:54:01.572233 | orchestrator |
2026-03-25 05:54:01.572253 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-03-25 05:54:01.572272 | orchestrator | Wednesday 25 March 2026 05:53:25 +0000 (0:00:00.796) 0:45:42.083 *******
2026-03-25 05:54:01.572291 | orchestrator | skipping: [testbed-node-5]
2026-03-25 05:54:01.572309 | orchestrator |
2026-03-25 05:54:01.572351 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-03-25 05:54:01.572372 | orchestrator | Wednesday 25 March 2026 05:53:25 +0000 (0:00:00.840) 0:45:42.924 *******
2026-03-25 05:54:01.572392 | orchestrator | ok: [testbed-node-5]
2026-03-25 05:54:01.572412 | orchestrator |
2026-03-25 05:54:01.572429 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-03-25 05:54:01.572450 | orchestrator | Wednesday 25 March 2026 05:53:26 +0000 (0:00:00.873) 0:45:43.797 *******
2026-03-25 05:54:01.572469 | orchestrator | ok: [testbed-node-5]
2026-03-25 05:54:01.572487 | orchestrator |
2026-03-25 05:54:01.572505 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-03-25 05:54:01.572524 | orchestrator | Wednesday 25 March 2026 05:53:27 +0000 (0:00:00.823) 0:45:44.621 *******
2026-03-25 05:54:01.572543 | orchestrator | ok: [testbed-node-5]
2026-03-25 05:54:01.572563 | orchestrator |
2026-03-25 05:54:01.572583 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-03-25 05:54:01.572603 | orchestrator | Wednesday 25 March 2026 05:53:28 +0000 (0:00:00.804) 0:45:45.425 *******
2026-03-25 05:54:01.572621 | orchestrator | skipping: [testbed-node-5]
2026-03-25 05:54:01.572639 | orchestrator |
2026-03-25 05:54:01.572658 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-03-25 05:54:01.572677 | orchestrator | Wednesday 25 March 2026 05:53:29 +0000 (0:00:00.817) 0:45:46.243 *******
2026-03-25 05:54:01.572696 | orchestrator | skipping: [testbed-node-5]
2026-03-25 05:54:01.572715 | orchestrator |
2026-03-25 05:54:01.572734 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-03-25 05:54:01.572753 | orchestrator | Wednesday 25 March 2026 05:53:30 +0000 (0:00:00.877) 0:45:47.120 *******
2026-03-25 05:54:01.572771 | orchestrator | skipping: [testbed-node-5]
2026-03-25 05:54:01.572790 | orchestrator |
2026-03-25 05:54:01.572808 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-03-25 05:54:01.572827 | orchestrator | Wednesday 25 March 2026 05:53:30 +0000 (0:00:00.779) 0:45:47.900 *******
2026-03-25 05:54:01.572845 | orchestrator | ok: [testbed-node-5]
2026-03-25 05:54:01.572864 | orchestrator |
2026-03-25 05:54:01.572884 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-03-25 05:54:01.572938 | orchestrator | Wednesday 25 March 2026 05:53:31 +0000 (0:00:00.805) 0:45:48.706 *******
2026-03-25 05:54:01.573004 | orchestrator | ok: [testbed-node-5]
2026-03-25 05:54:01.573025 | orchestrator |
2026-03-25 05:54:01.573043 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-03-25 05:54:01.573062 | orchestrator | Wednesday 25 March 2026 05:53:32 +0000 (0:00:00.801) 0:45:49.508 *******
2026-03-25 05:54:01.573080 | orchestrator | skipping: [testbed-node-5]
2026-03-25 05:54:01.573098 | orchestrator |
2026-03-25 05:54:01.573116 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-03-25 05:54:01.573135 | orchestrator | Wednesday 25 March 2026 05:53:33 +0000 (0:00:00.782) 0:45:50.290 *******
2026-03-25 05:54:01.573154 | orchestrator | skipping: [testbed-node-5]
2026-03-25 05:54:01.573173 | orchestrator |
2026-03-25 05:54:01.573192 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-03-25 05:54:01.573211 | orchestrator | Wednesday 25 March 2026 05:53:34 +0000 (0:00:00.816) 0:45:51.107 *******
2026-03-25 05:54:01.573230 | orchestrator | skipping: [testbed-node-5]
2026-03-25 05:54:01.573249 | orchestrator |
2026-03-25 05:54:01.573269 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-03-25 05:54:01.573287 | orchestrator | Wednesday 25 March 2026 05:53:34 +0000 (0:00:00.765) 0:45:51.873 *******
2026-03-25 05:54:01.573306 | orchestrator | skipping: [testbed-node-5]
2026-03-25 05:54:01.573325 | orchestrator |
2026-03-25 05:54:01.573345 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-03-25 05:54:01.573363 | orchestrator | Wednesday 25 March 2026 05:53:35 +0000 (0:00:00.764) 0:45:52.638 *******
2026-03-25 05:54:01.573383 | orchestrator | skipping: [testbed-node-5]
2026-03-25 05:54:01.573402 | orchestrator |
2026-03-25 05:54:01.573422 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-03-25 05:54:01.573442 | orchestrator | Wednesday 25 March 2026 05:53:36 +0000 (0:00:00.774) 0:45:53.412 *******
2026-03-25 05:54:01.573461 | orchestrator | skipping: [testbed-node-5]
2026-03-25 05:54:01.573481 | orchestrator |
2026-03-25 05:54:01.573499 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-03-25 05:54:01.573518 | orchestrator | Wednesday 25 March 2026 05:53:37 +0000 (0:00:00.779) 0:45:54.191 *******
2026-03-25 05:54:01.573536 | orchestrator | skipping: [testbed-node-5]
2026-03-25 05:54:01.573554 | orchestrator |
2026-03-25 05:54:01.573572 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-03-25 05:54:01.573592 | orchestrator | Wednesday 25 March 2026 05:53:37 +0000 (0:00:00.748) 0:45:54.940 *******
2026-03-25 05:54:01.573612 | orchestrator | skipping: [testbed-node-5]
2026-03-25 05:54:01.573631 | orchestrator |
2026-03-25 05:54:01.573650 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-03-25 05:54:01.573668 | orchestrator | Wednesday 25 March 2026 05:53:38 +0000 (0:00:00.765) 0:45:55.705 *******
2026-03-25 05:54:01.573716 | orchestrator | skipping: [testbed-node-5]
2026-03-25 05:54:01.573736 | orchestrator |
2026-03-25 05:54:01.573755 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-03-25 05:54:01.573776 | orchestrator | Wednesday 25 March 2026 05:53:39 +0000 (0:00:00.782) 0:45:56.488 *******
2026-03-25 05:54:01.573795 | orchestrator | skipping: [testbed-node-5]
2026-03-25 05:54:01.573815 | orchestrator |
2026-03-25 05:54:01.573833 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-03-25 05:54:01.573853 | orchestrator | Wednesday 25 March 2026 05:53:40 +0000 (0:00:00.854) 0:45:57.344 *******
2026-03-25 05:54:01.573873 | orchestrator | skipping: [testbed-node-5]
2026-03-25 05:54:01.573892 | orchestrator |
2026-03-25 05:54:01.573911 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-03-25 05:54:01.573931 | orchestrator | Wednesday 25 March 2026 05:53:41 +0000 (0:00:00.760) 0:45:58.104 *******
2026-03-25 05:54:01.573989 | orchestrator | skipping: [testbed-node-5]
2026-03-25 05:54:01.574010 | orchestrator |
2026-03-25 05:54:01.574118 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-03-25 05:54:01.574167 | orchestrator | Wednesday 25 March 2026 05:53:41 +0000 (0:00:00.784) 0:45:58.889 *******
2026-03-25 05:54:01.574187 | orchestrator | ok: [testbed-node-5]
2026-03-25 05:54:01.574208 | orchestrator |
2026-03-25 05:54:01.574227 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-03-25 05:54:01.574246 | orchestrator | Wednesday 25 March 2026 05:53:43 +0000 (0:00:01.612) 0:46:00.502 *******
2026-03-25 05:54:01.574266 | orchestrator | ok: [testbed-node-5]
2026-03-25 05:54:01.574284 | orchestrator |
2026-03-25 05:54:01.574304 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-03-25 05:54:01.574322 | orchestrator | Wednesday 25 March 2026 05:53:45 +0000 (0:00:01.917) 0:46:02.420 *******
2026-03-25 05:54:01.574341 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-5
2026-03-25 05:54:01.574361 | orchestrator |
2026-03-25 05:54:01.574380 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-03-25 05:54:01.574400 | orchestrator | Wednesday 25 March 2026 05:53:46 +0000 (0:00:01.137) 0:46:03.557 *******
2026-03-25 05:54:01.574419 | orchestrator | skipping: [testbed-node-5]
2026-03-25 05:54:01.574439 | orchestrator |
2026-03-25 05:54:01.574458 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-03-25 05:54:01.574478 | orchestrator | Wednesday 25 March 2026 05:53:47 +0000 (0:00:01.229) 0:46:04.786 *******
2026-03-25 05:54:01.574498 | orchestrator | skipping: [testbed-node-5]
2026-03-25 05:54:01.574517 | orchestrator |
2026-03-25 05:54:01.574537 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-03-25 05:54:01.574555 | orchestrator | Wednesday 25 March 2026 05:53:48 +0000 (0:00:01.148) 0:46:05.935 *******
2026-03-25 05:54:01.574574 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-25 05:54:01.574593 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-25 05:54:01.574612 | orchestrator |
2026-03-25 05:54:01.574632 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-03-25 05:54:01.574650 | orchestrator | Wednesday 25 March 2026 05:53:50 +0000 (0:00:01.826) 0:46:07.762 *******
2026-03-25 05:54:01.574670 | orchestrator | ok: [testbed-node-5]
2026-03-25 05:54:01.574690 | orchestrator |
2026-03-25 05:54:01.574709 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-03-25 05:54:01.574727 | orchestrator | Wednesday 25 March 2026 05:53:52 +0000 (0:00:01.519) 0:46:09.281 *******
2026-03-25 05:54:01.574747 | orchestrator | skipping: [testbed-node-5]
2026-03-25 05:54:01.574766 | orchestrator |
2026-03-25 05:54:01.574785 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-03-25 05:54:01.574805 | orchestrator | Wednesday 25 March 2026 05:53:53 +0000 (0:00:01.153) 0:46:10.435 *******
2026-03-25 05:54:01.574824 | orchestrator | skipping: [testbed-node-5]
2026-03-25 05:54:01.574843 | orchestrator |
2026-03-25 05:54:01.574864 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-03-25 05:54:01.574883 | orchestrator | Wednesday 25 March 2026 05:53:54 +0000 (0:00:00.896) 0:46:11.332 *******
2026-03-25 05:54:01.574902 | orchestrator | skipping: [testbed-node-5]
2026-03-25 05:54:01.574922 | orchestrator |
2026-03-25 05:54:01.574941 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-03-25 05:54:01.575039 | orchestrator | Wednesday 25 March 2026 05:53:55 +0000 (0:00:00.803) 0:46:12.135 *******
2026-03-25 05:54:01.575058 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-5
2026-03-25 05:54:01.575076 | orchestrator |
2026-03-25 05:54:01.575095 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-03-25 05:54:01.575114 | orchestrator | Wednesday 25 March 2026 05:53:56 +0000 (0:00:01.162) 0:46:13.297 *******
2026-03-25 05:54:01.575131 | orchestrator | ok: [testbed-node-5]
2026-03-25 05:54:01.575150 | orchestrator |
2026-03-25 05:54:01.575169 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-03-25 05:54:01.575203 | orchestrator | Wednesday 25 March 2026 05:53:57 +0000 (0:00:01.718) 0:46:15.016 *******
2026-03-25 05:54:01.575222 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-03-25 05:54:01.575240 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)
2026-03-25 05:54:01.575257 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)
2026-03-25 05:54:01.575276 | orchestrator | skipping: [testbed-node-5]
2026-03-25 05:54:01.575293 | orchestrator |
2026-03-25 05:54:01.575311 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-03-25 05:54:01.575329 | orchestrator | Wednesday 25 March 2026 05:53:59 +0000 (0:00:01.211) 0:46:16.227 *******
2026-03-25 05:54:01.575347 | orchestrator | skipping: [testbed-node-5]
2026-03-25 05:54:01.575365 | orchestrator |
2026-03-25 05:54:01.575383 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-03-25 05:54:01.575401 | orchestrator | Wednesday 25 March 2026 05:54:00 +0000 (0:00:01.170) 0:46:17.397 *******
2026-03-25 05:54:01.575438 | orchestrator | skipping: [testbed-node-5]
2026-03-25 05:54:44.677512 | orchestrator |
2026-03-25 05:54:44.677631 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-03-25 05:54:44.677648 | orchestrator | Wednesday 25 March 2026 05:54:01 +0000 (0:00:01.183) 0:46:18.581 *******
2026-03-25 05:54:44.677660 | orchestrator | skipping: [testbed-node-5]
2026-03-25 05:54:44.677672 | orchestrator |
2026-03-25 05:54:44.677684 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-03-25 05:54:44.677695 | orchestrator | Wednesday 25 March 2026 05:54:02 +0000 (0:00:01.166) 0:46:19.747 *******
2026-03-25 05:54:44.677706 | orchestrator | skipping: [testbed-node-5]
2026-03-25 05:54:44.677717 | orchestrator |
2026-03-25 05:54:44.677728 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-03-25 05:54:44.677739 | orchestrator | Wednesday 25 March 2026 05:54:03 +0000 (0:00:01.150) 0:46:20.898 *******
2026-03-25 05:54:44.677750 | orchestrator | skipping: [testbed-node-5]
2026-03-25 05:54:44.677761 | orchestrator |
2026-03-25 05:54:44.677772 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-03-25 05:54:44.677801 | orchestrator | Wednesday 25 March 2026 05:54:04 +0000 (0:00:00.802) 0:46:21.700 *******
2026-03-25 05:54:44.677813 | orchestrator | ok: [testbed-node-5]
2026-03-25 05:54:44.677825 | orchestrator |
2026-03-25 05:54:44.677836 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-03-25 05:54:44.677847 | orchestrator | Wednesday 25 March 2026 05:54:06 +0000 (0:00:02.139) 0:46:23.840 *******
2026-03-25 05:54:44.677858 | orchestrator | ok: [testbed-node-5]
2026-03-25 05:54:44.677869 | orchestrator |
2026-03-25 05:54:44.677881 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-03-25 05:54:44.677892 | orchestrator | Wednesday 25 March 2026 05:54:07 +0000 (0:00:00.787) 0:46:24.628 *******
2026-03-25 05:54:44.677903 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-5
2026-03-25 05:54:44.677914 | orchestrator |
2026-03-25 05:54:44.677925 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-03-25 05:54:44.677975 | orchestrator | Wednesday 25 March 2026 05:54:08 +0000 (0:00:01.292) 0:46:25.920 *******
2026-03-25 05:54:44.677994 | orchestrator | skipping: [testbed-node-5]
2026-03-25 05:54:44.678011 | orchestrator |
2026-03-25 05:54:44.678113 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-03-25 05:54:44.678130 | orchestrator | Wednesday 25 March 2026 05:54:10 +0000 (0:00:01.153) 0:46:27.073 *******
2026-03-25 05:54:44.678175 | orchestrator | skipping: [testbed-node-5]
2026-03-25 05:54:44.678188 | orchestrator |
2026-03-25 05:54:44.678201 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-03-25 05:54:44.678213 | orchestrator | Wednesday 25 March 2026 05:54:11 +0000 (0:00:01.164) 0:46:28.238 *******
2026-03-25 05:54:44.678226 | orchestrator | skipping: [testbed-node-5]
2026-03-25 05:54:44.678260 | orchestrator |
2026-03-25 05:54:44.678273 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-03-25 05:54:44.678285 | orchestrator | Wednesday 25 March 2026 05:54:12 +0000 (0:00:01.161) 0:46:29.400 *******
2026-03-25 05:54:44.678297 | orchestrator | skipping: [testbed-node-5]
2026-03-25 05:54:44.678309 | orchestrator |
2026-03-25 05:54:44.678321 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-03-25 05:54:44.678334 | orchestrator | Wednesday 25 March 2026 05:54:13 +0000 (0:00:01.251) 0:46:30.651 *******
2026-03-25 05:54:44.678346 | orchestrator | skipping: [testbed-node-5]
2026-03-25 05:54:44.678358 | orchestrator |
2026-03-25 05:54:44.678370 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-03-25 05:54:44.678382 | orchestrator | Wednesday 25 March 2026 05:54:14 +0000 (0:00:01.220) 0:46:31.871 *******
2026-03-25 05:54:44.678394 | orchestrator | skipping: [testbed-node-5]
2026-03-25 05:54:44.678406 | orchestrator |
2026-03-25 05:54:44.678419 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-03-25 05:54:44.678431 | orchestrator | Wednesday 25 March 2026 05:54:16 +0000 (0:00:01.158) 0:46:33.029 *******
2026-03-25 05:54:44.678441 | orchestrator | skipping: [testbed-node-5]
2026-03-25 05:54:44.678452 | orchestrator |
2026-03-25 05:54:44.678462 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-03-25 05:54:44.678473 | orchestrator | Wednesday 25 March 2026 05:54:17 +0000 (0:00:01.184) 0:46:34.214 *******
2026-03-25 05:54:44.678484 | orchestrator | skipping: [testbed-node-5]
2026-03-25 05:54:44.678494 | orchestrator |
2026-03-25 05:54:44.678505 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-03-25 05:54:44.678515 | orchestrator | Wednesday 25 March 2026 05:54:18 +0000 (0:00:01.182) 0:46:35.396 *******
2026-03-25 05:54:44.678526 | orchestrator | ok: [testbed-node-5]
2026-03-25 05:54:44.678537 | orchestrator |
2026-03-25 05:54:44.678547 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-03-25 05:54:44.678558 | orchestrator | Wednesday 25 March 2026 05:54:19 +0000 (0:00:00.805) 0:46:36.202 *******
2026-03-25 05:54:44.678568 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-5
2026-03-25 05:54:44.678580 | orchestrator |
2026-03-25 05:54:44.678591 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-03-25 05:54:44.678601 | orchestrator | Wednesday 25 March 2026 05:54:20 +0000 (0:00:01.257) 0:46:37.460 *******
2026-03-25 05:54:44.678612 | orchestrator | ok: [testbed-node-5] => (item=/etc/ceph)
2026-03-25 05:54:44.678623 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/)
2026-03-25 05:54:44.678633 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mon)
2026-03-25 05:54:44.678644 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd)
2026-03-25 05:54:44.678654 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mds)
2026-03-25 05:54:44.678665 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/tmp)
2026-03-25 05:54:44.678676 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/crash)
2026-03-25 05:54:44.678686 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/radosgw)
2026-03-25 05:54:44.678697 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-25 05:54:44.678727 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-25 05:54:44.678738 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-25 05:54:44.678749 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-25 05:54:44.678760 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-25 05:54:44.678770 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-25 05:54:44.678781 | orchestrator | ok: [testbed-node-5] => (item=/var/run/ceph)
2026-03-25 05:54:44.678791 | orchestrator | ok: [testbed-node-5] => (item=/var/log/ceph)
2026-03-25 05:54:44.678801 | orchestrator |
2026-03-25 05:54:44.678812 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-03-25 05:54:44.678831 | orchestrator | Wednesday 25 March 2026 05:54:26 +0000 (0:00:06.244) 0:46:43.704 *******
2026-03-25 05:54:44.678842 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-5
2026-03-25 05:54:44.678852 | orchestrator |
2026-03-25 05:54:44.678870 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2026-03-25 05:54:44.678881 | orchestrator | Wednesday 25 March 2026 05:54:27 +0000 (0:00:01.135) 0:46:44.840 *******
2026-03-25 05:54:44.678892 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-03-25 05:54:44.678903 | orchestrator |
2026-03-25 05:54:44.678914 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2026-03-25 05:54:44.678924 | orchestrator | Wednesday 25 March 2026 05:54:29 +0000 (0:00:01.476) 0:46:46.316 *******
2026-03-25 05:54:44.678966 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-03-25 05:54:44.678978 | orchestrator |
2026-03-25 05:54:44.678988 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-03-25 05:54:44.678999 | orchestrator | Wednesday 25 March 2026 05:54:30 +0000 (0:00:01.621) 0:46:47.938 *******
2026-03-25 05:54:44.679009 | orchestrator | skipping: [testbed-node-5]
2026-03-25 05:54:44.679020 | orchestrator |
2026-03-25 05:54:44.679033 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-03-25 05:54:44.679138 | orchestrator | Wednesday 25 March 2026 05:54:31 +0000 (0:00:00.866) 0:46:48.804 *******
2026-03-25 05:54:44.679162 | orchestrator | skipping: [testbed-node-5]
2026-03-25 05:54:44.679174 | orchestrator |
2026-03-25 05:54:44.679184 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-03-25 05:54:44.679195 | orchestrator | Wednesday 25 March 2026 05:54:32 +0000 (0:00:00.790) 0:46:49.595 *******
2026-03-25 05:54:44.679206 | orchestrator | skipping: [testbed-node-5]
2026-03-25 05:54:44.679217 | orchestrator |
2026-03-25 05:54:44.679228 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-03-25 05:54:44.679238 | orchestrator | Wednesday 25 March 2026 05:54:33 +0000 (0:00:00.789) 0:46:50.385 *******
2026-03-25 05:54:44.679249 | orchestrator | skipping: [testbed-node-5]
2026-03-25 05:54:44.679259 | orchestrator |
2026-03-25 05:54:44.679270 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-03-25 05:54:44.679281 | orchestrator | Wednesday 25 March 2026 05:54:34 +0000 (0:00:00.776) 0:46:51.162 *******
2026-03-25 05:54:44.679291 | orchestrator | skipping: [testbed-node-5]
2026-03-25 05:54:44.679302 | orchestrator |
2026-03-25 05:54:44.679313 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-03-25 05:54:44.679323 | orchestrator | Wednesday 25 March 2026 05:54:34 +0000 (0:00:00.800) 0:46:51.962 *******
2026-03-25 05:54:44.679334 | orchestrator | skipping: [testbed-node-5]
2026-03-25 05:54:44.679344 | orchestrator |
2026-03-25 05:54:44.679355 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-03-25 05:54:44.679366 | orchestrator | Wednesday 25 March 2026 05:54:35 +0000 (0:00:00.809) 0:46:52.771 *******
2026-03-25 05:54:44.679376 | orchestrator | skipping: [testbed-node-5]
2026-03-25 05:54:44.679387 | orchestrator |
2026-03-25 05:54:44.679398 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-03-25 05:54:44.679408 | orchestrator | Wednesday 25 March 2026 05:54:36 +0000 (0:00:00.777) 0:46:53.549 *******
2026-03-25 05:54:44.679419 | orchestrator | skipping: [testbed-node-5]
2026-03-25 05:54:44.679430 | orchestrator |
2026-03-25 05:54:44.679440 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-03-25 05:54:44.679451 | orchestrator | Wednesday 25 March 2026 05:54:37 +0000 (0:00:00.849) 0:46:54.398 *******
2026-03-25 05:54:44.679461 | orchestrator | skipping: [testbed-node-5]
2026-03-25 05:54:44.679483 | orchestrator |
2026-03-25 05:54:44.679494 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-03-25 05:54:44.679504 | orchestrator | Wednesday 25 March 2026 05:54:38 +0000 (0:00:00.811) 0:46:55.210 *******
2026-03-25 05:54:44.679515 | orchestrator | skipping: [testbed-node-5]
2026-03-25 05:54:44.679526 | orchestrator |
2026-03-25 05:54:44.679536 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-03-25 05:54:44.679547 | orchestrator | Wednesday 25 March 2026 05:54:38 +0000 (0:00:00.793) 0:46:56.004 *******
2026-03-25 05:54:44.679557 | orchestrator | ok: [testbed-node-5]
2026-03-25 05:54:44.679568 | orchestrator |
2026-03-25 05:54:44.679579 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-03-25 05:54:44.679589 | orchestrator | Wednesday 25 March 2026 05:54:39 +0000 (0:00:00.833) 0:46:56.838 *******
2026-03-25 05:54:44.679600 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)]
2026-03-25 05:54:44.679610 | orchestrator |
2026-03-25 05:54:44.679621 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-03-25 05:54:44.679631 | orchestrator | Wednesday 25 March 2026 05:54:43 +0000 (0:00:04.020) 0:47:00.858 *******
2026-03-25 05:54:44.679652 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-03-25 05:55:26.796801 | orchestrator |
2026-03-25 05:55:26.796970 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-03-25 05:55:26.796991 | orchestrator | Wednesday 25 March 2026 05:54:44 +0000 (0:00:00.826) 0:47:01.684 *******
2026-03-25 05:55:26.797005 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])
2026-03-25 05:55:26.797036 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])
2026-03-25 05:55:26.797049 | orchestrator |
2026-03-25 05:55:26.797060 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-03-25 05:55:26.797072 | orchestrator | Wednesday 25 March 2026 05:54:51 +0000 (0:00:07.262) 0:47:08.947 *******
2026-03-25 05:55:26.797083 | orchestrator | skipping: [testbed-node-5]
2026-03-25 05:55:26.797095 | orchestrator |
2026-03-25 05:55:26.797108 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-03-25 05:55:26.797119 | orchestrator | Wednesday 25 March 2026 05:54:52 +0000 (0:00:00.778) 0:47:09.726 *******
2026-03-25 05:55:26.797131 | orchestrator | skipping: [testbed-node-5]
2026-03-25 05:55:26.797142 | orchestrator |
2026-03-25 05:55:26.797154 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-03-25 05:55:26.797167 | orchestrator | Wednesday 25 March 2026 05:54:53 +0000 (0:00:00.812) 0:47:10.538 *******
2026-03-25 05:55:26.797178 | orchestrator | skipping: [testbed-node-5]
2026-03-25 05:55:26.797188 | orchestrator |
2026-03-25 05:55:26.797200 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-03-25 05:55:26.797211 | orchestrator | Wednesday 25 March 2026 05:54:54 +0000 (0:00:00.837) 0:47:11.376 *******
2026-03-25 05:55:26.797223 | orchestrator | skipping: [testbed-node-5]
2026-03-25 05:55:26.797234 | orchestrator |
2026-03-25 05:55:26.797245 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-03-25 05:55:26.797255 | orchestrator | Wednesday 25 March 2026 05:54:55 +0000 (0:00:00.857) 0:47:12.233 *******
2026-03-25 05:55:26.797266 | orchestrator | skipping: [testbed-node-5]
2026-03-25 05:55:26.797278 | orchestrator |
2026-03-25 05:55:26.797289 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-03-25 05:55:26.797323 | orchestrator | Wednesday 25 March 2026 05:54:56 +0000 (0:00:00.844) 0:47:13.078 *******
2026-03-25 05:55:26.797336 | orchestrator | ok: [testbed-node-5]
2026-03-25 05:55:26.797349 | orchestrator |
2026-03-25 05:55:26.797360 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-03-25 05:55:26.797373 | orchestrator | Wednesday 25 March 2026 05:54:56 +0000 (0:00:00.919) 0:47:13.998 *******
2026-03-25 05:55:26.797385 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-03-25 05:55:26.797398 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-03-25 05:55:26.797410 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-03-25 05:55:26.797421 | orchestrator | skipping: [testbed-node-5]
2026-03-25 05:55:26.797434 | orchestrator |
2026-03-25 05:55:26.797446 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-03-25 05:55:26.797457 | orchestrator | Wednesday 25 March 2026 05:54:58 +0000 (0:00:01.451) 0:47:15.449 *******
2026-03-25 05:55:26.797469 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-03-25 05:55:26.797481 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-03-25 05:55:26.797492 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-03-25 05:55:26.797505 | orchestrator | skipping: [testbed-node-5]
2026-03-25 05:55:26.797516 | orchestrator |
2026-03-25 05:55:26.797527 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-03-25 05:55:26.797539 | orchestrator | Wednesday 25 March 2026 05:54:59 +0000 (0:00:01.543) 0:47:16.993 *******
2026-03-25 05:55:26.797550 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-03-25 05:55:26.797563 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-03-25 05:55:26.797574 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-03-25 05:55:26.797586 | orchestrator | skipping: [testbed-node-5]
2026-03-25 05:55:26.797598 | orchestrator |
2026-03-25 05:55:26.797611 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-03-25 05:55:26.797623 | orchestrator | Wednesday 25 March 2026 05:55:01 +0000 (0:00:01.144) 0:47:18.137 *******
2026-03-25 05:55:26.797635 | orchestrator | ok: [testbed-node-5]
2026-03-25 05:55:26.797647 | orchestrator |
2026-03-25 05:55:26.797659 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-03-25 05:55:26.797671 | orchestrator | Wednesday 25 March 2026 05:55:01 +0000 (0:00:00.866) 0:47:19.004 *******
2026-03-25 05:55:26.797683 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-03-25 05:55:26.797695 | orchestrator |
2026-03-25 05:55:26.797706 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-03-25 05:55:26.797717 | orchestrator | Wednesday 25 March 2026 05:55:03 +0000 (0:00:01.065) 0:47:20.070 *******
2026-03-25 05:55:26.797729 | orchestrator | ok: [testbed-node-5]
2026-03-25 05:55:26.797740 | orchestrator |
2026-03-25 05:55:26.797751 | orchestrator | TASK [ceph-osd : Set_fact add_osd] ********************************************* 2026-03-25 05:55:26.797761 | orchestrator | Wednesday 25 March 2026 05:55:04 +0000 (0:00:01.420) 0:47:21.491 ******* 2026-03-25 05:55:26.797772 | orchestrator | ok: [testbed-node-5] 2026-03-25 05:55:26.797783 | orchestrator | 2026-03-25 05:55:26.797816 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] ********************************** 2026-03-25 05:55:26.797828 | orchestrator | Wednesday 25 March 2026 05:55:05 +0000 (0:00:00.789) 0:47:22.280 ******* 2026-03-25 05:55:26.797839 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-25 05:55:26.797851 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-25 05:55:26.797863 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-25 05:55:26.797874 | orchestrator | 2026-03-25 05:55:26.797885 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ****************************** 2026-03-25 05:55:26.797896 | orchestrator | Wednesday 25 March 2026 05:55:06 +0000 (0:00:01.678) 0:47:23.958 ******* 2026-03-25 05:55:26.797940 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-5 2026-03-25 05:55:26.797951 | orchestrator | 2026-03-25 05:55:26.797969 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] ********************************** 2026-03-25 05:55:26.797978 | orchestrator | Wednesday 25 March 2026 05:55:08 +0000 (0:00:01.208) 0:47:25.167 ******* 2026-03-25 05:55:26.797988 | orchestrator | skipping: [testbed-node-5] 2026-03-25 05:55:26.797998 | orchestrator | 2026-03-25 05:55:26.798008 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] ********************************* 2026-03-25 05:55:26.798074 | orchestrator | Wednesday 25 March 2026 05:55:09 +0000 (0:00:01.138) 
0:47:26.306 ******* 2026-03-25 05:55:26.798086 | orchestrator | skipping: [testbed-node-5] 2026-03-25 05:55:26.798097 | orchestrator | 2026-03-25 05:55:26.798107 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] ******************************* 2026-03-25 05:55:26.798117 | orchestrator | Wednesday 25 March 2026 05:55:10 +0000 (0:00:01.142) 0:47:27.448 ******* 2026-03-25 05:55:26.798128 | orchestrator | ok: [testbed-node-5] 2026-03-25 05:55:26.798138 | orchestrator | 2026-03-25 05:55:26.798180 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] ********************************** 2026-03-25 05:55:26.798191 | orchestrator | Wednesday 25 March 2026 05:55:11 +0000 (0:00:01.531) 0:47:28.980 ******* 2026-03-25 05:55:26.798202 | orchestrator | ok: [testbed-node-5] 2026-03-25 05:55:26.798211 | orchestrator | 2026-03-25 05:55:26.798222 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ******************************** 2026-03-25 05:55:26.798232 | orchestrator | Wednesday 25 March 2026 05:55:13 +0000 (0:00:01.174) 0:47:30.155 ******* 2026-03-25 05:55:26.798243 | orchestrator | ok: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-03-25 05:55:26.798253 | orchestrator | ok: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-03-25 05:55:26.798264 | orchestrator | ok: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-03-25 05:55:26.798274 | orchestrator | ok: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-03-25 05:55:26.798284 | orchestrator | ok: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-03-25 05:55:26.798294 | orchestrator | 2026-03-25 05:55:26.798304 | orchestrator | TASK [ceph-osd : Install dependencies] ***************************************** 2026-03-25 05:55:26.798315 | orchestrator | Wednesday 25 March 2026 05:55:15 +0000 (0:00:02.501) 0:47:32.656 ******* 2026-03-25 
05:55:26.798325 | orchestrator | skipping: [testbed-node-5] 2026-03-25 05:55:26.798336 | orchestrator | 2026-03-25 05:55:26.798346 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] ************************************* 2026-03-25 05:55:26.798357 | orchestrator | Wednesday 25 March 2026 05:55:16 +0000 (0:00:00.767) 0:47:33.424 ******* 2026-03-25 05:55:26.798368 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-5 2026-03-25 05:55:26.798378 | orchestrator | 2026-03-25 05:55:26.798388 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] ********************* 2026-03-25 05:55:26.798398 | orchestrator | Wednesday 25 March 2026 05:55:17 +0000 (0:00:01.213) 0:47:34.637 ******* 2026-03-25 05:55:26.798408 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/) 2026-03-25 05:55:26.798418 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/) 2026-03-25 05:55:26.798429 | orchestrator | 2026-03-25 05:55:26.798439 | orchestrator | TASK [ceph-osd : Get keys from monitors] *************************************** 2026-03-25 05:55:26.798449 | orchestrator | Wednesday 25 March 2026 05:55:19 +0000 (0:00:01.824) 0:47:36.461 ******* 2026-03-25 05:55:26.798460 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-25 05:55:26.798470 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-03-25 05:55:26.798480 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-25 05:55:26.798490 | orchestrator | 2026-03-25 05:55:26.798498 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] *********************************** 2026-03-25 05:55:26.798507 | orchestrator | Wednesday 25 March 2026 05:55:22 +0000 (0:00:03.153) 0:47:39.615 ******* 2026-03-25 05:55:26.798526 | orchestrator | ok: [testbed-node-5] => (item=None) 2026-03-25 05:55:26.798535 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-03-25 
05:55:26.798544 | orchestrator | ok: [testbed-node-5] 2026-03-25 05:55:26.798552 | orchestrator | 2026-03-25 05:55:26.798558 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************ 2026-03-25 05:55:26.798563 | orchestrator | Wednesday 25 March 2026 05:55:24 +0000 (0:00:01.615) 0:47:41.231 ******* 2026-03-25 05:55:26.798568 | orchestrator | skipping: [testbed-node-5] 2026-03-25 05:55:26.798574 | orchestrator | 2026-03-25 05:55:26.798579 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ****************************** 2026-03-25 05:55:26.798585 | orchestrator | Wednesday 25 March 2026 05:55:25 +0000 (0:00:00.921) 0:47:42.153 ******* 2026-03-25 05:55:26.798590 | orchestrator | skipping: [testbed-node-5] 2026-03-25 05:55:26.798595 | orchestrator | 2026-03-25 05:55:26.798601 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2026-03-25 05:55:26.798606 | orchestrator | Wednesday 25 March 2026 05:55:25 +0000 (0:00:00.807) 0:47:42.961 ******* 2026-03-25 05:55:26.798612 | orchestrator | skipping: [testbed-node-5] 2026-03-25 05:55:26.798617 | orchestrator | 2026-03-25 05:55:26.798632 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2026-03-25 05:57:47.700770 | orchestrator | Wednesday 25 March 2026 05:55:26 +0000 (0:00:00.839) 0:47:43.800 ******* 2026-03-25 05:57:47.700963 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-5 2026-03-25 05:57:47.700983 | orchestrator | 2026-03-25 05:57:47.700996 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2026-03-25 05:57:47.701007 | orchestrator | Wednesday 25 March 2026 05:55:28 +0000 (0:00:01.264) 0:47:45.065 ******* 2026-03-25 05:57:47.701018 | orchestrator | ok: [testbed-node-5] 2026-03-25 05:57:47.701030 | orchestrator | 2026-03-25 05:57:47.701041 | orchestrator | TASK [ceph-osd : Collect osd 
ids] ********************************************** 2026-03-25 05:57:47.701052 | orchestrator | Wednesday 25 March 2026 05:55:30 +0000 (0:00:02.523) 0:47:47.589 ******* 2026-03-25 05:57:47.701063 | orchestrator | ok: [testbed-node-5] 2026-03-25 05:57:47.701074 | orchestrator | 2026-03-25 05:57:47.701084 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************ 2026-03-25 05:57:47.701112 | orchestrator | Wednesday 25 March 2026 05:55:33 +0000 (0:00:03.381) 0:47:50.970 ******* 2026-03-25 05:57:47.701124 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-5 2026-03-25 05:57:47.701134 | orchestrator | 2026-03-25 05:57:47.701145 | orchestrator | TASK [ceph-osd : Generate systemd unit file] *********************************** 2026-03-25 05:57:47.701156 | orchestrator | Wednesday 25 March 2026 05:55:35 +0000 (0:00:01.142) 0:47:52.113 ******* 2026-03-25 05:57:47.701167 | orchestrator | ok: [testbed-node-5] 2026-03-25 05:57:47.701177 | orchestrator | 2026-03-25 05:57:47.701188 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2026-03-25 05:57:47.701199 | orchestrator | Wednesday 25 March 2026 05:55:37 +0000 (0:00:02.069) 0:47:54.183 ******* 2026-03-25 05:57:47.701210 | orchestrator | ok: [testbed-node-5] 2026-03-25 05:57:47.701220 | orchestrator | 2026-03-25 05:57:47.701231 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] *************************************** 2026-03-25 05:57:47.701242 | orchestrator | Wednesday 25 March 2026 05:55:39 +0000 (0:00:01.978) 0:47:56.162 ******* 2026-03-25 05:57:47.701252 | orchestrator | ok: [testbed-node-5] 2026-03-25 05:57:47.701263 | orchestrator | 2026-03-25 05:57:47.701274 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2026-03-25 05:57:47.701284 | orchestrator | Wednesday 25 March 2026 05:55:41 +0000 (0:00:02.191) 0:47:58.353 ******* 2026-03-25 
05:57:47.701295 | orchestrator | skipping: [testbed-node-5] 2026-03-25 05:57:47.701307 | orchestrator | 2026-03-25 05:57:47.701318 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] *********************** 2026-03-25 05:57:47.701329 | orchestrator | Wednesday 25 March 2026 05:55:42 +0000 (0:00:01.158) 0:47:59.511 ******* 2026-03-25 05:57:47.701362 | orchestrator | skipping: [testbed-node-5] 2026-03-25 05:57:47.701373 | orchestrator | 2026-03-25 05:57:47.701384 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2026-03-25 05:57:47.701394 | orchestrator | Wednesday 25 March 2026 05:55:43 +0000 (0:00:01.132) 0:48:00.644 ******* 2026-03-25 05:57:47.701405 | orchestrator | ok: [testbed-node-5] => (item=2) 2026-03-25 05:57:47.701415 | orchestrator | ok: [testbed-node-5] => (item=3) 2026-03-25 05:57:47.701427 | orchestrator | 2026-03-25 05:57:47.701437 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2026-03-25 05:57:47.701448 | orchestrator | Wednesday 25 March 2026 05:55:45 +0000 (0:00:01.903) 0:48:02.548 ******* 2026-03-25 05:57:47.701459 | orchestrator | ok: [testbed-node-5] => (item=2) 2026-03-25 05:57:47.701470 | orchestrator | ok: [testbed-node-5] => (item=3) 2026-03-25 05:57:47.701480 | orchestrator | 2026-03-25 05:57:47.701491 | orchestrator | TASK [ceph-osd : Systemd start osd] ******************************************** 2026-03-25 05:57:47.701502 | orchestrator | Wednesday 25 March 2026 05:55:48 +0000 (0:00:02.931) 0:48:05.479 ******* 2026-03-25 05:57:47.701512 | orchestrator | changed: [testbed-node-5] => (item=2) 2026-03-25 05:57:47.701523 | orchestrator | changed: [testbed-node-5] => (item=3) 2026-03-25 05:57:47.701534 | orchestrator | 2026-03-25 05:57:47.701545 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2026-03-25 05:57:47.701555 | orchestrator | Wednesday 25 March 2026 05:55:52 +0000 (0:00:04.229) 
0:48:09.709 ******* 2026-03-25 05:57:47.701566 | orchestrator | skipping: [testbed-node-5] 2026-03-25 05:57:47.701577 | orchestrator | 2026-03-25 05:57:47.701587 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************ 2026-03-25 05:57:47.701598 | orchestrator | Wednesday 25 March 2026 05:55:54 +0000 (0:00:01.424) 0:48:11.134 ******* 2026-03-25 05:57:47.701609 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left). 2026-03-25 05:57:47.701621 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-03-25 05:57:47.701632 | orchestrator | 2026-03-25 05:57:47.701647 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2026-03-25 05:57:47.701658 | orchestrator | Wednesday 25 March 2026 05:56:06 +0000 (0:00:12.836) 0:48:23.970 ******* 2026-03-25 05:57:47.701669 | orchestrator | skipping: [testbed-node-5] 2026-03-25 05:57:47.701679 | orchestrator | 2026-03-25 05:57:47.701690 | orchestrator | TASK [Scan ceph-disk osds with ceph-volume if deploying nautilus] ************** 2026-03-25 05:57:47.701701 | orchestrator | Wednesday 25 March 2026 05:56:07 +0000 (0:00:00.891) 0:48:24.862 ******* 2026-03-25 05:57:47.701712 | orchestrator | skipping: [testbed-node-5] 2026-03-25 05:57:47.701722 | orchestrator | 2026-03-25 05:57:47.701733 | orchestrator | TASK [Activate scanned ceph-disk osds and migrate to ceph-volume if deploying nautilus] *** 2026-03-25 05:57:47.701744 | orchestrator | Wednesday 25 March 2026 05:56:08 +0000 (0:00:00.780) 0:48:25.642 ******* 2026-03-25 05:57:47.701754 | orchestrator | skipping: [testbed-node-5] 2026-03-25 05:57:47.701765 | orchestrator | 2026-03-25 05:57:47.701776 | orchestrator | TASK [Waiting for clean pgs...] 
************************************************ 2026-03-25 05:57:47.701787 | orchestrator | Wednesday 25 March 2026 05:56:09 +0000 (0:00:00.778) 0:48:26.420 ******* 2026-03-25 05:57:47.701797 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-03-25 05:57:47.701808 | orchestrator | 2026-03-25 05:57:47.701819 | orchestrator | PLAY [Complete osd upgrade] **************************************************** 2026-03-25 05:57:47.701829 | orchestrator | 2026-03-25 05:57:47.701857 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-03-25 05:57:47.701869 | orchestrator | Wednesday 25 March 2026 05:56:12 +0000 (0:00:02.617) 0:48:29.037 ******* 2026-03-25 05:57:47.701897 | orchestrator | ok: [testbed-node-3] 2026-03-25 05:57:47.701909 | orchestrator | ok: [testbed-node-4] 2026-03-25 05:57:47.701919 | orchestrator | ok: [testbed-node-5] 2026-03-25 05:57:47.701930 | orchestrator | 2026-03-25 05:57:47.701941 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-03-25 05:57:47.701959 | orchestrator | Wednesday 25 March 2026 05:56:13 +0000 (0:00:01.699) 0:48:30.737 ******* 2026-03-25 05:57:47.701970 | orchestrator | ok: [testbed-node-3] 2026-03-25 05:57:47.701981 | orchestrator | ok: [testbed-node-4] 2026-03-25 05:57:47.701992 | orchestrator | ok: [testbed-node-5] 2026-03-25 05:57:47.702002 | orchestrator | 2026-03-25 05:57:47.702013 | orchestrator | TASK [Re-enable pg autoscale on pools] ***************************************** 2026-03-25 05:57:47.702090 | orchestrator | Wednesday 25 March 2026 05:56:15 +0000 (0:00:01.960) 0:48:32.698 ******* 2026-03-25 05:57:47.702108 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': '.mgr', 'mode': 'on'}) 2026-03-25 05:57:47.702119 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'cephfs_data', 'mode': 'on'}) 2026-03-25 
05:57:47.702130 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'cephfs_metadata', 'mode': 'on'}) 2026-03-25 05:57:47.702141 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.buckets.data', 'mode': 'on'}) 2026-03-25 05:57:47.702154 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.buckets.index', 'mode': 'on'}) 2026-03-25 05:57:47.702164 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.control', 'mode': 'on'}) 2026-03-25 05:57:47.702175 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.log', 'mode': 'on'}) 2026-03-25 05:57:47.702186 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.meta', 'mode': 'on'}) 2026-03-25 05:57:47.702196 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': '.rgw.root', 'mode': 'on'}) 2026-03-25 05:57:47.702207 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'backups', 'mode': 'off'})  2026-03-25 05:57:47.702218 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'volumes', 'mode': 'off'})  2026-03-25 05:57:47.702229 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'images', 'mode': 'off'})  2026-03-25 05:57:47.702240 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'metrics', 'mode': 'off'})  2026-03-25 05:57:47.702250 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vms', 'mode': 'off'})  2026-03-25 05:57:47.702261 | orchestrator | 2026-03-25 05:57:47.702272 | orchestrator | TASK [Unset osd flags] ********************************************************* 2026-03-25 05:57:47.702282 | orchestrator | Wednesday 25 March 2026 05:57:30 +0000 (0:01:15.223) 0:49:47.922 ******* 2026-03-25 05:57:47.702293 | orchestrator 
| changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=noout) 2026-03-25 05:57:47.702304 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=nodeep-scrub) 2026-03-25 05:57:47.702314 | orchestrator | 2026-03-25 05:57:47.702325 | orchestrator | TASK [Re-enable balancer] ****************************************************** 2026-03-25 05:57:47.702336 | orchestrator | Wednesday 25 March 2026 05:57:36 +0000 (0:00:05.944) 0:49:53.867 ******* 2026-03-25 05:57:47.702346 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-25 05:57:47.702357 | orchestrator | 2026-03-25 05:57:47.702368 | orchestrator | PLAY [Upgrade ceph mdss cluster, deactivate all rank > 0] ********************** 2026-03-25 05:57:47.702378 | orchestrator | 2026-03-25 05:57:47.702389 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-03-25 05:57:47.702400 | orchestrator | Wednesday 25 March 2026 05:57:40 +0000 (0:00:03.197) 0:49:57.064 ******* 2026-03-25 05:57:47.702410 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0 2026-03-25 05:57:47.702421 | orchestrator | 2026-03-25 05:57:47.702432 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-03-25 05:57:47.702443 | orchestrator | Wednesday 25 March 2026 05:57:41 +0000 (0:00:01.130) 0:49:58.195 ******* 2026-03-25 05:57:47.702453 | orchestrator | ok: [testbed-node-0] 2026-03-25 05:57:47.702473 | orchestrator | 2026-03-25 05:57:47.702483 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-03-25 05:57:47.702494 | orchestrator | Wednesday 25 March 2026 05:57:42 +0000 (0:00:01.456) 0:49:59.652 ******* 2026-03-25 05:57:47.702505 | orchestrator | ok: [testbed-node-0] 2026-03-25 05:57:47.702516 | orchestrator | 2026-03-25 05:57:47.702526 | orchestrator | TASK [ceph-facts : Check if podman binary is 
present] ************************** 2026-03-25 05:57:47.702537 | orchestrator | Wednesday 25 March 2026 05:57:43 +0000 (0:00:01.206) 0:50:00.859 ******* 2026-03-25 05:57:47.702549 | orchestrator | ok: [testbed-node-0] 2026-03-25 05:57:47.702568 | orchestrator | 2026-03-25 05:57:47.702588 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-03-25 05:57:47.702607 | orchestrator | Wednesday 25 March 2026 05:57:45 +0000 (0:00:01.577) 0:50:02.436 ******* 2026-03-25 05:57:47.702625 | orchestrator | ok: [testbed-node-0] 2026-03-25 05:57:47.702645 | orchestrator | 2026-03-25 05:57:47.702665 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-03-25 05:57:47.702684 | orchestrator | Wednesday 25 March 2026 05:57:46 +0000 (0:00:01.129) 0:50:03.566 ******* 2026-03-25 05:57:47.702703 | orchestrator | ok: [testbed-node-0] 2026-03-25 05:57:47.702725 | orchestrator | 2026-03-25 05:57:47.702746 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-03-25 05:57:47.702779 | orchestrator | Wednesday 25 March 2026 05:57:47 +0000 (0:00:01.139) 0:50:04.706 ******* 2026-03-25 05:58:13.168709 | orchestrator | ok: [testbed-node-0] 2026-03-25 05:58:13.168820 | orchestrator | 2026-03-25 05:58:13.168835 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-03-25 05:58:13.168847 | orchestrator | Wednesday 25 March 2026 05:57:48 +0000 (0:00:01.160) 0:50:05.866 ******* 2026-03-25 05:58:13.168858 | orchestrator | skipping: [testbed-node-0] 2026-03-25 05:58:13.168923 | orchestrator | 2026-03-25 05:58:13.168935 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-03-25 05:58:13.168945 | orchestrator | Wednesday 25 March 2026 05:57:50 +0000 (0:00:01.207) 0:50:07.073 ******* 2026-03-25 05:58:13.168955 | orchestrator | ok: [testbed-node-0] 2026-03-25 
05:58:13.168965 | orchestrator | 2026-03-25 05:58:13.168974 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-03-25 05:58:13.168984 | orchestrator | Wednesday 25 March 2026 05:57:51 +0000 (0:00:01.166) 0:50:08.240 ******* 2026-03-25 05:58:13.169011 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-25 05:58:13.169021 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-25 05:58:13.169031 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-25 05:58:13.169040 | orchestrator | 2026-03-25 05:58:13.169050 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-03-25 05:58:13.169059 | orchestrator | Wednesday 25 March 2026 05:57:52 +0000 (0:00:01.709) 0:50:09.949 ******* 2026-03-25 05:58:13.169069 | orchestrator | ok: [testbed-node-0] 2026-03-25 05:58:13.169078 | orchestrator | 2026-03-25 05:58:13.169088 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-03-25 05:58:13.169097 | orchestrator | Wednesday 25 March 2026 05:57:54 +0000 (0:00:01.308) 0:50:11.257 ******* 2026-03-25 05:58:13.169107 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-25 05:58:13.169117 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-25 05:58:13.169126 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-25 05:58:13.169136 | orchestrator | 2026-03-25 05:58:13.169145 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-03-25 05:58:13.169158 | orchestrator | Wednesday 25 March 2026 05:57:57 +0000 (0:00:03.234) 0:50:14.492 ******* 2026-03-25 05:58:13.169175 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-25 05:58:13.169193 | 
orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-25 05:58:13.169235 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-25 05:58:13.169254 | orchestrator | skipping: [testbed-node-0] 2026-03-25 05:58:13.169271 | orchestrator | 2026-03-25 05:58:13.169289 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-03-25 05:58:13.169306 | orchestrator | Wednesday 25 March 2026 05:57:58 +0000 (0:00:01.465) 0:50:15.958 ******* 2026-03-25 05:58:13.169324 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-03-25 05:58:13.169345 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-03-25 05:58:13.169365 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-03-25 05:58:13.169383 | orchestrator | skipping: [testbed-node-0] 2026-03-25 05:58:13.169401 | orchestrator | 2026-03-25 05:58:13.169419 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-03-25 05:58:13.169438 | orchestrator | Wednesday 25 March 2026 05:58:00 +0000 (0:00:02.018) 0:50:17.976 ******* 2026-03-25 05:58:13.169460 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-25 05:58:13.169481 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-25 05:58:13.169513 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-25 05:58:13.169525 | orchestrator | skipping: [testbed-node-0] 2026-03-25 05:58:13.169536 | orchestrator | 2026-03-25 05:58:13.169548 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-03-25 05:58:13.169559 | orchestrator | Wednesday 25 March 2026 05:58:02 +0000 (0:00:01.202) 0:50:19.180 ******* 2026-03-25 05:58:13.169579 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': 'f2f4f0f2e000', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-25 05:57:54.779330', 'end': '2026-03-25 05:57:54.817433', 'delta': '0:00:00.038103', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 
'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['f2f4f0f2e000'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-03-25 05:58:13.169593 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '04618a84c691', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-25 05:57:55.329070', 'end': '2026-03-25 05:57:55.373676', 'delta': '0:00:00.044606', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['04618a84c691'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-03-25 05:58:13.169613 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': 'da72f46e99c2', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-25 05:57:56.231850', 'end': '2026-03-25 05:57:56.280860', 'delta': '0:00:00.049010', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['da72f46e99c2'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-03-25 05:58:13.169623 | orchestrator | 2026-03-25 05:58:13.169633 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 
2026-03-25 05:58:13.169643 | orchestrator | Wednesday 25 March 2026 05:58:03 +0000 (0:00:01.269) 0:50:20.449 ******* 2026-03-25 05:58:13.169652 | orchestrator | ok: [testbed-node-0] 2026-03-25 05:58:13.169662 | orchestrator | 2026-03-25 05:58:13.169672 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-03-25 05:58:13.169681 | orchestrator | Wednesday 25 March 2026 05:58:05 +0000 (0:00:01.710) 0:50:22.160 ******* 2026-03-25 05:58:13.169691 | orchestrator | skipping: [testbed-node-0] 2026-03-25 05:58:13.169701 | orchestrator | 2026-03-25 05:58:13.169710 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-03-25 05:58:13.169720 | orchestrator | Wednesday 25 March 2026 05:58:06 +0000 (0:00:01.267) 0:50:23.427 ******* 2026-03-25 05:58:13.169730 | orchestrator | ok: [testbed-node-0] 2026-03-25 05:58:13.169739 | orchestrator | 2026-03-25 05:58:13.169749 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-03-25 05:58:13.169758 | orchestrator | Wednesday 25 March 2026 05:58:07 +0000 (0:00:01.178) 0:50:24.605 ******* 2026-03-25 05:58:13.169768 | orchestrator | ok: [testbed-node-0] 2026-03-25 05:58:13.169778 | orchestrator | 2026-03-25 05:58:13.169787 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-25 05:58:13.169797 | orchestrator | Wednesday 25 March 2026 05:58:09 +0000 (0:00:02.017) 0:50:26.623 ******* 2026-03-25 05:58:13.169806 | orchestrator | ok: [testbed-node-0] 2026-03-25 05:58:13.169816 | orchestrator | 2026-03-25 05:58:13.169825 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-03-25 05:58:13.169835 | orchestrator | Wednesday 25 March 2026 05:58:10 +0000 (0:00:01.138) 0:50:27.762 ******* 2026-03-25 05:58:13.169844 | orchestrator | skipping: [testbed-node-0] 2026-03-25 05:58:13.169853 | orchestrator | 
2026-03-25 05:58:13.169863 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-03-25 05:58:13.169895 | orchestrator | Wednesday 25 March 2026 05:58:11 +0000 (0:00:01.134) 0:50:28.897 ******* 2026-03-25 05:58:13.169905 | orchestrator | skipping: [testbed-node-0] 2026-03-25 05:58:13.169915 | orchestrator | 2026-03-25 05:58:13.169924 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-25 05:58:13.169940 | orchestrator | Wednesday 25 March 2026 05:58:13 +0000 (0:00:01.275) 0:50:30.172 ******* 2026-03-25 05:58:24.017500 | orchestrator | skipping: [testbed-node-0] 2026-03-25 05:58:24.017605 | orchestrator | 2026-03-25 05:58:24.017619 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-03-25 05:58:24.017650 | orchestrator | Wednesday 25 March 2026 05:58:14 +0000 (0:00:01.136) 0:50:31.309 ******* 2026-03-25 05:58:24.017660 | orchestrator | skipping: [testbed-node-0] 2026-03-25 05:58:24.017669 | orchestrator | 2026-03-25 05:58:24.017678 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-03-25 05:58:24.017686 | orchestrator | Wednesday 25 March 2026 05:58:15 +0000 (0:00:01.200) 0:50:32.509 ******* 2026-03-25 05:58:24.017695 | orchestrator | skipping: [testbed-node-0] 2026-03-25 05:58:24.017704 | orchestrator | 2026-03-25 05:58:24.017713 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-03-25 05:58:24.017734 | orchestrator | Wednesday 25 March 2026 05:58:16 +0000 (0:00:01.149) 0:50:33.658 ******* 2026-03-25 05:58:24.017743 | orchestrator | skipping: [testbed-node-0] 2026-03-25 05:58:24.017753 | orchestrator | 2026-03-25 05:58:24.017762 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-03-25 05:58:24.017770 | orchestrator | Wednesday 25 March 2026 05:58:17 +0000 
(0:00:01.200) 0:50:34.859 ******* 2026-03-25 05:58:24.017779 | orchestrator | skipping: [testbed-node-0] 2026-03-25 05:58:24.017788 | orchestrator | 2026-03-25 05:58:24.017796 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-03-25 05:58:24.017805 | orchestrator | Wednesday 25 March 2026 05:58:18 +0000 (0:00:01.123) 0:50:35.983 ******* 2026-03-25 05:58:24.017814 | orchestrator | skipping: [testbed-node-0] 2026-03-25 05:58:24.017822 | orchestrator | 2026-03-25 05:58:24.017831 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-03-25 05:58:24.017840 | orchestrator | Wednesday 25 March 2026 05:58:20 +0000 (0:00:01.164) 0:50:37.148 ******* 2026-03-25 05:58:24.017849 | orchestrator | skipping: [testbed-node-0] 2026-03-25 05:58:24.017857 | orchestrator | 2026-03-25 05:58:24.017921 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-03-25 05:58:24.017932 | orchestrator | Wednesday 25 March 2026 05:58:21 +0000 (0:00:01.199) 0:50:38.347 ******* 2026-03-25 05:58:24.017943 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-25 05:58:24.017955 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 
'host': '', 'holders': []}})  2026-03-25 05:58:24.017965 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-25 05:58:24.017976 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-25-01-43-00-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-03-25 05:58:24.017988 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-25 05:58:24.018072 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 
'holders': []}})  2026-03-25 05:58:24.018087 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-25 05:58:24.018109 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_225bc811-b117-4ab1-9890-e393d3b780be', 'scsi-SQEMU_QEMU_HARDDISK_225bc811-b117-4ab1-9890-e393d3b780be'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '225bc811', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_225bc811-b117-4ab1-9890-e393d3b780be-part16', 'scsi-SQEMU_QEMU_HARDDISK_225bc811-b117-4ab1-9890-e393d3b780be-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_225bc811-b117-4ab1-9890-e393d3b780be-part14', 'scsi-SQEMU_QEMU_HARDDISK_225bc811-b117-4ab1-9890-e393d3b780be-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_225bc811-b117-4ab1-9890-e393d3b780be-part15', 'scsi-SQEMU_QEMU_HARDDISK_225bc811-b117-4ab1-9890-e393d3b780be-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 
'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_225bc811-b117-4ab1-9890-e393d3b780be-part1', 'scsi-SQEMU_QEMU_HARDDISK_225bc811-b117-4ab1-9890-e393d3b780be-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-03-25 05:58:24.018124 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-25 05:58:24.018134 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-25 05:58:24.018156 | orchestrator | skipping: [testbed-node-0] 2026-03-25 05:58:24.018166 | orchestrator | 2026-03-25 05:58:24.018176 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-03-25 05:58:24.018186 | orchestrator | Wednesday 25 March 2026 05:58:22 +0000 (0:00:01.341) 0:50:39.689 ******* 2026-03-25 05:58:24.018204 | orchestrator | skipping: [testbed-node-0] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 05:58:28.145118 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 05:58:28.145224 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 05:58:28.145241 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': 
True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-25-01-43-00-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 05:58:28.145253 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 05:58:28.145264 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  
2026-03-25 05:58:28.145300 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 05:58:28.145343 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_225bc811-b117-4ab1-9890-e393d3b780be', 'scsi-SQEMU_QEMU_HARDDISK_225bc811-b117-4ab1-9890-e393d3b780be'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '225bc811', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_225bc811-b117-4ab1-9890-e393d3b780be-part16', 'scsi-SQEMU_QEMU_HARDDISK_225bc811-b117-4ab1-9890-e393d3b780be-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_225bc811-b117-4ab1-9890-e393d3b780be-part14', 'scsi-SQEMU_QEMU_HARDDISK_225bc811-b117-4ab1-9890-e393d3b780be-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 
'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_225bc811-b117-4ab1-9890-e393d3b780be-part15', 'scsi-SQEMU_QEMU_HARDDISK_225bc811-b117-4ab1-9890-e393d3b780be-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_225bc811-b117-4ab1-9890-e393d3b780be-part1', 'scsi-SQEMU_QEMU_HARDDISK_225bc811-b117-4ab1-9890-e393d3b780be-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 05:58:28.145359 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 05:58:28.145379 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': 
[], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 05:58:28.145391 | orchestrator | skipping: [testbed-node-0] 2026-03-25 05:58:28.145405 | orchestrator | 2026-03-25 05:58:28.145417 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-03-25 05:58:28.145429 | orchestrator | Wednesday 25 March 2026 05:58:24 +0000 (0:00:01.339) 0:50:41.028 ******* 2026-03-25 05:58:28.145440 | orchestrator | ok: [testbed-node-0] 2026-03-25 05:58:28.145451 | orchestrator | 2026-03-25 05:58:28.145462 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-03-25 05:58:28.145472 | orchestrator | Wednesday 25 March 2026 05:58:25 +0000 (0:00:01.530) 0:50:42.559 ******* 2026-03-25 05:58:28.145483 | orchestrator | ok: [testbed-node-0] 2026-03-25 05:58:28.145493 | orchestrator | 2026-03-25 05:58:28.145504 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-25 05:58:28.145514 | orchestrator | Wednesday 25 March 2026 05:58:26 +0000 (0:00:01.130) 0:50:43.689 ******* 2026-03-25 05:58:28.145525 | orchestrator | ok: [testbed-node-0] 2026-03-25 05:58:28.145536 | orchestrator | 2026-03-25 05:58:28.145547 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-25 05:58:28.145564 | orchestrator | Wednesday 25 March 2026 05:58:28 +0000 (0:00:01.461) 0:50:45.151 ******* 2026-03-25 05:59:23.981467 | orchestrator | skipping: [testbed-node-0] 2026-03-25 05:59:23.981577 | orchestrator | 2026-03-25 05:59:23.981593 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-25 05:59:23.981605 
| orchestrator | Wednesday 25 March 2026 05:58:29 +0000 (0:00:01.158) 0:50:46.310 ******* 2026-03-25 05:59:23.981616 | orchestrator | skipping: [testbed-node-0] 2026-03-25 05:59:23.981627 | orchestrator | 2026-03-25 05:59:23.981638 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-25 05:59:23.981649 | orchestrator | Wednesday 25 March 2026 05:58:30 +0000 (0:00:01.251) 0:50:47.561 ******* 2026-03-25 05:59:23.981673 | orchestrator | skipping: [testbed-node-0] 2026-03-25 05:59:23.981684 | orchestrator | 2026-03-25 05:59:23.981695 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-03-25 05:59:23.981706 | orchestrator | Wednesday 25 March 2026 05:58:31 +0000 (0:00:01.169) 0:50:48.730 ******* 2026-03-25 05:59:23.981717 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-25 05:59:23.981728 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-03-25 05:59:23.981738 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-03-25 05:59:23.981749 | orchestrator | 2026-03-25 05:59:23.981759 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-03-25 05:59:23.981770 | orchestrator | Wednesday 25 March 2026 05:58:33 +0000 (0:00:02.126) 0:50:50.857 ******* 2026-03-25 05:59:23.981796 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-25 05:59:23.981808 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-25 05:59:23.981819 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-25 05:59:23.981829 | orchestrator | skipping: [testbed-node-0] 2026-03-25 05:59:23.981840 | orchestrator | 2026-03-25 05:59:23.981850 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-03-25 05:59:23.981909 | orchestrator | Wednesday 25 March 2026 05:58:35 +0000 (0:00:01.213) 0:50:52.070 
******* 2026-03-25 05:59:23.981920 | orchestrator | skipping: [testbed-node-0] 2026-03-25 05:59:23.981953 | orchestrator | 2026-03-25 05:59:23.981964 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-03-25 05:59:23.981975 | orchestrator | Wednesday 25 March 2026 05:58:36 +0000 (0:00:01.196) 0:50:53.266 ******* 2026-03-25 05:59:23.981986 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-25 05:59:23.981997 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-25 05:59:23.982009 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-25 05:59:23.982080 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-03-25 05:59:23.982093 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-25 05:59:23.982105 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-25 05:59:23.982117 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-25 05:59:23.982129 | orchestrator | 2026-03-25 05:59:23.982142 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-03-25 05:59:23.982154 | orchestrator | Wednesday 25 March 2026 05:58:38 +0000 (0:00:02.278) 0:50:55.545 ******* 2026-03-25 05:59:23.982167 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-25 05:59:23.982179 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-25 05:59:23.982191 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-25 05:59:23.982204 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-03-25 05:59:23.982216 | orchestrator | ok: 
[testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-25 05:59:23.982229 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-25 05:59:23.982241 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-25 05:59:23.982253 | orchestrator | 2026-03-25 05:59:23.982266 | orchestrator | TASK [Set max_mds 1 on ceph fs] ************************************************ 2026-03-25 05:59:23.982278 | orchestrator | Wednesday 25 March 2026 05:58:41 +0000 (0:00:03.112) 0:50:58.658 ******* 2026-03-25 05:59:23.982290 | orchestrator | ok: [testbed-node-0] 2026-03-25 05:59:23.982303 | orchestrator | 2026-03-25 05:59:23.982315 | orchestrator | TASK [Wait until only rank 0 is up] ******************************************** 2026-03-25 05:59:23.982326 | orchestrator | Wednesday 25 March 2026 05:58:44 +0000 (0:00:03.339) 0:51:01.998 ******* 2026-03-25 05:59:23.982338 | orchestrator | ok: [testbed-node-0] 2026-03-25 05:59:23.982350 | orchestrator | 2026-03-25 05:59:23.982362 | orchestrator | TASK [Get name of remaining active mds] **************************************** 2026-03-25 05:59:23.982374 | orchestrator | Wednesday 25 March 2026 05:58:47 +0000 (0:00:02.962) 0:51:04.960 ******* 2026-03-25 05:59:23.982386 | orchestrator | ok: [testbed-node-0] 2026-03-25 05:59:23.982397 | orchestrator | 2026-03-25 05:59:23.982408 | orchestrator | TASK [Set_fact mds_active_name] ************************************************ 2026-03-25 05:59:23.982418 | orchestrator | Wednesday 25 March 2026 05:58:50 +0000 (0:00:02.223) 0:51:07.184 ******* 2026-03-25 05:59:23.982458 | orchestrator | ok: [testbed-node-0] => (item={'key': 'gid_4732', 'value': {'gid': 4732, 'name': 'testbed-node-4', 'rank': 0, 'incarnation': 7, 'state': 'up:active', 'state_seq': 1228, 'addr': '192.168.16.14:6817/3241693327', 'addrs': {'addrvec': [{'type': 'v2', 'addr': '192.168.16.14:6816', 'nonce': 
3241693327}, {'type': 'v1', 'addr': '192.168.16.14:6817', 'nonce': 3241693327}]}, 'join_fscid': -1, 'export_targets': [], 'features': 4540138322906710015, 'flags': 0, 'compat': {'compat': {}, 'ro_compat': {}, 'incompat': {'feature_1': 'base v0.20', 'feature_2': 'client writeable ranges', 'feature_3': 'default file layouts on dirs', 'feature_4': 'dir inode in separate object', 'feature_5': 'mds uses versioned encoding', 'feature_6': 'dirfrag is stored in omap', 'feature_7': 'mds uses inline data', 'feature_8': 'no anchor table', 'feature_9': 'file layout v2', 'feature_10': 'snaprealm v2'}}}}) 2026-03-25 05:59:23.982484 | orchestrator | 2026-03-25 05:59:23.982495 | orchestrator | TASK [Set_fact mds_active_host] ************************************************ 2026-03-25 05:59:23.982506 | orchestrator | Wednesday 25 March 2026 05:58:51 +0000 (0:00:01.229) 0:51:08.414 ******* 2026-03-25 05:59:23.982517 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2026-03-25 05:59:23.982528 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-4) 2026-03-25 05:59:23.982539 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2026-03-25 05:59:23.982550 | orchestrator | 2026-03-25 05:59:23.982561 | orchestrator | TASK [Create standby_mdss group] *********************************************** 2026-03-25 05:59:23.982571 | orchestrator | Wednesday 25 March 2026 05:58:52 +0000 (0:00:01.585) 0:51:09.999 ******* 2026-03-25 05:59:23.982582 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-3) 2026-03-25 05:59:23.982593 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-5) 2026-03-25 05:59:23.982604 | orchestrator | 2026-03-25 05:59:23.982614 | orchestrator | TASK [Stop standby ceph mds] *************************************************** 2026-03-25 05:59:23.982625 | orchestrator | Wednesday 25 March 2026 05:58:54 +0000 (0:00:01.526) 0:51:11.525 ******* 2026-03-25 05:59:23.982636 | orchestrator | changed: [testbed-node-0 
-> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-03-25 05:59:23.982647 | orchestrator | changed: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-25 05:59:23.982658 | orchestrator | 2026-03-25 05:59:23.982669 | orchestrator | TASK [Mask systemd units for standby ceph mds] ********************************* 2026-03-25 05:59:23.982680 | orchestrator | Wednesday 25 March 2026 05:59:05 +0000 (0:00:10.695) 0:51:22.221 ******* 2026-03-25 05:59:23.982691 | orchestrator | changed: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-03-25 05:59:23.982702 | orchestrator | changed: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-25 05:59:23.982713 | orchestrator | 2026-03-25 05:59:23.982723 | orchestrator | TASK [Wait until all standbys mds are stopped] ********************************* 2026-03-25 05:59:23.982734 | orchestrator | Wednesday 25 March 2026 05:59:09 +0000 (0:00:03.858) 0:51:26.079 ******* 2026-03-25 05:59:23.982745 | orchestrator | ok: [testbed-node-0] 2026-03-25 05:59:23.982756 | orchestrator | 2026-03-25 05:59:23.982767 | orchestrator | TASK [Create active_mdss group] ************************************************ 2026-03-25 05:59:23.982777 | orchestrator | Wednesday 25 March 2026 05:59:11 +0000 (0:00:02.192) 0:51:28.272 ******* 2026-03-25 05:59:23.982788 | orchestrator | changed: [testbed-node-0] 2026-03-25 05:59:23.982799 | orchestrator | 2026-03-25 05:59:23.982810 | orchestrator | PLAY [Upgrade active mds] ****************************************************** 2026-03-25 05:59:23.982820 | orchestrator | 2026-03-25 05:59:23.982831 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-03-25 05:59:23.982842 | orchestrator | Wednesday 25 March 2026 05:59:12 +0000 (0:00:01.613) 0:51:29.885 ******* 2026-03-25 05:59:23.982852 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for 
testbed-node-4 2026-03-25 05:59:23.982883 | orchestrator | 2026-03-25 05:59:23.982894 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-03-25 05:59:23.982905 | orchestrator | Wednesday 25 March 2026 05:59:14 +0000 (0:00:01.323) 0:51:31.208 ******* 2026-03-25 05:59:23.982915 | orchestrator | ok: [testbed-node-4] 2026-03-25 05:59:23.982926 | orchestrator | 2026-03-25 05:59:23.982937 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-03-25 05:59:23.982948 | orchestrator | Wednesday 25 March 2026 05:59:15 +0000 (0:00:01.481) 0:51:32.690 ******* 2026-03-25 05:59:23.982958 | orchestrator | ok: [testbed-node-4] 2026-03-25 05:59:23.982969 | orchestrator | 2026-03-25 05:59:23.982980 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-03-25 05:59:23.982990 | orchestrator | Wednesday 25 March 2026 05:59:16 +0000 (0:00:01.133) 0:51:33.823 ******* 2026-03-25 05:59:23.983001 | orchestrator | ok: [testbed-node-4] 2026-03-25 05:59:23.983020 | orchestrator | 2026-03-25 05:59:23.983032 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-03-25 05:59:23.983042 | orchestrator | Wednesday 25 March 2026 05:59:18 +0000 (0:00:01.441) 0:51:35.265 ******* 2026-03-25 05:59:23.983053 | orchestrator | ok: [testbed-node-4] 2026-03-25 05:59:23.983064 | orchestrator | 2026-03-25 05:59:23.983074 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-03-25 05:59:23.983085 | orchestrator | Wednesday 25 March 2026 05:59:19 +0000 (0:00:01.158) 0:51:36.424 ******* 2026-03-25 05:59:23.983096 | orchestrator | ok: [testbed-node-4] 2026-03-25 05:59:23.983107 | orchestrator | 2026-03-25 05:59:23.983117 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-03-25 05:59:23.983128 | orchestrator | Wednesday 25 
March 2026 05:59:20 +0000 (0:00:01.158) 0:51:37.583 ******* 2026-03-25 05:59:23.983139 | orchestrator | ok: [testbed-node-4] 2026-03-25 05:59:23.983150 | orchestrator | 2026-03-25 05:59:23.983160 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-03-25 05:59:23.983171 | orchestrator | Wednesday 25 March 2026 05:59:21 +0000 (0:00:01.134) 0:51:38.717 ******* 2026-03-25 05:59:23.983182 | orchestrator | skipping: [testbed-node-4] 2026-03-25 05:59:23.983193 | orchestrator | 2026-03-25 05:59:23.983204 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-03-25 05:59:23.983215 | orchestrator | Wednesday 25 March 2026 05:59:22 +0000 (0:00:01.147) 0:51:39.864 ******* 2026-03-25 05:59:23.983225 | orchestrator | ok: [testbed-node-4] 2026-03-25 05:59:23.983236 | orchestrator | 2026-03-25 05:59:23.983253 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-03-25 05:59:49.404528 | orchestrator | Wednesday 25 March 2026 05:59:23 +0000 (0:00:01.122) 0:51:40.987 ******* 2026-03-25 05:59:49.404643 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-25 05:59:49.404660 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-25 05:59:49.404672 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-25 05:59:49.404683 | orchestrator | 2026-03-25 05:59:49.404712 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-03-25 05:59:49.404723 | orchestrator | Wednesday 25 March 2026 05:59:26 +0000 (0:00:02.048) 0:51:43.035 ******* 2026-03-25 05:59:49.404734 | orchestrator | ok: [testbed-node-4] 2026-03-25 05:59:49.404746 | orchestrator | 2026-03-25 05:59:49.404757 | orchestrator | TASK [ceph-facts : Find a running mon container] 
******************************* 2026-03-25 05:59:49.404768 | orchestrator | Wednesday 25 March 2026 05:59:27 +0000 (0:00:01.419) 0:51:44.455 ******* 2026-03-25 05:59:49.404779 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-25 05:59:49.404790 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-25 05:59:49.404801 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-25 05:59:49.404811 | orchestrator | 2026-03-25 05:59:49.404822 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-03-25 05:59:49.404833 | orchestrator | Wednesday 25 March 2026 05:59:30 +0000 (0:00:03.235) 0:51:47.690 ******* 2026-03-25 05:59:49.404844 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-03-25 05:59:49.404939 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-03-25 05:59:49.404952 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-03-25 05:59:49.404963 | orchestrator | skipping: [testbed-node-4] 2026-03-25 05:59:49.404974 | orchestrator | 2026-03-25 05:59:49.404985 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-03-25 05:59:49.404996 | orchestrator | Wednesday 25 March 2026 05:59:32 +0000 (0:00:01.848) 0:51:49.539 ******* 2026-03-25 05:59:49.405009 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-03-25 05:59:49.405047 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 
'item'})  2026-03-25 05:59:49.405059 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-03-25 05:59:49.405070 | orchestrator | skipping: [testbed-node-4] 2026-03-25 05:59:49.405081 | orchestrator | 2026-03-25 05:59:49.405092 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-03-25 05:59:49.405103 | orchestrator | Wednesday 25 March 2026 05:59:34 +0000 (0:00:01.643) 0:51:51.182 ******* 2026-03-25 05:59:49.405116 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-25 05:59:49.405130 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-25 05:59:49.405142 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 
'item'})  2026-03-25 05:59:49.405153 | orchestrator | skipping: [testbed-node-4] 2026-03-25 05:59:49.405164 | orchestrator | 2026-03-25 05:59:49.405175 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-03-25 05:59:49.405187 | orchestrator | Wednesday 25 March 2026 05:59:35 +0000 (0:00:01.243) 0:51:52.426 ******* 2026-03-25 05:59:49.405225 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': 'f2f4f0f2e000', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-25 05:59:28.324144', 'end': '2026-03-25 05:59:28.373923', 'delta': '0:00:00.049779', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['f2f4f0f2e000'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-03-25 05:59:49.405241 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': '04618a84c691', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-25 05:59:28.885973', 'end': '2026-03-25 05:59:28.922545', 'delta': '0:00:00.036572', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['04618a84c691'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-03-25 05:59:49.405261 | orchestrator | ok: 
[testbed-node-4] => (item={'changed': False, 'stdout': 'da72f46e99c2', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-25 05:59:29.429789', 'end': '2026-03-25 05:59:29.478477', 'delta': '0:00:00.048688', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['da72f46e99c2'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-03-25 05:59:49.405272 | orchestrator | 2026-03-25 05:59:49.405283 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-03-25 05:59:49.405294 | orchestrator | Wednesday 25 March 2026 05:59:36 +0000 (0:00:01.233) 0:51:53.659 ******* 2026-03-25 05:59:49.405305 | orchestrator | ok: [testbed-node-4] 2026-03-25 05:59:49.405316 | orchestrator | 2026-03-25 05:59:49.405327 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-03-25 05:59:49.405338 | orchestrator | Wednesday 25 March 2026 05:59:37 +0000 (0:00:01.246) 0:51:54.906 ******* 2026-03-25 05:59:49.405349 | orchestrator | skipping: [testbed-node-4] 2026-03-25 05:59:49.405359 | orchestrator | 2026-03-25 05:59:49.405370 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-03-25 05:59:49.405381 | orchestrator | Wednesday 25 March 2026 05:59:39 +0000 (0:00:01.252) 0:51:56.159 ******* 2026-03-25 05:59:49.405391 | orchestrator | ok: [testbed-node-4] 2026-03-25 05:59:49.405402 | orchestrator | 2026-03-25 05:59:49.405413 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-03-25 05:59:49.405424 | 
orchestrator | Wednesday 25 March 2026 05:59:40 +0000 (0:00:01.187) 0:51:57.347 ******* 2026-03-25 05:59:49.405434 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-03-25 05:59:49.405445 | orchestrator | 2026-03-25 05:59:49.405456 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-25 05:59:49.405466 | orchestrator | Wednesday 25 March 2026 05:59:42 +0000 (0:00:01.990) 0:51:59.337 ******* 2026-03-25 05:59:49.405477 | orchestrator | ok: [testbed-node-4] 2026-03-25 05:59:49.405488 | orchestrator | 2026-03-25 05:59:49.405499 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-03-25 05:59:49.405510 | orchestrator | Wednesday 25 March 2026 05:59:43 +0000 (0:00:01.140) 0:52:00.478 ******* 2026-03-25 05:59:49.405520 | orchestrator | skipping: [testbed-node-4] 2026-03-25 05:59:49.405531 | orchestrator | 2026-03-25 05:59:49.405542 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-03-25 05:59:49.405553 | orchestrator | Wednesday 25 March 2026 05:59:44 +0000 (0:00:01.138) 0:52:01.617 ******* 2026-03-25 05:59:49.405564 | orchestrator | skipping: [testbed-node-4] 2026-03-25 05:59:49.405575 | orchestrator | 2026-03-25 05:59:49.405585 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-25 05:59:49.405596 | orchestrator | Wednesday 25 March 2026 05:59:45 +0000 (0:00:01.254) 0:52:02.871 ******* 2026-03-25 05:59:49.405607 | orchestrator | skipping: [testbed-node-4] 2026-03-25 05:59:49.405618 | orchestrator | 2026-03-25 05:59:49.405628 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-03-25 05:59:49.405639 | orchestrator | Wednesday 25 March 2026 05:59:46 +0000 (0:00:01.109) 0:52:03.981 ******* 2026-03-25 05:59:49.405650 | orchestrator | skipping: [testbed-node-4] 2026-03-25 05:59:49.405661 | 
orchestrator | 2026-03-25 05:59:49.405671 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-03-25 05:59:49.405682 | orchestrator | Wednesday 25 March 2026 05:59:48 +0000 (0:00:01.122) 0:52:05.104 ******* 2026-03-25 05:59:49.405700 | orchestrator | ok: [testbed-node-4] 2026-03-25 05:59:54.321395 | orchestrator | 2026-03-25 05:59:54.321497 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-03-25 05:59:54.321514 | orchestrator | Wednesday 25 March 2026 05:59:49 +0000 (0:00:01.306) 0:52:06.410 ******* 2026-03-25 05:59:54.321526 | orchestrator | skipping: [testbed-node-4] 2026-03-25 05:59:54.321538 | orchestrator | 2026-03-25 05:59:54.321549 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-03-25 05:59:54.321577 | orchestrator | Wednesday 25 March 2026 05:59:50 +0000 (0:00:01.208) 0:52:07.618 ******* 2026-03-25 05:59:54.321589 | orchestrator | ok: [testbed-node-4] 2026-03-25 05:59:54.321601 | orchestrator | 2026-03-25 05:59:54.321612 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-03-25 05:59:54.321623 | orchestrator | Wednesday 25 March 2026 05:59:51 +0000 (0:00:01.188) 0:52:08.808 ******* 2026-03-25 05:59:54.321634 | orchestrator | skipping: [testbed-node-4] 2026-03-25 05:59:54.321645 | orchestrator | 2026-03-25 05:59:54.321656 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-03-25 05:59:54.321668 | orchestrator | Wednesday 25 March 2026 05:59:52 +0000 (0:00:01.128) 0:52:09.936 ******* 2026-03-25 05:59:54.321679 | orchestrator | ok: [testbed-node-4] 2026-03-25 05:59:54.321689 | orchestrator | 2026-03-25 05:59:54.321700 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-03-25 05:59:54.321711 | orchestrator | Wednesday 25 March 2026 05:59:54 +0000 
(0:00:01.165) 0:52:11.101 ******* 2026-03-25 05:59:54.321725 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-25 05:59:54.321742 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--fa1f2bca--96f4--5f59--9dac--c3efdd146138-osd--block--fa1f2bca--96f4--5f59--9dac--c3efdd146138', 'dm-uuid-LVM-qi80GQE6Tcg1H1Qaou1HQKIw0Y18K2MMiRtObCOmMljlX3NyraHv57elKkc4U5Oq'], 'uuids': ['1a1bfadf-e219-47e2-8705-0963963507ec'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '37f05188', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['iRtObC-OmMl-jlX3-Nyra-Hv57-elKk-c4U5Oq']}})  2026-03-25 05:59:54.321756 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3e1f7d9f-c106-4693-b0da-d762a5de4a11', 'scsi-SQEMU_QEMU_HARDDISK_3e1f7d9f-c106-4693-b0da-d762a5de4a11'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '3e1f7d9f', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-03-25 05:59:54.321769 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-CIqKvA-lt1d-4qQz-KNts-krwk-yQ0u-1PHslV', 'scsi-0QEMU_QEMU_HARDDISK_10d736b4-dcf8-42aa-aae6-a1381d72468f', 'scsi-SQEMU_QEMU_HARDDISK_10d736b4-dcf8-42aa-aae6-a1381d72468f'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '10d736b4', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--82366886--ea97--5dba--b5cd--187414e0593f-osd--block--82366886--ea97--5dba--b5cd--187414e0593f']}})  2026-03-25 05:59:54.321803 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-25 05:59:54.321833 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-25 05:59:54.321881 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-25-01-43-06-00'], 'labels': 
['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-03-25 05:59:54.321895 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-25 05:59:54.321906 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-X0sqLU-d6id-Xl2r-npkf-AOrM-ye3X-xtdnqp', 'dm-uuid-CRYPT-LUKS2-d0a28742b6dc46aab152442a6244f51b-X0sqLU-d6id-Xl2r-npkf-AOrM-ye3X-xtdnqp'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-03-25 05:59:54.321917 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-25 05:59:54.321930 | orchestrator | skipping: [testbed-node-4] => 
(item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--82366886--ea97--5dba--b5cd--187414e0593f-osd--block--82366886--ea97--5dba--b5cd--187414e0593f', 'dm-uuid-LVM-1B6VDGPSmmjj7HLdTGtTln0UtIEd11ZxX0sqLUd6idXl2rnpkfAOrMye3Xxtdnqp'], 'uuids': ['d0a28742-b6dc-46aa-b152-442a6244f51b'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '10d736b4', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['X0sqLU-d6id-Xl2r-npkf-AOrM-ye3X-xtdnqp']}})  2026-03-25 05:59:54.321944 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-d5kG3K-9osj-2aIh-xjKb-72Hm-d5Wn-f2zH7s', 'scsi-0QEMU_QEMU_HARDDISK_37f05188-2a00-44e2-a0b8-7549f9da5347', 'scsi-SQEMU_QEMU_HARDDISK_37f05188-2a00-44e2-a0b8-7549f9da5347'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '37f05188', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--fa1f2bca--96f4--5f59--9dac--c3efdd146138-osd--block--fa1f2bca--96f4--5f59--9dac--c3efdd146138']}})  2026-03-25 05:59:54.321973 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-25 05:59:55.699938 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6cb51c54-ae34-41ee-aa7a-55f1cdeeb529', 'scsi-SQEMU_QEMU_HARDDISK_6cb51c54-ae34-41ee-aa7a-55f1cdeeb529'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '6cb51c54', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6cb51c54-ae34-41ee-aa7a-55f1cdeeb529-part16', 'scsi-SQEMU_QEMU_HARDDISK_6cb51c54-ae34-41ee-aa7a-55f1cdeeb529-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6cb51c54-ae34-41ee-aa7a-55f1cdeeb529-part14', 'scsi-SQEMU_QEMU_HARDDISK_6cb51c54-ae34-41ee-aa7a-55f1cdeeb529-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6cb51c54-ae34-41ee-aa7a-55f1cdeeb529-part15', 'scsi-SQEMU_QEMU_HARDDISK_6cb51c54-ae34-41ee-aa7a-55f1cdeeb529-part15'], 'uuids': ['5C78-612A'], 
'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6cb51c54-ae34-41ee-aa7a-55f1cdeeb529-part1', 'scsi-SQEMU_QEMU_HARDDISK_6cb51c54-ae34-41ee-aa7a-55f1cdeeb529-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-03-25 05:59:55.700052 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-25 05:59:55.700071 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-25 05:59:55.700108 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-iRtObC-OmMl-jlX3-Nyra-Hv57-elKk-c4U5Oq', 'dm-uuid-CRYPT-LUKS2-1a1bfadfe21947e287050963963507ec-iRtObC-OmMl-jlX3-Nyra-Hv57-elKk-c4U5Oq'], 'uuids': [], 'labels': [], 'masters': 
[]}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-03-25 05:59:55.700122 | orchestrator | skipping: [testbed-node-4] 2026-03-25 05:59:55.700136 | orchestrator | 2026-03-25 05:59:55.700148 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-03-25 05:59:55.700160 | orchestrator | Wednesday 25 March 2026 05:59:55 +0000 (0:00:01.370) 0:52:12.472 ******* 2026-03-25 05:59:55.700203 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 05:59:55.700227 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--fa1f2bca--96f4--5f59--9dac--c3efdd146138-osd--block--fa1f2bca--96f4--5f59--9dac--c3efdd146138', 'dm-uuid-LVM-qi80GQE6Tcg1H1Qaou1HQKIw0Y18K2MMiRtObCOmMljlX3NyraHv57elKkc4U5Oq'], 'uuids': ['1a1bfadf-e219-47e2-8705-0963963507ec'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '37f05188', 'removable': '0', 'support_discard': 
'4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['iRtObC-OmMl-jlX3-Nyra-Hv57-elKk-c4U5Oq']}}, 'ansible_loop_var': 'item'})  2026-03-25 05:59:55.700248 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3e1f7d9f-c106-4693-b0da-d762a5de4a11', 'scsi-SQEMU_QEMU_HARDDISK_3e1f7d9f-c106-4693-b0da-d762a5de4a11'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '3e1f7d9f', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 05:59:55.700268 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-CIqKvA-lt1d-4qQz-KNts-krwk-yQ0u-1PHslV', 'scsi-0QEMU_QEMU_HARDDISK_10d736b4-dcf8-42aa-aae6-a1381d72468f', 'scsi-SQEMU_QEMU_HARDDISK_10d736b4-dcf8-42aa-aae6-a1381d72468f'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '10d736b4', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--82366886--ea97--5dba--b5cd--187414e0593f-osd--block--82366886--ea97--5dba--b5cd--187414e0593f']}}, 'ansible_loop_var': 'item'})  2026-03-25 05:59:55.700300 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 05:59:55.700330 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 05:59:56.905285 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-25-01-43-06-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 05:59:56.905362 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 05:59:56.905371 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-X0sqLU-d6id-Xl2r-npkf-AOrM-ye3X-xtdnqp', 'dm-uuid-CRYPT-LUKS2-d0a28742b6dc46aab152442a6244f51b-X0sqLU-d6id-Xl2r-npkf-AOrM-ye3X-xtdnqp'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 05:59:56.905377 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 05:59:56.905402 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--82366886--ea97--5dba--b5cd--187414e0593f-osd--block--82366886--ea97--5dba--b5cd--187414e0593f', 'dm-uuid-LVM-1B6VDGPSmmjj7HLdTGtTln0UtIEd11ZxX0sqLUd6idXl2rnpkfAOrMye3Xxtdnqp'], 'uuids': ['d0a28742-b6dc-46aa-b152-442a6244f51b'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '10d736b4', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['X0sqLU-d6id-Xl2r-npkf-AOrM-ye3X-xtdnqp']}}, 'ansible_loop_var': 'item'})  2026-03-25 05:59:56.905459 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-d5kG3K-9osj-2aIh-xjKb-72Hm-d5Wn-f2zH7s', 'scsi-0QEMU_QEMU_HARDDISK_37f05188-2a00-44e2-a0b8-7549f9da5347', 'scsi-SQEMU_QEMU_HARDDISK_37f05188-2a00-44e2-a0b8-7549f9da5347'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '37f05188', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--fa1f2bca--96f4--5f59--9dac--c3efdd146138-osd--block--fa1f2bca--96f4--5f59--9dac--c3efdd146138']}}, 'ansible_loop_var': 'item'})  2026-03-25 05:59:56.905470 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 05:59:56.905477 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6cb51c54-ae34-41ee-aa7a-55f1cdeeb529', 'scsi-SQEMU_QEMU_HARDDISK_6cb51c54-ae34-41ee-aa7a-55f1cdeeb529'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '6cb51c54', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6cb51c54-ae34-41ee-aa7a-55f1cdeeb529-part16', 'scsi-SQEMU_QEMU_HARDDISK_6cb51c54-ae34-41ee-aa7a-55f1cdeeb529-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_6cb51c54-ae34-41ee-aa7a-55f1cdeeb529-part14', 'scsi-SQEMU_QEMU_HARDDISK_6cb51c54-ae34-41ee-aa7a-55f1cdeeb529-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6cb51c54-ae34-41ee-aa7a-55f1cdeeb529-part15', 'scsi-SQEMU_QEMU_HARDDISK_6cb51c54-ae34-41ee-aa7a-55f1cdeeb529-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6cb51c54-ae34-41ee-aa7a-55f1cdeeb529-part1', 'scsi-SQEMU_QEMU_HARDDISK_6cb51c54-ae34-41ee-aa7a-55f1cdeeb529-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 05:59:56.905492 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 05:59:56.905506 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 06:00:32.325047 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-iRtObC-OmMl-jlX3-Nyra-Hv57-elKk-c4U5Oq', 'dm-uuid-CRYPT-LUKS2-1a1bfadfe21947e287050963963507ec-iRtObC-OmMl-jlX3-Nyra-Hv57-elKk-c4U5Oq'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 06:00:32.325166 | orchestrator | skipping: [testbed-node-4] 2026-03-25 06:00:32.325184 | orchestrator | 2026-03-25 06:00:32.325197 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-03-25 06:00:32.325210 | orchestrator | Wednesday 25 March 2026 05:59:56 +0000 (0:00:01.440) 0:52:13.913 ******* 2026-03-25 06:00:32.325221 | orchestrator | ok: [testbed-node-4] 2026-03-25 06:00:32.325233 | orchestrator | 2026-03-25 06:00:32.325244 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-03-25 06:00:32.325255 | orchestrator | Wednesday 25 March 2026 05:59:58 +0000 (0:00:01.577) 0:52:15.491 ******* 2026-03-25 06:00:32.325265 | orchestrator | ok: [testbed-node-4] 2026-03-25 06:00:32.325276 | orchestrator | 2026-03-25 06:00:32.325287 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-25 06:00:32.325322 | orchestrator | Wednesday 25 March 2026 05:59:59 +0000 (0:00:01.222) 0:52:16.714 ******* 2026-03-25 06:00:32.325335 | orchestrator | ok: [testbed-node-4] 2026-03-25 06:00:32.325346 | orchestrator | 2026-03-25 06:00:32.325356 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-25 06:00:32.325367 | orchestrator | Wednesday 25 March 2026 06:00:01 +0000 (0:00:01.519) 0:52:18.233 ******* 2026-03-25 06:00:32.325378 | orchestrator | skipping: [testbed-node-4] 2026-03-25 06:00:32.325389 | orchestrator | 2026-03-25 06:00:32.325400 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-25 06:00:32.325508 | orchestrator | Wednesday 25 March 2026 06:00:02 +0000 (0:00:01.175) 0:52:19.408 ******* 2026-03-25 06:00:32.325524 | orchestrator | skipping: [testbed-node-4] 2026-03-25 
06:00:32.325535 | orchestrator | 2026-03-25 06:00:32.325546 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-25 06:00:32.325557 | orchestrator | Wednesday 25 March 2026 06:00:03 +0000 (0:00:01.280) 0:52:20.689 ******* 2026-03-25 06:00:32.325569 | orchestrator | skipping: [testbed-node-4] 2026-03-25 06:00:32.325582 | orchestrator | 2026-03-25 06:00:32.325594 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-03-25 06:00:32.325607 | orchestrator | Wednesday 25 March 2026 06:00:04 +0000 (0:00:01.141) 0:52:21.831 ******* 2026-03-25 06:00:32.325620 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2026-03-25 06:00:32.325633 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2026-03-25 06:00:32.325645 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-03-25 06:00:32.325658 | orchestrator | 2026-03-25 06:00:32.325671 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-03-25 06:00:32.325683 | orchestrator | Wednesday 25 March 2026 06:00:07 +0000 (0:00:02.229) 0:52:24.061 ******* 2026-03-25 06:00:32.325696 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-03-25 06:00:32.325708 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-03-25 06:00:32.325722 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-03-25 06:00:32.325734 | orchestrator | skipping: [testbed-node-4] 2026-03-25 06:00:32.325746 | orchestrator | 2026-03-25 06:00:32.325758 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-03-25 06:00:32.325771 | orchestrator | Wednesday 25 March 2026 06:00:08 +0000 (0:00:01.165) 0:52:25.227 ******* 2026-03-25 06:00:32.325783 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-4 2026-03-25 06:00:32.325796 | 
orchestrator | 2026-03-25 06:00:32.325809 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-25 06:00:32.325823 | orchestrator | Wednesday 25 March 2026 06:00:09 +0000 (0:00:01.148) 0:52:26.375 ******* 2026-03-25 06:00:32.325866 | orchestrator | skipping: [testbed-node-4] 2026-03-25 06:00:32.325881 | orchestrator | 2026-03-25 06:00:32.325893 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-03-25 06:00:32.325905 | orchestrator | Wednesday 25 March 2026 06:00:10 +0000 (0:00:01.198) 0:52:27.574 ******* 2026-03-25 06:00:32.325918 | orchestrator | skipping: [testbed-node-4] 2026-03-25 06:00:32.325930 | orchestrator | 2026-03-25 06:00:32.325941 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-03-25 06:00:32.325967 | orchestrator | Wednesday 25 March 2026 06:00:11 +0000 (0:00:01.165) 0:52:28.739 ******* 2026-03-25 06:00:32.325979 | orchestrator | skipping: [testbed-node-4] 2026-03-25 06:00:32.325989 | orchestrator | 2026-03-25 06:00:32.326000 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-25 06:00:32.326011 | orchestrator | Wednesday 25 March 2026 06:00:12 +0000 (0:00:01.156) 0:52:29.895 ******* 2026-03-25 06:00:32.326086 | orchestrator | ok: [testbed-node-4] 2026-03-25 06:00:32.326097 | orchestrator | 2026-03-25 06:00:32.326108 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-25 06:00:32.326129 | orchestrator | Wednesday 25 March 2026 06:00:14 +0000 (0:00:01.250) 0:52:31.146 ******* 2026-03-25 06:00:32.326140 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-03-25 06:00:32.326172 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-03-25 06:00:32.326183 | orchestrator | skipping: [testbed-node-4] 
=> (item=testbed-node-5)  2026-03-25 06:00:32.326194 | orchestrator | skipping: [testbed-node-4] 2026-03-25 06:00:32.326205 | orchestrator | 2026-03-25 06:00:32.326216 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-25 06:00:32.326226 | orchestrator | Wednesday 25 March 2026 06:00:15 +0000 (0:00:01.443) 0:52:32.590 ******* 2026-03-25 06:00:32.326237 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-03-25 06:00:32.326248 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-03-25 06:00:32.326258 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-03-25 06:00:32.326269 | orchestrator | skipping: [testbed-node-4] 2026-03-25 06:00:32.326280 | orchestrator | 2026-03-25 06:00:32.326291 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-25 06:00:32.326301 | orchestrator | Wednesday 25 March 2026 06:00:16 +0000 (0:00:01.401) 0:52:33.991 ******* 2026-03-25 06:00:32.326312 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-03-25 06:00:32.326323 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-03-25 06:00:32.326333 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-03-25 06:00:32.326344 | orchestrator | skipping: [testbed-node-4] 2026-03-25 06:00:32.326355 | orchestrator | 2026-03-25 06:00:32.326366 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-25 06:00:32.326376 | orchestrator | Wednesday 25 March 2026 06:00:18 +0000 (0:00:01.477) 0:52:35.469 ******* 2026-03-25 06:00:32.326387 | orchestrator | ok: [testbed-node-4] 2026-03-25 06:00:32.326398 | orchestrator | 2026-03-25 06:00:32.326408 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-25 06:00:32.326419 | orchestrator | Wednesday 25 March 2026 06:00:19 +0000 
(0:00:01.138) 0:52:36.607 ******* 2026-03-25 06:00:32.326430 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-03-25 06:00:32.326441 | orchestrator | 2026-03-25 06:00:32.326452 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-03-25 06:00:32.326462 | orchestrator | Wednesday 25 March 2026 06:00:21 +0000 (0:00:01.713) 0:52:38.321 ******* 2026-03-25 06:00:32.326473 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-25 06:00:32.326484 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-25 06:00:32.326495 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-25 06:00:32.326506 | orchestrator | ok: [testbed-node-4 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-03-25 06:00:32.326517 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-4) 2026-03-25 06:00:32.326527 | orchestrator | ok: [testbed-node-4 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-25 06:00:32.326538 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-25 06:00:32.326549 | orchestrator | 2026-03-25 06:00:32.326560 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-03-25 06:00:32.326571 | orchestrator | Wednesday 25 March 2026 06:00:23 +0000 (0:00:02.295) 0:52:40.617 ******* 2026-03-25 06:00:32.326581 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-25 06:00:32.326592 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-25 06:00:32.326603 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-25 06:00:32.326614 | orchestrator | ok: [testbed-node-4 -> testbed-node-3(192.168.16.13)] => 
(item=testbed-node-3) 2026-03-25 06:00:32.326624 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-4) 2026-03-25 06:00:32.326643 | orchestrator | ok: [testbed-node-4 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-25 06:00:32.326654 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-25 06:00:32.326665 | orchestrator | 2026-03-25 06:00:32.326676 | orchestrator | TASK [Prevent restart from the packaging] ************************************** 2026-03-25 06:00:32.326687 | orchestrator | Wednesday 25 March 2026 06:00:26 +0000 (0:00:02.647) 0:52:43.265 ******* 2026-03-25 06:00:32.326698 | orchestrator | skipping: [testbed-node-4] 2026-03-25 06:00:32.326708 | orchestrator | 2026-03-25 06:00:32.326719 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-25 06:00:32.326730 | orchestrator | Wednesday 25 March 2026 06:00:27 +0000 (0:00:01.109) 0:52:44.374 ******* 2026-03-25 06:00:32.326741 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-4 2026-03-25 06:00:32.326752 | orchestrator | 2026-03-25 06:00:32.326763 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-25 06:00:32.326773 | orchestrator | Wednesday 25 March 2026 06:00:28 +0000 (0:00:01.145) 0:52:45.520 ******* 2026-03-25 06:00:32.326790 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-4 2026-03-25 06:00:32.326802 | orchestrator | 2026-03-25 06:00:32.326813 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-25 06:00:32.326823 | orchestrator | Wednesday 25 March 2026 06:00:29 +0000 (0:00:01.149) 0:52:46.670 ******* 2026-03-25 06:00:32.326834 | orchestrator | skipping: [testbed-node-4] 2026-03-25 06:00:32.326860 | orchestrator | 2026-03-25 06:00:32.326871 | orchestrator 
| TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-25 06:00:32.326882 | orchestrator | Wednesday 25 March 2026 06:00:30 +0000 (0:00:01.163) 0:52:47.833 ******* 2026-03-25 06:00:32.326893 | orchestrator | ok: [testbed-node-4] 2026-03-25 06:00:32.326904 | orchestrator | 2026-03-25 06:00:32.326922 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-25 06:00:32.326949 | orchestrator | Wednesday 25 March 2026 06:00:32 +0000 (0:00:01.494) 0:52:49.328 ******* 2026-03-25 06:01:23.630511 | orchestrator | ok: [testbed-node-4] 2026-03-25 06:01:23.630632 | orchestrator | 2026-03-25 06:01:23.630651 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-25 06:01:23.630664 | orchestrator | Wednesday 25 March 2026 06:00:33 +0000 (0:00:01.511) 0:52:50.840 ******* 2026-03-25 06:01:23.630676 | orchestrator | ok: [testbed-node-4] 2026-03-25 06:01:23.630687 | orchestrator | 2026-03-25 06:01:23.630698 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-25 06:01:23.630709 | orchestrator | Wednesday 25 March 2026 06:00:35 +0000 (0:00:01.529) 0:52:52.369 ******* 2026-03-25 06:01:23.630720 | orchestrator | skipping: [testbed-node-4] 2026-03-25 06:01:23.630731 | orchestrator | 2026-03-25 06:01:23.630742 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-25 06:01:23.630753 | orchestrator | Wednesday 25 March 2026 06:00:36 +0000 (0:00:01.110) 0:52:53.480 ******* 2026-03-25 06:01:23.630764 | orchestrator | skipping: [testbed-node-4] 2026-03-25 06:01:23.630774 | orchestrator | 2026-03-25 06:01:23.630785 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-25 06:01:23.630928 | orchestrator | Wednesday 25 March 2026 06:00:37 +0000 (0:00:01.127) 0:52:54.607 ******* 2026-03-25 06:01:23.630940 | 
orchestrator | skipping: [testbed-node-4] 2026-03-25 06:01:23.630951 | orchestrator | 2026-03-25 06:01:23.630962 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-25 06:01:23.630974 | orchestrator | Wednesday 25 March 2026 06:00:38 +0000 (0:00:01.210) 0:52:55.818 ******* 2026-03-25 06:01:23.630985 | orchestrator | ok: [testbed-node-4] 2026-03-25 06:01:23.630995 | orchestrator | 2026-03-25 06:01:23.631006 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-25 06:01:23.631017 | orchestrator | Wednesday 25 March 2026 06:00:40 +0000 (0:00:01.551) 0:52:57.369 ******* 2026-03-25 06:01:23.631052 | orchestrator | ok: [testbed-node-4] 2026-03-25 06:01:23.631066 | orchestrator | 2026-03-25 06:01:23.631079 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-25 06:01:23.631092 | orchestrator | Wednesday 25 March 2026 06:00:41 +0000 (0:00:01.520) 0:52:58.890 ******* 2026-03-25 06:01:23.631104 | orchestrator | skipping: [testbed-node-4] 2026-03-25 06:01:23.631117 | orchestrator | 2026-03-25 06:01:23.631129 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-25 06:01:23.631142 | orchestrator | Wednesday 25 March 2026 06:00:42 +0000 (0:00:01.112) 0:53:00.003 ******* 2026-03-25 06:01:23.631154 | orchestrator | skipping: [testbed-node-4] 2026-03-25 06:01:23.631168 | orchestrator | 2026-03-25 06:01:23.631180 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-25 06:01:23.631192 | orchestrator | Wednesday 25 March 2026 06:00:44 +0000 (0:00:01.173) 0:53:01.176 ******* 2026-03-25 06:01:23.631204 | orchestrator | ok: [testbed-node-4] 2026-03-25 06:01:23.631216 | orchestrator | 2026-03-25 06:01:23.631229 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-25 
06:01:23.631242 | orchestrator | Wednesday 25 March 2026 06:00:45 +0000 (0:00:01.170) 0:53:02.347 ******* 2026-03-25 06:01:23.631254 | orchestrator | ok: [testbed-node-4] 2026-03-25 06:01:23.631266 | orchestrator | 2026-03-25 06:01:23.631278 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-25 06:01:23.631290 | orchestrator | Wednesday 25 March 2026 06:00:46 +0000 (0:00:01.177) 0:53:03.525 ******* 2026-03-25 06:01:23.631301 | orchestrator | ok: [testbed-node-4] 2026-03-25 06:01:23.631311 | orchestrator | 2026-03-25 06:01:23.631322 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-25 06:01:23.631333 | orchestrator | Wednesday 25 March 2026 06:00:47 +0000 (0:00:01.205) 0:53:04.730 ******* 2026-03-25 06:01:23.631343 | orchestrator | skipping: [testbed-node-4] 2026-03-25 06:01:23.631354 | orchestrator | 2026-03-25 06:01:23.631365 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-25 06:01:23.631376 | orchestrator | Wednesday 25 March 2026 06:00:48 +0000 (0:00:01.099) 0:53:05.830 ******* 2026-03-25 06:01:23.631387 | orchestrator | skipping: [testbed-node-4] 2026-03-25 06:01:23.631398 | orchestrator | 2026-03-25 06:01:23.631409 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-25 06:01:23.631420 | orchestrator | Wednesday 25 March 2026 06:00:49 +0000 (0:00:01.128) 0:53:06.958 ******* 2026-03-25 06:01:23.631430 | orchestrator | skipping: [testbed-node-4] 2026-03-25 06:01:23.631441 | orchestrator | 2026-03-25 06:01:23.631451 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-25 06:01:23.631462 | orchestrator | Wednesday 25 March 2026 06:00:51 +0000 (0:00:01.233) 0:53:08.192 ******* 2026-03-25 06:01:23.631473 | orchestrator | ok: [testbed-node-4] 2026-03-25 06:01:23.631483 | orchestrator | 2026-03-25 
06:01:23.631494 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-25 06:01:23.631504 | orchestrator | Wednesday 25 March 2026 06:00:52 +0000 (0:00:01.149) 0:53:09.342 ******* 2026-03-25 06:01:23.631515 | orchestrator | ok: [testbed-node-4] 2026-03-25 06:01:23.631525 | orchestrator | 2026-03-25 06:01:23.631536 | orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-03-25 06:01:23.631547 | orchestrator | Wednesday 25 March 2026 06:00:53 +0000 (0:00:01.297) 0:53:10.640 ******* 2026-03-25 06:01:23.631557 | orchestrator | skipping: [testbed-node-4] 2026-03-25 06:01:23.631568 | orchestrator | 2026-03-25 06:01:23.631579 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-03-25 06:01:23.631604 | orchestrator | Wednesday 25 March 2026 06:00:54 +0000 (0:00:01.127) 0:53:11.768 ******* 2026-03-25 06:01:23.631616 | orchestrator | skipping: [testbed-node-4] 2026-03-25 06:01:23.631626 | orchestrator | 2026-03-25 06:01:23.631637 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-03-25 06:01:23.631648 | orchestrator | Wednesday 25 March 2026 06:00:55 +0000 (0:00:01.169) 0:53:12.937 ******* 2026-03-25 06:01:23.631667 | orchestrator | skipping: [testbed-node-4] 2026-03-25 06:01:23.631678 | orchestrator | 2026-03-25 06:01:23.631688 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-03-25 06:01:23.631699 | orchestrator | Wednesday 25 March 2026 06:00:57 +0000 (0:00:01.118) 0:53:14.056 ******* 2026-03-25 06:01:23.631710 | orchestrator | skipping: [testbed-node-4] 2026-03-25 06:01:23.631720 | orchestrator | 2026-03-25 06:01:23.631731 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-03-25 06:01:23.631760 | orchestrator | Wednesday 25 March 2026 06:00:58 +0000 (0:00:01.104) 
0:53:15.161 ******* 2026-03-25 06:01:23.631772 | orchestrator | skipping: [testbed-node-4] 2026-03-25 06:01:23.631783 | orchestrator | 2026-03-25 06:01:23.631794 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-03-25 06:01:23.631804 | orchestrator | Wednesday 25 March 2026 06:00:59 +0000 (0:00:01.155) 0:53:16.316 ******* 2026-03-25 06:01:23.631815 | orchestrator | skipping: [testbed-node-4] 2026-03-25 06:01:23.631830 | orchestrator | 2026-03-25 06:01:23.631876 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-03-25 06:01:23.631896 | orchestrator | Wednesday 25 March 2026 06:01:00 +0000 (0:00:01.103) 0:53:17.420 ******* 2026-03-25 06:01:23.631907 | orchestrator | skipping: [testbed-node-4] 2026-03-25 06:01:23.631917 | orchestrator | 2026-03-25 06:01:23.631928 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-03-25 06:01:23.631939 | orchestrator | Wednesday 25 March 2026 06:01:01 +0000 (0:00:01.090) 0:53:18.511 ******* 2026-03-25 06:01:23.631950 | orchestrator | skipping: [testbed-node-4] 2026-03-25 06:01:23.631960 | orchestrator | 2026-03-25 06:01:23.631971 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-03-25 06:01:23.631981 | orchestrator | Wednesday 25 March 2026 06:01:02 +0000 (0:00:01.179) 0:53:19.690 ******* 2026-03-25 06:01:23.631992 | orchestrator | skipping: [testbed-node-4] 2026-03-25 06:01:23.632002 | orchestrator | 2026-03-25 06:01:23.632013 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-03-25 06:01:23.632023 | orchestrator | Wednesday 25 March 2026 06:01:03 +0000 (0:00:01.155) 0:53:20.846 ******* 2026-03-25 06:01:23.632034 | orchestrator | skipping: [testbed-node-4] 2026-03-25 06:01:23.632045 | orchestrator | 2026-03-25 06:01:23.632055 | orchestrator | TASK [ceph-common : 
Include configure_memory_allocator.yml] ******************** 2026-03-25 06:01:23.632066 | orchestrator | Wednesday 25 March 2026 06:01:04 +0000 (0:00:01.137) 0:53:21.983 ******* 2026-03-25 06:01:23.632077 | orchestrator | skipping: [testbed-node-4] 2026-03-25 06:01:23.632087 | orchestrator | 2026-03-25 06:01:23.632097 | orchestrator | TASK [ceph-common : Include selinux.yml] *************************************** 2026-03-25 06:01:23.632108 | orchestrator | Wednesday 25 March 2026 06:01:06 +0000 (0:00:01.286) 0:53:23.269 ******* 2026-03-25 06:01:23.632119 | orchestrator | skipping: [testbed-node-4] 2026-03-25 06:01:23.632129 | orchestrator | 2026-03-25 06:01:23.632139 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-03-25 06:01:23.632150 | orchestrator | Wednesday 25 March 2026 06:01:07 +0000 (0:00:01.308) 0:53:24.578 ******* 2026-03-25 06:01:23.632160 | orchestrator | ok: [testbed-node-4] 2026-03-25 06:01:23.632171 | orchestrator | 2026-03-25 06:01:23.632181 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-03-25 06:01:23.632192 | orchestrator | Wednesday 25 March 2026 06:01:09 +0000 (0:00:02.096) 0:53:26.674 ******* 2026-03-25 06:01:23.632202 | orchestrator | ok: [testbed-node-4] 2026-03-25 06:01:23.632213 | orchestrator | 2026-03-25 06:01:23.632223 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-03-25 06:01:23.632234 | orchestrator | Wednesday 25 March 2026 06:01:12 +0000 (0:00:02.347) 0:53:29.022 ******* 2026-03-25 06:01:23.632244 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-4 2026-03-25 06:01:23.632256 | orchestrator | 2026-03-25 06:01:23.632267 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-03-25 06:01:23.632285 | orchestrator | Wednesday 25 March 2026 06:01:13 +0000 (0:00:01.148) 
0:53:30.171 *******
2026-03-25 06:01:23.632296 | orchestrator | skipping: [testbed-node-4]
2026-03-25 06:01:23.632306 | orchestrator |
2026-03-25 06:01:23.632317 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-03-25 06:01:23.632327 | orchestrator | Wednesday 25 March 2026 06:01:14 +0000 (0:00:01.153) 0:53:31.324 *******
2026-03-25 06:01:23.632338 | orchestrator | skipping: [testbed-node-4]
2026-03-25 06:01:23.632349 | orchestrator |
2026-03-25 06:01:23.632359 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-03-25 06:01:23.632370 | orchestrator | Wednesday 25 March 2026 06:01:15 +0000 (0:00:01.213) 0:53:32.537 *******
2026-03-25 06:01:23.632380 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-25 06:01:23.632391 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-25 06:01:23.632402 | orchestrator |
2026-03-25 06:01:23.632412 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-03-25 06:01:23.632423 | orchestrator | Wednesday 25 March 2026 06:01:17 +0000 (0:00:01.786) 0:53:34.324 *******
2026-03-25 06:01:23.632434 | orchestrator | ok: [testbed-node-4]
2026-03-25 06:01:23.632445 | orchestrator |
2026-03-25 06:01:23.632460 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-03-25 06:01:23.632478 | orchestrator | Wednesday 25 March 2026 06:01:18 +0000 (0:00:01.483) 0:53:35.808 *******
2026-03-25 06:01:23.632497 | orchestrator | skipping: [testbed-node-4]
2026-03-25 06:01:23.632508 | orchestrator |
2026-03-25 06:01:23.632519 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-03-25 06:01:23.632529 | orchestrator | Wednesday 25 March 2026 06:01:20 +0000 (0:00:01.227) 0:53:37.036 *******
2026-03-25 06:01:23.632540 | orchestrator | skipping: [testbed-node-4]
2026-03-25 06:01:23.632550 | orchestrator |
2026-03-25 06:01:23.632567 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-03-25 06:01:23.632578 | orchestrator | Wednesday 25 March 2026 06:01:21 +0000 (0:00:01.178) 0:53:38.214 *******
2026-03-25 06:01:23.632589 | orchestrator | skipping: [testbed-node-4]
2026-03-25 06:01:23.632599 | orchestrator |
2026-03-25 06:01:23.632610 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-03-25 06:01:23.632621 | orchestrator | Wednesday 25 March 2026 06:01:22 +0000 (0:00:01.146) 0:53:39.361 *******
2026-03-25 06:01:23.632631 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-4
2026-03-25 06:01:23.632642 | orchestrator |
2026-03-25 06:01:23.632652 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-03-25 06:01:23.632672 | orchestrator | Wednesday 25 March 2026 06:01:23 +0000 (0:00:01.272) 0:53:40.633 *******
2026-03-25 06:02:10.557559 | orchestrator | ok: [testbed-node-4]
2026-03-25 06:02:10.557701 | orchestrator |
2026-03-25 06:02:10.557721 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-03-25 06:02:10.557735 | orchestrator | Wednesday 25 March 2026 06:01:25 +0000 (0:00:01.752) 0:53:42.386 *******
2026-03-25 06:02:10.557747 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-03-25 06:02:10.557758 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)
2026-03-25 06:02:10.557770 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)
2026-03-25 06:02:10.557781 | orchestrator | skipping: [testbed-node-4]
2026-03-25 06:02:10.557793 | orchestrator |
2026-03-25 06:02:10.557804 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-03-25 06:02:10.557816 | orchestrator | Wednesday 25 March 2026 06:01:26 +0000 (0:00:01.197) 0:53:43.583 *******
2026-03-25 06:02:10.557826 | orchestrator | skipping: [testbed-node-4]
2026-03-25 06:02:10.557907 | orchestrator |
2026-03-25 06:02:10.557922 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-03-25 06:02:10.557933 | orchestrator | Wednesday 25 March 2026 06:01:27 +0000 (0:00:01.134) 0:53:44.718 *******
2026-03-25 06:02:10.557971 | orchestrator | skipping: [testbed-node-4]
2026-03-25 06:02:10.557982 | orchestrator |
2026-03-25 06:02:10.557993 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-03-25 06:02:10.558005 | orchestrator | Wednesday 25 March 2026 06:01:28 +0000 (0:00:01.174) 0:53:45.892 *******
2026-03-25 06:02:10.558068 | orchestrator | skipping: [testbed-node-4]
2026-03-25 06:02:10.558084 | orchestrator |
2026-03-25 06:02:10.558097 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-03-25 06:02:10.558121 | orchestrator | Wednesday 25 March 2026 06:01:29 +0000 (0:00:01.120) 0:53:47.012 *******
2026-03-25 06:02:10.558145 | orchestrator | skipping: [testbed-node-4]
2026-03-25 06:02:10.558158 | orchestrator |
2026-03-25 06:02:10.558171 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-03-25 06:02:10.558184 | orchestrator | Wednesday 25 March 2026 06:01:31 +0000 (0:00:01.175) 0:53:48.188 *******
2026-03-25 06:02:10.558196 | orchestrator | skipping: [testbed-node-4]
2026-03-25 06:02:10.558208 | orchestrator |
2026-03-25 06:02:10.558221 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-03-25 06:02:10.558234 | orchestrator | Wednesday 25 March 2026 06:01:32 +0000 (0:00:01.134) 0:53:49.322 *******
2026-03-25 06:02:10.558246 | orchestrator | ok: [testbed-node-4]
2026-03-25 06:02:10.558259 | orchestrator |
2026-03-25 06:02:10.558271 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-03-25 06:02:10.558283 | orchestrator | Wednesday 25 March 2026 06:01:34 +0000 (0:00:02.537) 0:53:51.860 *******
2026-03-25 06:02:10.558295 | orchestrator | ok: [testbed-node-4]
2026-03-25 06:02:10.558307 | orchestrator |
2026-03-25 06:02:10.558324 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-03-25 06:02:10.558346 | orchestrator | Wednesday 25 March 2026 06:01:36 +0000 (0:00:01.157) 0:53:53.018 *******
2026-03-25 06:02:10.558365 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-4
2026-03-25 06:02:10.558385 | orchestrator |
2026-03-25 06:02:10.558405 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-03-25 06:02:10.558424 | orchestrator | Wednesday 25 March 2026 06:01:37 +0000 (0:00:01.109) 0:53:54.128 *******
2026-03-25 06:02:10.558443 | orchestrator | skipping: [testbed-node-4]
2026-03-25 06:02:10.558462 | orchestrator |
2026-03-25 06:02:10.558483 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-03-25 06:02:10.558502 | orchestrator | Wednesday 25 March 2026 06:01:38 +0000 (0:00:01.136) 0:53:55.264 *******
2026-03-25 06:02:10.558521 | orchestrator | skipping: [testbed-node-4]
2026-03-25 06:02:10.558542 | orchestrator |
2026-03-25 06:02:10.558562 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-03-25 06:02:10.558582 | orchestrator | Wednesday 25 March 2026 06:01:39 +0000 (0:00:01.180) 0:53:56.445 *******
2026-03-25 06:02:10.558602 | orchestrator | skipping: [testbed-node-4]
2026-03-25 06:02:10.558622 | orchestrator |
2026-03-25 06:02:10.558643 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-03-25 06:02:10.558664 | orchestrator | Wednesday 25 March 2026 06:01:40 +0000 (0:00:01.213) 0:53:57.659 *******
2026-03-25 06:02:10.558683 | orchestrator | skipping: [testbed-node-4]
2026-03-25 06:02:10.558699 | orchestrator |
2026-03-25 06:02:10.558711 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-03-25 06:02:10.558721 | orchestrator | Wednesday 25 March 2026 06:01:41 +0000 (0:00:01.132) 0:53:58.792 *******
2026-03-25 06:02:10.558732 | orchestrator | skipping: [testbed-node-4]
2026-03-25 06:02:10.558743 | orchestrator |
2026-03-25 06:02:10.558753 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-03-25 06:02:10.558764 | orchestrator | Wednesday 25 March 2026 06:01:42 +0000 (0:00:01.174) 0:53:59.967 *******
2026-03-25 06:02:10.558775 | orchestrator | skipping: [testbed-node-4]
2026-03-25 06:02:10.558786 | orchestrator |
2026-03-25 06:02:10.558796 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-03-25 06:02:10.558868 | orchestrator | Wednesday 25 March 2026 06:01:44 +0000 (0:00:01.175) 0:54:01.142 *******
2026-03-25 06:02:10.558882 | orchestrator | skipping: [testbed-node-4]
2026-03-25 06:02:10.558893 | orchestrator |
2026-03-25 06:02:10.558904 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-03-25 06:02:10.558915 | orchestrator | Wednesday 25 March 2026 06:01:45 +0000 (0:00:01.179) 0:54:02.322 *******
2026-03-25 06:02:10.558926 | orchestrator | skipping: [testbed-node-4]
2026-03-25 06:02:10.558937 | orchestrator |
2026-03-25 06:02:10.558948 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-03-25 06:02:10.558959 | orchestrator | Wednesday 25 March 2026 06:01:46 +0000 (0:00:01.154) 0:54:03.477 *******
2026-03-25 06:02:10.558970 | orchestrator | ok: [testbed-node-4]
2026-03-25 06:02:10.558980 | orchestrator |
2026-03-25 06:02:10.558991 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-03-25 06:02:10.559025 | orchestrator | Wednesday 25 March 2026 06:01:47 +0000 (0:00:01.185) 0:54:04.662 *******
2026-03-25 06:02:10.559037 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-4
2026-03-25 06:02:10.559049 | orchestrator |
2026-03-25 06:02:10.559060 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-03-25 06:02:10.559071 | orchestrator | Wednesday 25 March 2026 06:01:48 +0000 (0:00:01.127) 0:54:05.790 *******
2026-03-25 06:02:10.559082 | orchestrator | ok: [testbed-node-4] => (item=/etc/ceph)
2026-03-25 06:02:10.559092 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/)
2026-03-25 06:02:10.559103 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/mon)
2026-03-25 06:02:10.559114 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd)
2026-03-25 06:02:10.559125 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/mds)
2026-03-25 06:02:10.559135 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/tmp)
2026-03-25 06:02:10.559146 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/crash)
2026-03-25 06:02:10.559157 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/radosgw)
2026-03-25 06:02:10.559167 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-25 06:02:10.559178 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-25 06:02:10.559189 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-25 06:02:10.559200 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-25 06:02:10.559211 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-25 06:02:10.559221 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-25 06:02:10.559232 | orchestrator | ok: [testbed-node-4] => (item=/var/run/ceph)
2026-03-25 06:02:10.559243 | orchestrator | ok: [testbed-node-4] => (item=/var/log/ceph)
2026-03-25 06:02:10.559254 | orchestrator |
2026-03-25 06:02:10.559265 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-03-25 06:02:10.559275 | orchestrator | Wednesday 25 March 2026 06:01:55 +0000 (0:00:06.548) 0:54:12.338 *******
2026-03-25 06:02:10.559286 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-4
2026-03-25 06:02:10.559297 | orchestrator |
2026-03-25 06:02:10.559308 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2026-03-25 06:02:10.559319 | orchestrator | Wednesday 25 March 2026 06:01:56 +0000 (0:00:01.156) 0:54:13.495 *******
2026-03-25 06:02:10.559330 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-03-25 06:02:10.559342 | orchestrator |
2026-03-25 06:02:10.559353 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2026-03-25 06:02:10.559364 | orchestrator | Wednesday 25 March 2026 06:01:58 +0000 (0:00:01.540) 0:54:15.036 *******
2026-03-25 06:02:10.559375 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-03-25 06:02:10.559393 | orchestrator |
2026-03-25 06:02:10.559404 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-03-25 06:02:10.559415 | orchestrator | Wednesday 25 March 2026 06:02:00 +0000 (0:00:02.063) 0:54:17.099 *******
2026-03-25 06:02:10.559426 | orchestrator | skipping: [testbed-node-4]
2026-03-25 06:02:10.559437 | orchestrator |
2026-03-25 06:02:10.559447 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-03-25 06:02:10.559458 | orchestrator | Wednesday 25 March 2026 06:02:01 +0000 (0:00:01.144) 0:54:18.244 *******
2026-03-25 06:02:10.559469 | orchestrator | skipping: [testbed-node-4]
2026-03-25 06:02:10.559480 | orchestrator |
2026-03-25 06:02:10.559490 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-03-25 06:02:10.559501 | orchestrator | Wednesday 25 March 2026 06:02:02 +0000 (0:00:01.134) 0:54:19.379 *******
2026-03-25 06:02:10.559512 | orchestrator | skipping: [testbed-node-4]
2026-03-25 06:02:10.559523 | orchestrator |
2026-03-25 06:02:10.559534 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-03-25 06:02:10.559544 | orchestrator | Wednesday 25 March 2026 06:02:03 +0000 (0:00:01.137) 0:54:20.517 *******
2026-03-25 06:02:10.559555 | orchestrator | skipping: [testbed-node-4]
2026-03-25 06:02:10.559566 | orchestrator |
2026-03-25 06:02:10.559577 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-03-25 06:02:10.559588 | orchestrator | Wednesday 25 March 2026 06:02:04 +0000 (0:00:01.159) 0:54:21.676 *******
2026-03-25 06:02:10.559598 | orchestrator | skipping: [testbed-node-4]
2026-03-25 06:02:10.559609 | orchestrator |
2026-03-25 06:02:10.559620 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-03-25 06:02:10.559630 | orchestrator | Wednesday 25 March 2026 06:02:05 +0000 (0:00:01.151) 0:54:22.828 *******
2026-03-25 06:02:10.559641 | orchestrator | skipping: [testbed-node-4]
2026-03-25 06:02:10.559652 | orchestrator |
2026-03-25 06:02:10.559668 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-03-25 06:02:10.559679 | orchestrator | Wednesday 25 March 2026 06:02:06 +0000 (0:00:01.133) 0:54:23.961 *******
2026-03-25 06:02:10.559690 | orchestrator | skipping: [testbed-node-4]
2026-03-25 06:02:10.559701 | orchestrator |
2026-03-25 06:02:10.559712 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-03-25 06:02:10.559723 | orchestrator | Wednesday 25 March 2026 06:02:08 +0000 (0:00:01.150) 0:54:25.112 *******
2026-03-25 06:02:10.559734 | orchestrator | skipping: [testbed-node-4]
2026-03-25 06:02:10.559745 | orchestrator |
2026-03-25 06:02:10.559756 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-03-25 06:02:10.559767 | orchestrator | Wednesday 25 March 2026 06:02:09 +0000 (0:00:01.253) 0:54:26.365 *******
2026-03-25 06:02:10.559778 | orchestrator | skipping: [testbed-node-4]
2026-03-25 06:02:10.559789 | orchestrator |
2026-03-25 06:02:10.559806 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-03-25 06:03:07.679336 | orchestrator | Wednesday 25 March 2026 06:02:10 +0000 (0:00:01.195) 0:54:27.561 *******
2026-03-25 06:03:07.679453 | orchestrator | skipping: [testbed-node-4]
2026-03-25 06:03:07.679468 | orchestrator |
2026-03-25 06:03:07.679482 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-03-25 06:03:07.679493 | orchestrator | Wednesday 25 March 2026 06:02:11 +0000 (0:00:01.133) 0:54:28.694 *******
2026-03-25 06:03:07.679504 | orchestrator | skipping: [testbed-node-4]
2026-03-25 06:03:07.679516 | orchestrator |
2026-03-25 06:03:07.679527 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-03-25 06:03:07.679538 | orchestrator | Wednesday 25 March 2026 06:02:12 +0000 (0:00:01.233) 0:54:29.928 *******
2026-03-25 06:03:07.679549 | orchestrator | changed: [testbed-node-4 -> testbed-node-2(192.168.16.12)]
2026-03-25 06:03:07.679560 | orchestrator |
2026-03-25 06:03:07.679570 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-03-25 06:03:07.679605 | orchestrator | Wednesday 25 March 2026 06:02:17 +0000 (0:00:04.814) 0:54:34.742 *******
2026-03-25 06:03:07.679617 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-03-25 06:03:07.679630 | orchestrator |
2026-03-25 06:03:07.679641 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-03-25 06:03:07.679652 | orchestrator | Wednesday 25 March 2026 06:02:18 +0000 (0:00:01.249) 0:54:35.991 *******
2026-03-25 06:03:07.679665 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])
2026-03-25 06:03:07.679679 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])
2026-03-25 06:03:07.679692 | orchestrator |
2026-03-25 06:03:07.679703 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-03-25 06:03:07.679714 | orchestrator | Wednesday 25 March 2026 06:02:23 +0000 (0:00:04.935) 0:54:40.927 *******
2026-03-25 06:03:07.679725 | orchestrator | skipping: [testbed-node-4]
2026-03-25 06:03:07.679736 | orchestrator |
2026-03-25 06:03:07.679746 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-03-25 06:03:07.679757 | orchestrator | Wednesday 25 March 2026 06:02:25 +0000 (0:00:01.131) 0:54:42.058 *******
2026-03-25 06:03:07.679768 | orchestrator | skipping: [testbed-node-4]
2026-03-25 06:03:07.679779 | orchestrator |
2026-03-25 06:03:07.679790 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-03-25 06:03:07.679801 | orchestrator | Wednesday 25 March 2026 06:02:26 +0000 (0:00:01.268) 0:54:43.327 *******
2026-03-25 06:03:07.679811 | orchestrator | skipping: [testbed-node-4]
2026-03-25 06:03:07.679822 | orchestrator |
2026-03-25 06:03:07.679866 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-03-25 06:03:07.679882 | orchestrator | Wednesday 25 March 2026 06:02:27 +0000 (0:00:01.223) 0:54:44.551 *******
2026-03-25 06:03:07.679896 | orchestrator | skipping: [testbed-node-4]
2026-03-25 06:03:07.679909 | orchestrator |
2026-03-25 06:03:07.679923 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-03-25 06:03:07.679936 | orchestrator | Wednesday 25 March 2026 06:02:28 +0000 (0:00:01.151) 0:54:45.702 *******
2026-03-25 06:03:07.679948 | orchestrator | skipping: [testbed-node-4]
2026-03-25 06:03:07.679961 | orchestrator |
2026-03-25 06:03:07.679974 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-03-25 06:03:07.679987 | orchestrator | Wednesday 25 March 2026 06:02:29 +0000 (0:00:01.172) 0:54:46.875 *******
2026-03-25 06:03:07.679999 | orchestrator | ok: [testbed-node-4]
2026-03-25 06:03:07.680013 | orchestrator |
2026-03-25 06:03:07.680028 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-03-25 06:03:07.680042 | orchestrator | Wednesday 25 March 2026 06:02:31 +0000 (0:00:01.323) 0:54:48.198 *******
2026-03-25 06:03:07.680054 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-03-25 06:03:07.680068 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-03-25 06:03:07.680080 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-03-25 06:03:07.680093 | orchestrator | skipping: [testbed-node-4]
2026-03-25 06:03:07.680105 | orchestrator |
2026-03-25 06:03:07.680134 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-03-25 06:03:07.680147 | orchestrator | Wednesday 25 March 2026 06:02:32 +0000 (0:00:01.385) 0:54:49.583 *******
2026-03-25 06:03:07.680168 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-03-25 06:03:07.680180 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-03-25 06:03:07.680194 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-03-25 06:03:07.680207 | orchestrator | skipping: [testbed-node-4]
2026-03-25 06:03:07.680219 | orchestrator |
2026-03-25 06:03:07.680231 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-03-25 06:03:07.680242 | orchestrator | Wednesday 25 March 2026 06:02:33 +0000 (0:00:01.398) 0:54:50.982 *******
2026-03-25 06:03:07.680252 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-03-25 06:03:07.680263 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-03-25 06:03:07.680274 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-03-25 06:03:07.680300 | orchestrator | skipping: [testbed-node-4]
2026-03-25 06:03:07.680312 | orchestrator |
2026-03-25 06:03:07.680323 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-03-25 06:03:07.680334 | orchestrator | Wednesday 25 March 2026 06:02:35 +0000 (0:00:01.888) 0:54:52.870 *******
2026-03-25 06:03:07.680345 | orchestrator | ok: [testbed-node-4]
2026-03-25 06:03:07.680356 | orchestrator |
2026-03-25 06:03:07.680367 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-03-25 06:03:07.680378 | orchestrator | Wednesday 25 March 2026 06:02:37 +0000 (0:00:01.179) 0:54:54.050 *******
2026-03-25 06:03:07.680388 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-03-25 06:03:07.680399 | orchestrator |
2026-03-25 06:03:07.680410 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-03-25 06:03:07.680421 | orchestrator | Wednesday 25 March 2026 06:02:38 +0000 (0:00:01.952) 0:54:56.003 *******
2026-03-25 06:03:07.680432 | orchestrator | ok: [testbed-node-4]
2026-03-25 06:03:07.680444 | orchestrator |
2026-03-25 06:03:07.680455 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] ***************************
2026-03-25 06:03:07.680465 | orchestrator | Wednesday 25 March 2026 06:02:40 +0000 (0:00:01.808) 0:54:57.812 *******
2026-03-25 06:03:07.680476 | orchestrator | skipping: [testbed-node-4]
2026-03-25 06:03:07.680487 | orchestrator |
2026-03-25 06:03:07.680498 | orchestrator | TASK [ceph-mds : Include common.yml] *******************************************
2026-03-25 06:03:07.680509 | orchestrator | Wednesday 25 March 2026 06:02:41 +0000 (0:00:01.131) 0:54:58.943 *******
2026-03-25 06:03:07.680520 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-4
2026-03-25 06:03:07.680530 | orchestrator |
2026-03-25 06:03:07.680541 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] *********************
2026-03-25 06:03:07.680552 | orchestrator | Wednesday 25 March 2026 06:02:43 +0000 (0:00:01.478) 0:55:00.421 *******
2026-03-25 06:03:07.680563 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/)
2026-03-25 06:03:07.680574 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4)
2026-03-25 06:03:07.680585 | orchestrator |
2026-03-25 06:03:07.680596 | orchestrator | TASK [ceph-mds : Get keys from monitors] ***************************************
2026-03-25 06:03:07.680692 | orchestrator | Wednesday 25 March 2026 06:02:45 +0000 (0:00:01.883) 0:55:02.305 *******
2026-03-25 06:03:07.680704 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-25 06:03:07.680715 | orchestrator | skipping: [testbed-node-4] => (item=None)
2026-03-25 06:03:07.680726 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}]
2026-03-25 06:03:07.680737 | orchestrator |
2026-03-25 06:03:07.680748 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] ***********************************
2026-03-25 06:03:07.680759 | orchestrator | Wednesday 25 March 2026 06:02:48 +0000 (0:00:03.228) 0:55:05.533 *******
2026-03-25 06:03:07.680770 | orchestrator | ok: [testbed-node-4] => (item=None)
2026-03-25 06:03:07.680781 | orchestrator | skipping: [testbed-node-4] => (item=None)
2026-03-25 06:03:07.680792 | orchestrator | ok: [testbed-node-4]
2026-03-25 06:03:07.680802 | orchestrator |
2026-03-25 06:03:07.680813 | orchestrator | TASK [ceph-mds : Create mds keyring] *******************************************
2026-03-25 06:03:07.680863 | orchestrator | Wednesday 25 March 2026 06:02:50 +0000 (0:00:01.977) 0:55:07.511 *******
2026-03-25 06:03:07.680875 | orchestrator | ok: [testbed-node-4]
2026-03-25 06:03:07.680886 | orchestrator |
2026-03-25 06:03:07.680897 | orchestrator | TASK [ceph-mds : Non_containerized.yml] ****************************************
2026-03-25 06:03:07.680908 | orchestrator | Wednesday 25 March 2026 06:02:52 +0000 (0:00:01.533) 0:55:09.045 *******
2026-03-25 06:03:07.680919 | orchestrator | skipping: [testbed-node-4]
2026-03-25 06:03:07.680930 | orchestrator |
2026-03-25 06:03:07.680941 | orchestrator | TASK [ceph-mds : Containerized.yml] ********************************************
2026-03-25 06:03:07.680951 | orchestrator | Wednesday 25 March 2026 06:02:53 +0000 (0:00:01.116) 0:55:10.161 *******
2026-03-25 06:03:07.680962 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-4
2026-03-25 06:03:07.680974 | orchestrator |
2026-03-25 06:03:07.680985 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************
2026-03-25 06:03:07.680996 | orchestrator | Wednesday 25 March 2026 06:02:54 +0000 (0:00:01.489) 0:55:11.651 *******
2026-03-25 06:03:07.681007 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-4
2026-03-25 06:03:07.681018 | orchestrator |
2026-03-25 06:03:07.681028 | orchestrator | TASK [ceph-mds : Generate systemd unit file] ***********************************
2026-03-25 06:03:07.681039 | orchestrator | Wednesday 25 March 2026 06:02:56 +0000 (0:00:01.718) 0:55:13.369 *******
2026-03-25 06:03:07.681050 | orchestrator | ok: [testbed-node-4]
2026-03-25 06:03:07.681061 | orchestrator |
2026-03-25 06:03:07.681072 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************
2026-03-25 06:03:07.681082 | orchestrator | Wednesday 25 March 2026 06:02:58 +0000 (0:00:02.090) 0:55:15.460 *******
2026-03-25 06:03:07.681093 | orchestrator | ok: [testbed-node-4]
2026-03-25 06:03:07.681104 | orchestrator |
2026-03-25 06:03:07.681115 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] ***************************************
2026-03-25 06:03:07.681132 | orchestrator | Wednesday 25 March 2026 06:03:00 +0000 (0:00:02.189) 0:55:17.425 *******
2026-03-25 06:03:07.681143 | orchestrator | ok: [testbed-node-4]
2026-03-25 06:03:07.681154 | orchestrator |
2026-03-25 06:03:07.681165 | orchestrator | TASK [ceph-mds : Systemd start mds container] **********************************
2026-03-25 06:03:07.681176 | orchestrator | Wednesday 25 March 2026 06:03:02 +0000 (0:00:02.282) 0:55:19.614 *******
2026-03-25 06:03:07.681187 | orchestrator | ok: [testbed-node-4]
2026-03-25 06:03:07.681197 | orchestrator |
2026-03-25 06:03:07.681208 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] *********************************
2026-03-25 06:03:07.681219 | orchestrator | Wednesday 25 March 2026 06:03:04 +0000 (0:00:02.282) 0:55:21.897 *******
2026-03-25 06:03:07.681230 | orchestrator | ok: [testbed-node-4]
2026-03-25 06:03:07.681240 | orchestrator |
2026-03-25 06:03:07.681251 | orchestrator | TASK [Restart ceph mds] ********************************************************
2026-03-25 06:03:07.681262 | orchestrator | Wednesday 25 March 2026 06:03:06 +0000 (0:00:01.643) 0:55:23.540 *******
2026-03-25 06:03:07.681282 | orchestrator | skipping: [testbed-node-4]
2026-03-25 06:03:38.796120 | orchestrator |
2026-03-25 06:03:38.796262 | orchestrator | TASK [Restart active mds] ******************************************************
2026-03-25 06:03:38.796293 | orchestrator | Wednesday 25 March 2026 06:03:07 +0000 (0:00:01.145) 0:55:24.686 *******
2026-03-25 06:03:38.796314 | orchestrator | ok: [testbed-node-4]
2026-03-25 06:03:38.796334 | orchestrator |
2026-03-25 06:03:38.796354 | orchestrator | PLAY [Upgrade standbys ceph mdss cluster] **************************************
2026-03-25 06:03:38.796374 | orchestrator |
2026-03-25 06:03:38.796394 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-03-25 06:03:38.796416 | orchestrator | Wednesday 25 March 2026 06:03:13 +0000 (0:00:05.674) 0:55:30.361 *******
2026-03-25 06:03:38.796430 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-5
2026-03-25 06:03:38.796441 | orchestrator |
2026-03-25 06:03:38.796452 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-03-25 06:03:38.796487 | orchestrator | Wednesday 25 March 2026 06:03:14 +0000 (0:00:01.488) 0:55:31.849 *******
2026-03-25 06:03:38.796498 | orchestrator | ok: [testbed-node-3]
2026-03-25 06:03:38.796509 | orchestrator | ok: [testbed-node-5]
2026-03-25 06:03:38.796520 | orchestrator |
2026-03-25 06:03:38.796530 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-03-25 06:03:38.796541 | orchestrator | Wednesday 25 March 2026 06:03:16 +0000 (0:00:01.640) 0:55:33.490 *******
2026-03-25 06:03:38.796552 | orchestrator | ok: [testbed-node-3]
2026-03-25 06:03:38.796563 | orchestrator | ok: [testbed-node-5]
2026-03-25 06:03:38.796574 | orchestrator |
2026-03-25 06:03:38.796585 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-03-25 06:03:38.796595 | orchestrator | Wednesday 25 March 2026 06:03:17 +0000 (0:00:01.291) 0:55:34.781 *******
2026-03-25 06:03:38.796606 | orchestrator | ok: [testbed-node-3]
2026-03-25 06:03:38.796616 | orchestrator | ok: [testbed-node-5]
2026-03-25 06:03:38.796627 | orchestrator |
2026-03-25 06:03:38.796637 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-03-25 06:03:38.796648 | orchestrator | Wednesday 25 March 2026 06:03:19 +0000 (0:00:01.624) 0:55:36.405 *******
2026-03-25 06:03:38.796661 | orchestrator | ok: [testbed-node-3]
2026-03-25 06:03:38.796673 | orchestrator | ok: [testbed-node-5]
2026-03-25 06:03:38.796685 | orchestrator |
2026-03-25 06:03:38.796698 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-03-25 06:03:38.796711 | orchestrator | Wednesday 25 March 2026 06:03:20 +0000 (0:00:01.254) 0:55:37.660 *******
2026-03-25 06:03:38.796723 | orchestrator | ok: [testbed-node-3]
2026-03-25 06:03:38.796735 | orchestrator | ok: [testbed-node-5]
2026-03-25 06:03:38.796747 | orchestrator |
2026-03-25 06:03:38.796759 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-03-25 06:03:38.796772 | orchestrator | Wednesday 25 March 2026 06:03:21 +0000 (0:00:01.261) 0:55:38.921 *******
2026-03-25 06:03:38.796784 | orchestrator | ok: [testbed-node-3]
2026-03-25 06:03:38.796797 | orchestrator | ok: [testbed-node-5]
2026-03-25 06:03:38.796809 | orchestrator |
2026-03-25 06:03:38.796821 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-03-25 06:03:38.796867 | orchestrator | Wednesday 25 March 2026 06:03:23 +0000 (0:00:01.710) 0:55:40.632 *******
2026-03-25 06:03:38.796881 | orchestrator | skipping: [testbed-node-3]
2026-03-25 06:03:38.796895 | orchestrator | skipping: [testbed-node-5]
2026-03-25 06:03:38.796907 | orchestrator |
2026-03-25 06:03:38.796919 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-03-25 06:03:38.796931 | orchestrator | Wednesday 25 March 2026 06:03:25 +0000 (0:00:01.395) 0:55:42.027 *******
2026-03-25 06:03:38.796944 | orchestrator | ok: [testbed-node-3]
2026-03-25 06:03:38.796957 | orchestrator | ok: [testbed-node-5]
2026-03-25 06:03:38.796970 | orchestrator |
2026-03-25 06:03:38.796981 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-03-25 06:03:38.796993 | orchestrator | Wednesday 25 March 2026 06:03:26 +0000 (0:00:01.258) 0:55:43.285 *******
2026-03-25 06:03:38.797006 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-25 06:03:38.797019 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-25 06:03:38.797030 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-25 06:03:38.797041 | orchestrator |
2026-03-25 06:03:38.797051 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-03-25 06:03:38.797062 | orchestrator | Wednesday 25 March 2026 06:03:28 +0000 (0:00:01.795) 0:55:45.080 *******
2026-03-25 06:03:38.797072 | orchestrator | ok: [testbed-node-3]
2026-03-25 06:03:38.797083 | orchestrator | ok: [testbed-node-5]
2026-03-25 06:03:38.797093 | orchestrator |
2026-03-25 06:03:38.797104 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-03-25 06:03:38.797115 | orchestrator | Wednesday 25 March 2026 06:03:29 +0000 (0:00:01.444) 0:55:46.524 *******
2026-03-25 06:03:38.797134 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-25 06:03:38.797145 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-25 06:03:38.797172 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-25 06:03:38.797183 | orchestrator |
2026-03-25 06:03:38.797194 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-03-25 06:03:38.797204 | orchestrator | Wednesday 25 March 2026 06:03:32 +0000 (0:00:03.281) 0:55:49.806 *******
2026-03-25 06:03:38.797215 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-03-25 06:03:38.797227 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-03-25 06:03:38.797237 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-03-25 06:03:38.797248 | orchestrator | skipping: [testbed-node-3]
2026-03-25 06:03:38.797259 | orchestrator |
2026-03-25 06:03:38.797304 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-03-25 06:03:38.797316 | orchestrator | Wednesday 25 March 2026 06:03:34 +0000 (0:00:01.489) 0:55:51.296 *******
2026-03-25 06:03:38.797371 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-03-25 06:03:38.797387 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-03-25 06:03:38.797398 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-03-25 06:03:38.797409 | orchestrator | skipping: [testbed-node-3]
2026-03-25 06:03:38.797420 | orchestrator |
2026-03-25 06:03:38.797431 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-03-25 06:03:38.797442 | orchestrator | Wednesday 25 March 2026 06:03:36 +0000 (0:00:02.029) 0:55:53.325 *******
2026-03-25 06:03:38.797455 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-03-25 06:03:38.797469 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-03-25 06:03:38.797480 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not
containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-25 06:03:38.797492 | orchestrator | skipping: [testbed-node-3] 2026-03-25 06:03:38.797502 | orchestrator | 2026-03-25 06:03:38.797514 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-03-25 06:03:38.797525 | orchestrator | Wednesday 25 March 2026 06:03:37 +0000 (0:00:01.176) 0:55:54.502 ******* 2026-03-25 06:03:38.797538 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'f2f4f0f2e000', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-25 06:03:29.986073', 'end': '2026-03-25 06:03:30.032325', 'delta': '0:00:00.046252', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['f2f4f0f2e000'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-03-25 06:03:38.797566 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '04618a84c691', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-25 06:03:30.567890', 'end': '2026-03-25 06:03:30.618336', 'delta': '0:00:00.050446', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': 
None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['04618a84c691'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-03-25 06:03:38.797587 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'da72f46e99c2', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-25 06:03:31.489559', 'end': '2026-03-25 06:03:31.533223', 'delta': '0:00:00.043664', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['da72f46e99c2'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-03-25 06:03:58.494532 | orchestrator | 2026-03-25 06:03:58.494652 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-03-25 06:03:58.494699 | orchestrator | Wednesday 25 March 2026 06:03:38 +0000 (0:00:01.293) 0:55:55.796 ******* 2026-03-25 06:03:58.494711 | orchestrator | ok: [testbed-node-3] 2026-03-25 06:03:58.494724 | orchestrator | ok: [testbed-node-5] 2026-03-25 06:03:58.494735 | orchestrator | 2026-03-25 06:03:58.494746 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-03-25 06:03:58.494757 | orchestrator | Wednesday 25 March 2026 06:03:40 +0000 (0:00:01.514) 0:55:57.310 ******* 2026-03-25 06:03:58.494768 | orchestrator | skipping: [testbed-node-3] 2026-03-25 06:03:58.494781 | orchestrator | 2026-03-25 06:03:58.494792 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-03-25 06:03:58.494803 | orchestrator | Wednesday 
25 March 2026 06:03:41 +0000 (0:00:01.302) 0:55:58.613 ******* 2026-03-25 06:03:58.494814 | orchestrator | ok: [testbed-node-3] 2026-03-25 06:03:58.494825 | orchestrator | ok: [testbed-node-5] 2026-03-25 06:03:58.494885 | orchestrator | 2026-03-25 06:03:58.494896 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-03-25 06:03:58.494907 | orchestrator | Wednesday 25 March 2026 06:03:42 +0000 (0:00:01.270) 0:55:59.884 ******* 2026-03-25 06:03:58.494918 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-25 06:03:58.494929 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-03-25 06:03:58.494940 | orchestrator | 2026-03-25 06:03:58.494951 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-25 06:03:58.494962 | orchestrator | Wednesday 25 March 2026 06:03:45 +0000 (0:00:02.387) 0:56:02.272 ******* 2026-03-25 06:03:58.494972 | orchestrator | ok: [testbed-node-3] 2026-03-25 06:03:58.495035 | orchestrator | ok: [testbed-node-5] 2026-03-25 06:03:58.495047 | orchestrator | 2026-03-25 06:03:58.495058 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-03-25 06:03:58.495071 | orchestrator | Wednesday 25 March 2026 06:03:46 +0000 (0:00:01.297) 0:56:03.569 ******* 2026-03-25 06:03:58.495083 | orchestrator | skipping: [testbed-node-3] 2026-03-25 06:03:58.495096 | orchestrator | 2026-03-25 06:03:58.495109 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-03-25 06:03:58.495121 | orchestrator | Wednesday 25 March 2026 06:03:47 +0000 (0:00:01.135) 0:56:04.705 ******* 2026-03-25 06:03:58.495134 | orchestrator | skipping: [testbed-node-3] 2026-03-25 06:03:58.495147 | orchestrator | 2026-03-25 06:03:58.495160 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-25 
06:03:58.495172 | orchestrator | Wednesday 25 March 2026 06:03:48 +0000 (0:00:01.267) 0:56:05.973 ******* 2026-03-25 06:03:58.495184 | orchestrator | skipping: [testbed-node-3] 2026-03-25 06:03:58.495211 | orchestrator | skipping: [testbed-node-5] 2026-03-25 06:03:58.495224 | orchestrator | 2026-03-25 06:03:58.495236 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-03-25 06:03:58.495249 | orchestrator | Wednesday 25 March 2026 06:03:50 +0000 (0:00:01.385) 0:56:07.359 ******* 2026-03-25 06:03:58.495261 | orchestrator | skipping: [testbed-node-3] 2026-03-25 06:03:58.495273 | orchestrator | skipping: [testbed-node-5] 2026-03-25 06:03:58.495286 | orchestrator | 2026-03-25 06:03:58.495298 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-03-25 06:03:58.495310 | orchestrator | Wednesday 25 March 2026 06:03:51 +0000 (0:00:01.240) 0:56:08.600 ******* 2026-03-25 06:03:58.495323 | orchestrator | ok: [testbed-node-3] 2026-03-25 06:03:58.495335 | orchestrator | ok: [testbed-node-5] 2026-03-25 06:03:58.495347 | orchestrator | 2026-03-25 06:03:58.495360 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-03-25 06:03:58.495383 | orchestrator | Wednesday 25 March 2026 06:03:52 +0000 (0:00:01.264) 0:56:09.864 ******* 2026-03-25 06:03:58.495396 | orchestrator | skipping: [testbed-node-3] 2026-03-25 06:03:58.495409 | orchestrator | skipping: [testbed-node-5] 2026-03-25 06:03:58.495422 | orchestrator | 2026-03-25 06:03:58.495433 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-03-25 06:03:58.495444 | orchestrator | Wednesday 25 March 2026 06:03:54 +0000 (0:00:01.286) 0:56:11.151 ******* 2026-03-25 06:03:58.495455 | orchestrator | ok: [testbed-node-3] 2026-03-25 06:03:58.495465 | orchestrator | ok: [testbed-node-5] 2026-03-25 06:03:58.495476 | orchestrator | 2026-03-25 
06:03:58.495498 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-03-25 06:03:58.495524 | orchestrator | Wednesday 25 March 2026 06:03:55 +0000 (0:00:01.294) 0:56:12.445 ******* 2026-03-25 06:03:58.495535 | orchestrator | skipping: [testbed-node-3] 2026-03-25 06:03:58.495546 | orchestrator | skipping: [testbed-node-5] 2026-03-25 06:03:58.495557 | orchestrator | 2026-03-25 06:03:58.495568 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-03-25 06:03:58.495579 | orchestrator | Wednesday 25 March 2026 06:03:56 +0000 (0:00:01.254) 0:56:13.699 ******* 2026-03-25 06:03:58.495590 | orchestrator | ok: [testbed-node-3] 2026-03-25 06:03:58.495601 | orchestrator | ok: [testbed-node-5] 2026-03-25 06:03:58.495611 | orchestrator | 2026-03-25 06:03:58.495622 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-03-25 06:03:58.495633 | orchestrator | Wednesday 25 March 2026 06:03:57 +0000 (0:00:01.274) 0:56:14.974 ******* 2026-03-25 06:03:58.495646 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-25 06:03:58.495680 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--2eb637af--fcba--56ed--b416--856a8f376a6e-osd--block--2eb637af--fcba--56ed--b416--856a8f376a6e', 'dm-uuid-LVM-I4brnFGe2wqMxfNLTgnFWAlpGdDDIQ6ufudluz5gbOp2W0Ru1BAN3Lof8sluy2g8'], 'uuids': ['a582f89c-a8ac-4a87-8a0b-f7c0ca2abef4'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': 
None, 'sas_device_handle': None, 'serial': 'eaa5e6a9', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['fudluz-5gbO-p2W0-Ru1B-AN3L-of8s-luy2g8']}})  2026-03-25 06:03:58.495704 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_99e65ea9-8a8c-4114-a95e-6d6b779e8981', 'scsi-SQEMU_QEMU_HARDDISK_99e65ea9-8a8c-4114-a95e-6d6b779e8981'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '99e65ea9', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-03-25 06:03:58.495718 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-I510NI-gVOy-fVrn-Rpok-wKnF-L9wv-pxblpK', 'scsi-0QEMU_QEMU_HARDDISK_e0cf0e31-edea-4833-ac86-8b3021cd24a1', 'scsi-SQEMU_QEMU_HARDDISK_e0cf0e31-edea-4833-ac86-8b3021cd24a1'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'e0cf0e31', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--a7f517e2--016b--5c10--ac21--20c48339115f-osd--block--a7f517e2--016b--5c10--ac21--20c48339115f']}})  2026-03-25 06:03:58.495730 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-25 06:03:58.495742 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-25 06:03:58.495759 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-25-01-42-59-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-03-25 06:03:58.495771 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 
'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-25 06:03:58.495797 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-63eFyg-4nkE-r3IX-y7pO-0UwA-AWeQ-8GeZyo', 'dm-uuid-CRYPT-LUKS2-10d41a0c964d43008e142cbf5f4d58c4-63eFyg-4nkE-r3IX-y7pO-0UwA-AWeQ-8GeZyo'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-03-25 06:03:58.640160 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-25 06:03:58.640265 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--a7f517e2--016b--5c10--ac21--20c48339115f-osd--block--a7f517e2--016b--5c10--ac21--20c48339115f', 'dm-uuid-LVM-ppL9nqq4Eft0DXjzsCdcW3axPqGhidIo63eFyg4nkEr3IXy7pO0UwAAWeQ8GeZyo'], 'uuids': ['10d41a0c-964d-4300-8e14-2cbf5f4d58c4'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'e0cf0e31', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['63eFyg-4nkE-r3IX-y7pO-0UwA-AWeQ-8GeZyo']}})  2026-03-25 06:03:58.640283 | orchestrator | skipping: 
[testbed-node-3] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-ot6f5w-cwBB-rMe8-ml4g-P1Wb-D3d5-I1RZ9d', 'scsi-0QEMU_QEMU_HARDDISK_eaa5e6a9-2c24-4b33-854e-103871b2e9c6', 'scsi-SQEMU_QEMU_HARDDISK_eaa5e6a9-2c24-4b33-854e-103871b2e9c6'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'eaa5e6a9', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--2eb637af--fcba--56ed--b416--856a8f376a6e-osd--block--2eb637af--fcba--56ed--b416--856a8f376a6e']}})  2026-03-25 06:03:58.640296 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-25 06:03:58.640349 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5418d243-c22a-425d-8a7d-7c43bd549130', 'scsi-SQEMU_QEMU_HARDDISK_5418d243-c22a-425d-8a7d-7c43bd549130'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '5418d243', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5418d243-c22a-425d-8a7d-7c43bd549130-part16', 'scsi-SQEMU_QEMU_HARDDISK_5418d243-c22a-425d-8a7d-7c43bd549130-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': 
'227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5418d243-c22a-425d-8a7d-7c43bd549130-part14', 'scsi-SQEMU_QEMU_HARDDISK_5418d243-c22a-425d-8a7d-7c43bd549130-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5418d243-c22a-425d-8a7d-7c43bd549130-part15', 'scsi-SQEMU_QEMU_HARDDISK_5418d243-c22a-425d-8a7d-7c43bd549130-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5418d243-c22a-425d-8a7d-7c43bd549130-part1', 'scsi-SQEMU_QEMU_HARDDISK_5418d243-c22a-425d-8a7d-7c43bd549130-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-03-25 06:03:58.640387 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-25 06:03:58.640400 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-25 06:03:58.640412 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-fudluz-5gbO-p2W0-Ru1B-AN3L-of8s-luy2g8', 'dm-uuid-CRYPT-LUKS2-a582f89ca8ac4a878a0bf7c0ca2abef4-fudluz-5gbO-p2W0-Ru1B-AN3L-of8s-luy2g8'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-03-25 06:03:58.640425 | orchestrator | skipping: [testbed-node-3] 2026-03-25 06:03:58.640438 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 
'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-25 06:03:58.640455 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--8ec576d5--4336--523a--896e--5358117b2269-osd--block--8ec576d5--4336--523a--896e--5358117b2269', 'dm-uuid-LVM-AjTepPC9YBwKeu38Jf1R7NGMBGxHD64b1bYlOV1jbrUHbIYS3hAMWkKb5QrnOpnI'], 'uuids': ['e67f6cc7-d6f8-4138-9e65-f811c858cad0'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'fd5367dc', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['1bYlOV-1jbr-UHbI-YS3h-AMWk-Kb5Q-rnOpnI']}})  2026-03-25 06:03:58.640467 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_82545a3e-e213-461e-98f1-90cf18f03519', 'scsi-SQEMU_QEMU_HARDDISK_82545a3e-e213-461e-98f1-90cf18f03519'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '82545a3e', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-03-25 06:03:58.640495 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-to62r3-CyRH-TR4y-N8rR-DKBC-8SUV-NrvEkE', 'scsi-0QEMU_QEMU_HARDDISK_04cbe055-706b-4644-9107-d77d79be5a29', 'scsi-SQEMU_QEMU_HARDDISK_04cbe055-706b-4644-9107-d77d79be5a29'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '04cbe055', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--f303e98e--56ea--50bc--9e1c--3ccda4672060-osd--block--f303e98e--56ea--50bc--9e1c--3ccda4672060']}})  2026-03-25 06:03:58.766419 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-25 06:03:58.766505 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-25 06:03:58.766514 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-25-01-43-03-00'], 'labels': 
['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-03-25 06:03:58.766522 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-25 06:03:58.766540 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-UiFeyH-JNag-Huqx-rmYC-APg3-v2oc-gFP63X', 'dm-uuid-CRYPT-LUKS2-306c9f3fcb174ac6ad8e271da2bf30e2-UiFeyH-JNag-Huqx-rmYC-APg3-v2oc-gFP63X'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-03-25 06:03:58.766546 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-25 06:03:58.766568 | orchestrator | skipping: [testbed-node-5] => 
(item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--f303e98e--56ea--50bc--9e1c--3ccda4672060-osd--block--f303e98e--56ea--50bc--9e1c--3ccda4672060', 'dm-uuid-LVM-UU9fet4LjPs1QLROYR3DS61lWfbcudTJUiFeyHJNagHuqxrmYCAPg3v2ocgFP63X'], 'uuids': ['306c9f3f-cb17-4ac6-ad8e-271da2bf30e2'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '04cbe055', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['UiFeyH-JNag-Huqx-rmYC-APg3-v2oc-gFP63X']}})  2026-03-25 06:03:58.766589 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-FUT1Bq-riIG-e3wV-m2Zc-DHH8-HB53-ximoP3', 'scsi-0QEMU_QEMU_HARDDISK_fd5367dc-993e-4d7d-b2a6-757e2a17e9b7', 'scsi-SQEMU_QEMU_HARDDISK_fd5367dc-993e-4d7d-b2a6-757e2a17e9b7'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'fd5367dc', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--8ec576d5--4336--523a--896e--5358117b2269-osd--block--8ec576d5--4336--523a--896e--5358117b2269']}})  2026-03-25 06:03:58.766596 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-25 06:03:58.766609 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0ceb4511-88da-4cb0-8dd1-61d4a7cc2ad2', 'scsi-SQEMU_QEMU_HARDDISK_0ceb4511-88da-4cb0-8dd1-61d4a7cc2ad2'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '0ceb4511', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0ceb4511-88da-4cb0-8dd1-61d4a7cc2ad2-part16', 'scsi-SQEMU_QEMU_HARDDISK_0ceb4511-88da-4cb0-8dd1-61d4a7cc2ad2-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0ceb4511-88da-4cb0-8dd1-61d4a7cc2ad2-part14', 'scsi-SQEMU_QEMU_HARDDISK_0ceb4511-88da-4cb0-8dd1-61d4a7cc2ad2-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0ceb4511-88da-4cb0-8dd1-61d4a7cc2ad2-part15', 'scsi-SQEMU_QEMU_HARDDISK_0ceb4511-88da-4cb0-8dd1-61d4a7cc2ad2-part15'], 'uuids': ['5C78-612A'], 
'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0ceb4511-88da-4cb0-8dd1-61d4a7cc2ad2-part1', 'scsi-SQEMU_QEMU_HARDDISK_0ceb4511-88da-4cb0-8dd1-61d4a7cc2ad2-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-03-25 06:03:58.766621 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-25 06:03:58.766627 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-25 06:03:58.766637 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-1bYlOV-1jbr-UHbI-YS3h-AMWk-Kb5Q-rnOpnI', 'dm-uuid-CRYPT-LUKS2-e67f6cc7d6f841389e65f811c858cad0-1bYlOV-1jbr-UHbI-YS3h-AMWk-Kb5Q-rnOpnI'], 'uuids': [], 'labels': [], 'masters': 
[]}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-03-25 06:04:00.193082 | orchestrator | skipping: [testbed-node-5] 2026-03-25 06:04:00.193184 | orchestrator | 2026-03-25 06:04:00.193200 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-03-25 06:04:00.193212 | orchestrator | Wednesday 25 March 2026 06:03:59 +0000 (0:00:01.961) 0:56:16.935 ******* 2026-03-25 06:04:00.193227 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 06:04:00.193242 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--2eb637af--fcba--56ed--b416--856a8f376a6e-osd--block--2eb637af--fcba--56ed--b416--856a8f376a6e', 'dm-uuid-LVM-I4brnFGe2wqMxfNLTgnFWAlpGdDDIQ6ufudluz5gbOp2W0Ru1BAN3Lof8sluy2g8'], 'uuids': ['a582f89c-a8ac-4a87-8a0b-f7c0ca2abef4'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'eaa5e6a9', 'removable': '0', 'support_discard': 
'4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['fudluz-5gbO-p2W0-Ru1B-AN3L-of8s-luy2g8']}}, 'ansible_loop_var': 'item'})  2026-03-25 06:04:00.193273 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_99e65ea9-8a8c-4114-a95e-6d6b779e8981', 'scsi-SQEMU_QEMU_HARDDISK_99e65ea9-8a8c-4114-a95e-6d6b779e8981'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '99e65ea9', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 06:04:00.193317 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 06:04:00.193355 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-I510NI-gVOy-fVrn-Rpok-wKnF-L9wv-pxblpK', 'scsi-0QEMU_QEMU_HARDDISK_e0cf0e31-edea-4833-ac86-8b3021cd24a1', 'scsi-SQEMU_QEMU_HARDDISK_e0cf0e31-edea-4833-ac86-8b3021cd24a1'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'e0cf0e31', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--a7f517e2--016b--5c10--ac21--20c48339115f-osd--block--a7f517e2--016b--5c10--ac21--20c48339115f']}}, 'ansible_loop_var': 'item'})  2026-03-25 06:04:00.193380 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--8ec576d5--4336--523a--896e--5358117b2269-osd--block--8ec576d5--4336--523a--896e--5358117b2269', 'dm-uuid-LVM-AjTepPC9YBwKeu38Jf1R7NGMBGxHD64b1bYlOV1jbrUHbIYS3hAMWkKb5QrnOpnI'], 'uuids': ['e67f6cc7-d6f8-4138-9e65-f811c858cad0'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'fd5367dc', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['1bYlOV-1jbr-UHbI-YS3h-AMWk-Kb5Q-rnOpnI']}}, 'ansible_loop_var': 'item'})  2026-03-25 06:04:00.193400 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_82545a3e-e213-461e-98f1-90cf18f03519', 'scsi-SQEMU_QEMU_HARDDISK_82545a3e-e213-461e-98f1-90cf18f03519'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '82545a3e', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 06:04:00.193439 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 06:04:00.193460 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-to62r3-CyRH-TR4y-N8rR-DKBC-8SUV-NrvEkE', 'scsi-0QEMU_QEMU_HARDDISK_04cbe055-706b-4644-9107-d77d79be5a29', 'scsi-SQEMU_QEMU_HARDDISK_04cbe055-706b-4644-9107-d77d79be5a29'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '04cbe055', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--f303e98e--56ea--50bc--9e1c--3ccda4672060-osd--block--f303e98e--56ea--50bc--9e1c--3ccda4672060']}}, 'ansible_loop_var': 'item'})  2026-03-25 06:04:00.193481 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 06:04:00.193514 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-25-01-42-59-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 06:04:00.254698 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 
'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 06:04:00.254787 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 06:04:00.254866 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 06:04:00.254881 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-25-01-43-03-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 
'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 06:04:00.254893 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-63eFyg-4nkE-r3IX-y7pO-0UwA-AWeQ-8GeZyo', 'dm-uuid-CRYPT-LUKS2-10d41a0c964d43008e142cbf5f4d58c4-63eFyg-4nkE-r3IX-y7pO-0UwA-AWeQ-8GeZyo'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 06:04:00.254905 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 06:04:00.254934 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': 
{'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 06:04:00.254947 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-UiFeyH-JNag-Huqx-rmYC-APg3-v2oc-gFP63X', 'dm-uuid-CRYPT-LUKS2-306c9f3fcb174ac6ad8e271da2bf30e2-UiFeyH-JNag-Huqx-rmYC-APg3-v2oc-gFP63X'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 06:04:00.254973 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--a7f517e2--016b--5c10--ac21--20c48339115f-osd--block--a7f517e2--016b--5c10--ac21--20c48339115f', 'dm-uuid-LVM-ppL9nqq4Eft0DXjzsCdcW3axPqGhidIo63eFyg4nkEr3IXy7pO0UwAAWeQ8GeZyo'], 'uuids': ['10d41a0c-964d-4300-8e14-2cbf5f4d58c4'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'e0cf0e31', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 
'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['63eFyg-4nkE-r3IX-y7pO-0UwA-AWeQ-8GeZyo']}}, 'ansible_loop_var': 'item'})  2026-03-25 06:04:00.254988 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 06:04:00.255001 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-ot6f5w-cwBB-rMe8-ml4g-P1Wb-D3d5-I1RZ9d', 'scsi-0QEMU_QEMU_HARDDISK_eaa5e6a9-2c24-4b33-854e-103871b2e9c6', 'scsi-SQEMU_QEMU_HARDDISK_eaa5e6a9-2c24-4b33-854e-103871b2e9c6'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'eaa5e6a9', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--2eb637af--fcba--56ed--b416--856a8f376a6e-osd--block--2eb637af--fcba--56ed--b416--856a8f376a6e']}}, 'ansible_loop_var': 'item'})  2026-03-25 06:04:00.255023 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--f303e98e--56ea--50bc--9e1c--3ccda4672060-osd--block--f303e98e--56ea--50bc--9e1c--3ccda4672060', 'dm-uuid-LVM-UU9fet4LjPs1QLROYR3DS61lWfbcudTJUiFeyHJNagHuqxrmYCAPg3v2ocgFP63X'], 'uuids': ['306c9f3f-cb17-4ac6-ad8e-271da2bf30e2'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '04cbe055', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['UiFeyH-JNag-Huqx-rmYC-APg3-v2oc-gFP63X']}}, 'ansible_loop_var': 'item'})  2026-03-25 06:04:00.313235 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 06:04:00.313403 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 
'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-FUT1Bq-riIG-e3wV-m2Zc-DHH8-HB53-ximoP3', 'scsi-0QEMU_QEMU_HARDDISK_fd5367dc-993e-4d7d-b2a6-757e2a17e9b7', 'scsi-SQEMU_QEMU_HARDDISK_fd5367dc-993e-4d7d-b2a6-757e2a17e9b7'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'fd5367dc', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--8ec576d5--4336--523a--896e--5358117b2269-osd--block--8ec576d5--4336--523a--896e--5358117b2269']}}, 'ansible_loop_var': 'item'})  2026-03-25 06:04:00.313434 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 06:04:00.313467 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5418d243-c22a-425d-8a7d-7c43bd549130', 'scsi-SQEMU_QEMU_HARDDISK_5418d243-c22a-425d-8a7d-7c43bd549130'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '5418d243', 
'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5418d243-c22a-425d-8a7d-7c43bd549130-part16', 'scsi-SQEMU_QEMU_HARDDISK_5418d243-c22a-425d-8a7d-7c43bd549130-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5418d243-c22a-425d-8a7d-7c43bd549130-part14', 'scsi-SQEMU_QEMU_HARDDISK_5418d243-c22a-425d-8a7d-7c43bd549130-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5418d243-c22a-425d-8a7d-7c43bd549130-part15', 'scsi-SQEMU_QEMU_HARDDISK_5418d243-c22a-425d-8a7d-7c43bd549130-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5418d243-c22a-425d-8a7d-7c43bd549130-part1', 'scsi-SQEMU_QEMU_HARDDISK_5418d243-c22a-425d-8a7d-7c43bd549130-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 06:04:00.313495 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0ceb4511-88da-4cb0-8dd1-61d4a7cc2ad2', 'scsi-SQEMU_QEMU_HARDDISK_0ceb4511-88da-4cb0-8dd1-61d4a7cc2ad2'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '0ceb4511', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0ceb4511-88da-4cb0-8dd1-61d4a7cc2ad2-part16', 'scsi-SQEMU_QEMU_HARDDISK_0ceb4511-88da-4cb0-8dd1-61d4a7cc2ad2-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0ceb4511-88da-4cb0-8dd1-61d4a7cc2ad2-part14', 'scsi-SQEMU_QEMU_HARDDISK_0ceb4511-88da-4cb0-8dd1-61d4a7cc2ad2-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0ceb4511-88da-4cb0-8dd1-61d4a7cc2ad2-part15', 'scsi-SQEMU_QEMU_HARDDISK_0ceb4511-88da-4cb0-8dd1-61d4a7cc2ad2-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0ceb4511-88da-4cb0-8dd1-61d4a7cc2ad2-part1', 'scsi-SQEMU_QEMU_HARDDISK_0ceb4511-88da-4cb0-8dd1-61d4a7cc2ad2-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 
'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 06:04:00.313506 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 06:04:00.313517 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 06:04:00.313535 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 
'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 06:04:30.259301 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 06:04:30.259420 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-1bYlOV-1jbr-UHbI-YS3h-AMWk-Kb5Q-rnOpnI', 'dm-uuid-CRYPT-LUKS2-e67f6cc7d6f841389e65f811c858cad0-1bYlOV-1jbr-UHbI-YS3h-AMWk-Kb5Q-rnOpnI'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 06:04:30.259438 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | 
bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-fudluz-5gbO-p2W0-Ru1B-AN3L-of8s-luy2g8', 'dm-uuid-CRYPT-LUKS2-a582f89ca8ac4a878a0bf7c0ca2abef4-fudluz-5gbO-p2W0-Ru1B-AN3L-of8s-luy2g8'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-25 06:04:30.259452 | orchestrator | skipping: [testbed-node-5]
2026-03-25 06:04:30.259465 | orchestrator | skipping: [testbed-node-3]
2026-03-25 06:04:30.259477 | orchestrator |
2026-03-25 06:04:30.259489 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-03-25 06:04:30.259502 | orchestrator | Wednesday 25 March 2026 06:04:01 +0000 (0:00:01.576) 0:56:18.511 *******
2026-03-25 06:04:30.259513 | orchestrator | ok: [testbed-node-3]
2026-03-25 06:04:30.259524 | orchestrator | ok: [testbed-node-5]
2026-03-25 06:04:30.259535 | orchestrator |
2026-03-25 06:04:30.259546 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-03-25 06:04:30.259556 | orchestrator | Wednesday 25 March 2026 06:04:03 +0000 (0:00:01.639) 0:56:20.151 *******
2026-03-25 06:04:30.259567 | orchestrator | ok: [testbed-node-3]
2026-03-25 06:04:30.259578 | orchestrator | ok: [testbed-node-5]
2026-03-25 06:04:30.259588 | orchestrator |
2026-03-25 06:04:30.259599 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-03-25 06:04:30.259610 | orchestrator | Wednesday 25 March 2026 06:04:04 +0000 (0:00:01.265) 0:56:21.416 *******
2026-03-25 06:04:30.259621 | orchestrator | ok: [testbed-node-3]
2026-03-25 06:04:30.259631 | orchestrator | ok: [testbed-node-5]
2026-03-25 06:04:30.259642 | orchestrator |
2026-03-25 06:04:30.259674 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-03-25 06:04:30.259686 | orchestrator | Wednesday 25 March 2026 06:04:06 +0000 (0:00:01.609) 0:56:23.026 *******
2026-03-25 06:04:30.259697 | orchestrator | skipping: [testbed-node-3]
2026-03-25 06:04:30.259708 | orchestrator | skipping: [testbed-node-5]
2026-03-25 06:04:30.259718 | orchestrator |
2026-03-25 06:04:30.259729 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-03-25 06:04:30.259739 | orchestrator | Wednesday 25 March 2026 06:04:07 +0000 (0:00:01.222) 0:56:24.248 *******
2026-03-25 06:04:30.259750 | orchestrator | skipping: [testbed-node-3]
2026-03-25 06:04:30.259760 | orchestrator | skipping: [testbed-node-5]
2026-03-25 06:04:30.259771 | orchestrator |
2026-03-25 06:04:30.259782 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-03-25 06:04:30.259792 | orchestrator | Wednesday 25 March 2026 06:04:09 +0000 (0:00:01.808) 0:56:26.056 *******
2026-03-25 06:04:30.259805 | orchestrator | skipping: [testbed-node-3]
2026-03-25 06:04:30.259818 | orchestrator | skipping: [testbed-node-5]
2026-03-25 06:04:30.259893 | orchestrator |
2026-03-25 06:04:30.259907 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-03-25 06:04:30.259921 | orchestrator | Wednesday 25 March 2026 06:04:10 +0000 (0:00:01.311) 0:56:27.367 *******
2026-03-25 06:04:30.259932 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2026-03-25 06:04:30.259943 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2026-03-25 06:04:30.259954 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2026-03-25 06:04:30.259964 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2026-03-25 06:04:30.259975 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2026-03-25 06:04:30.260003 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2026-03-25 06:04:30.260014 | orchestrator |
2026-03-25 06:04:30.260025 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-03-25 06:04:30.260035 | orchestrator | Wednesday 25 March 2026 06:04:12 +0000 (0:00:01.835) 0:56:29.203 *******
2026-03-25 06:04:30.260046 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-03-25 06:04:30.260057 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-03-25 06:04:30.260103 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-03-25 06:04:30.260125 | orchestrator | skipping: [testbed-node-3]
2026-03-25 06:04:30.260136 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-03-25 06:04:30.260155 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-03-25 06:04:30.260166 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-03-25 06:04:30.260176 | orchestrator | skipping: [testbed-node-5]
2026-03-25 06:04:30.260199 | orchestrator |
2026-03-25 06:04:30.260210 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-03-25 06:04:30.260221 | orchestrator | Wednesday 25 March 2026 06:04:13 +0000 (0:00:01.357) 0:56:30.561 *******
2026-03-25 06:04:30.260232 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-5
2026-03-25 06:04:30.260244 | orchestrator |
2026-03-25 06:04:30.260254 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-03-25 06:04:30.260266 | orchestrator | Wednesday 25 March 2026 06:04:14 +0000 (0:00:01.293) 0:56:31.855 *******
2026-03-25 06:04:30.260277 | orchestrator | skipping: [testbed-node-3]
2026-03-25 06:04:30.260287 | orchestrator | skipping: [testbed-node-5]
2026-03-25 06:04:30.260298 | orchestrator |
2026-03-25 06:04:30.260309 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-03-25 06:04:30.260319 | orchestrator | Wednesday 25 March 2026 06:04:16 +0000 (0:00:01.299) 0:56:33.155 *******
2026-03-25 06:04:30.260330 | orchestrator | skipping: [testbed-node-3]
2026-03-25 06:04:30.260340 | orchestrator | skipping: [testbed-node-5]
2026-03-25 06:04:30.260351 | orchestrator |
2026-03-25 06:04:30.260362 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-03-25 06:04:30.260382 | orchestrator | Wednesday 25 March 2026 06:04:17 +0000 (0:00:01.629) 0:56:34.785 *******
2026-03-25 06:04:30.260393 | orchestrator | skipping: [testbed-node-3]
2026-03-25 06:04:30.260404 | orchestrator | skipping: [testbed-node-5]
2026-03-25 06:04:30.260414 | orchestrator |
2026-03-25 06:04:30.260425 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-03-25 06:04:30.260435 | orchestrator | Wednesday 25 March 2026 06:04:19 +0000 (0:00:01.273) 0:56:36.058 *******
2026-03-25 06:04:30.260446 | orchestrator | ok: [testbed-node-3]
2026-03-25 06:04:30.260457 | orchestrator | ok: [testbed-node-5]
2026-03-25 06:04:30.260467 | orchestrator |
2026-03-25 06:04:30.260478 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-03-25 06:04:30.260488 | orchestrator | Wednesday 25 March 2026 06:04:20 +0000 (0:00:01.466) 0:56:37.525 *******
2026-03-25 06:04:30.260499 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-25 06:04:30.260509 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-25 06:04:30.260520 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-25 06:04:30.260530 | orchestrator | skipping: [testbed-node-3]
2026-03-25 06:04:30.260541 | orchestrator |
2026-03-25 06:04:30.260552 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-03-25 06:04:30.260562 | orchestrator | Wednesday 25 March 2026 06:04:21 +0000 (0:00:01.480) 0:56:39.006 *******
2026-03-25 06:04:30.260573 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-25 06:04:30.260583 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-25 06:04:30.260594 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-25 06:04:30.260605 | orchestrator | skipping: [testbed-node-3]
2026-03-25 06:04:30.260615 | orchestrator |
2026-03-25 06:04:30.260626 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-03-25 06:04:30.260636 | orchestrator | Wednesday 25 March 2026 06:04:23 +0000 (0:00:01.416) 0:56:40.422 *******
2026-03-25 06:04:30.260647 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-25 06:04:30.260658 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-25 06:04:30.260668 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-25 06:04:30.260679 | orchestrator | skipping: [testbed-node-3]
2026-03-25 06:04:30.260689 | orchestrator |
2026-03-25 06:04:30.260700 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-03-25 06:04:30.260710 | orchestrator | Wednesday 25 March 2026 06:04:24 +0000 (0:00:01.419) 0:56:41.842 *******
2026-03-25 06:04:30.260721 | orchestrator | ok: [testbed-node-3]
2026-03-25 06:04:30.260732 | orchestrator | ok: [testbed-node-5]
2026-03-25 06:04:30.260742 | orchestrator |
2026-03-25 06:04:30.260753 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-03-25 06:04:30.260764 | orchestrator | Wednesday 25 March 2026 06:04:26 +0000 (0:00:01.287) 0:56:43.130 *******
2026-03-25 06:04:30.260774 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-03-25 06:04:30.260785 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-03-25 06:04:30.260795 | orchestrator |
2026-03-25 06:04:30.260806 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-03-25 06:04:30.260816 | orchestrator | Wednesday 25 March 2026 06:04:27 +0000 (0:00:01.830) 0:56:44.960 *******
2026-03-25 06:04:30.260827 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-25 06:04:30.260856 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-25 06:04:30.260867 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-25 06:04:30.260877 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-03-25 06:04:30.260894 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-03-25 06:05:13.387997 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-03-25 06:05:13.388163 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-03-25 06:05:13.388181 | orchestrator |
2026-03-25 06:05:13.388193 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-03-25 06:05:13.388206 | orchestrator | Wednesday 25 March 2026 06:04:30 +0000 (0:00:02.296) 0:56:47.257 *******
2026-03-25 06:05:13.388232 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-25 06:05:13.388243 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-25 06:05:13.388254 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-25 06:05:13.388266 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-03-25 06:05:13.388277 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-03-25 06:05:13.388288 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-03-25 06:05:13.388298 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-03-25 06:05:13.388309 | orchestrator |
2026-03-25 06:05:13.388320 | orchestrator | TASK [Prevent restarts from the packaging] *************************************
2026-03-25 06:05:13.388331 | orchestrator | Wednesday 25 March 2026 06:04:32 +0000 (0:00:02.652) 0:56:49.910 *******
2026-03-25 06:05:13.388342 | orchestrator | skipping: [testbed-node-3]
2026-03-25 06:05:13.388353 | orchestrator | skipping: [testbed-node-5]
2026-03-25 06:05:13.388364 | orchestrator |
2026-03-25 06:05:13.388375 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-03-25 06:05:13.388385 | orchestrator | Wednesday 25 March 2026 06:04:34 +0000 (0:00:01.256) 0:56:51.167 *******
2026-03-25 06:05:13.388396 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-5
2026-03-25 06:05:13.388407 | orchestrator |
2026-03-25 06:05:13.388418 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-03-25 06:05:13.388429 | orchestrator | Wednesday 25 March 2026 06:04:35 +0000 (0:00:01.240) 0:56:52.408 *******
2026-03-25 06:05:13.388440 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-5
2026-03-25 06:05:13.388451 | orchestrator |
2026-03-25 06:05:13.388462 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-03-25 06:05:13.388473 | orchestrator | Wednesday 25 March 2026 06:04:36 +0000 (0:00:01.234) 0:56:53.642 *******
2026-03-25 06:05:13.388484 | orchestrator | skipping: [testbed-node-3]
2026-03-25 06:05:13.388497 | orchestrator | skipping: [testbed-node-5]
2026-03-25 06:05:13.388509 | orchestrator |
2026-03-25 06:05:13.388522 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-03-25 06:05:13.388534 | orchestrator | Wednesday 25 March 2026 06:04:38 +0000 (0:00:01.575) 0:56:55.218 *******
2026-03-25 06:05:13.388547 | orchestrator | ok: [testbed-node-3]
2026-03-25 06:05:13.388560 | orchestrator | ok: [testbed-node-5]
2026-03-25 06:05:13.388572 | orchestrator |
2026-03-25 06:05:13.388584 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-03-25 06:05:13.388597 | orchestrator | Wednesday 25 March 2026 06:04:39 +0000 (0:00:01.643) 0:56:56.861 *******
2026-03-25 06:05:13.388610 | orchestrator | ok: [testbed-node-3]
2026-03-25 06:05:13.388622 | orchestrator | ok: [testbed-node-5]
2026-03-25 06:05:13.388635 | orchestrator |
2026-03-25 06:05:13.388647 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-03-25 06:05:13.388659 | orchestrator | Wednesday 25 March 2026 06:04:41 +0000 (0:00:01.595) 0:56:58.457 *******
2026-03-25 06:05:13.388672 | orchestrator | ok: [testbed-node-3]
2026-03-25 06:05:13.388684 | orchestrator | ok: [testbed-node-5]
2026-03-25 06:05:13.388696 | orchestrator |
2026-03-25 06:05:13.388709 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-03-25 06:05:13.388721 | orchestrator | Wednesday 25 March 2026 06:04:43 +0000 (0:00:01.665) 0:57:00.122 *******
2026-03-25 06:05:13.388742 | orchestrator | skipping: [testbed-node-3]
2026-03-25 06:05:13.388755 | orchestrator | skipping: [testbed-node-5]
2026-03-25 06:05:13.388767 | orchestrator |
2026-03-25 06:05:13.388780 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-03-25 06:05:13.388793 | orchestrator | Wednesday 25 March 2026 06:04:44 +0000 (0:00:01.248) 0:57:01.370 *******
2026-03-25 06:05:13.388805 | orchestrator | skipping: [testbed-node-3]
2026-03-25 06:05:13.388819 | orchestrator | skipping: [testbed-node-5]
2026-03-25 06:05:13.388878 | orchestrator |
2026-03-25 06:05:13.388889 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-03-25 06:05:13.388900 | orchestrator | Wednesday 25 March 2026 06:04:45 +0000 (0:00:01.259) 0:57:02.630 *******
2026-03-25 06:05:13.388911 | orchestrator | skipping: [testbed-node-3]
2026-03-25 06:05:13.388922 | orchestrator | skipping: [testbed-node-5]
2026-03-25 06:05:13.388933 | orchestrator |
2026-03-25 06:05:13.388943 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-03-25 06:05:13.388954 | orchestrator | Wednesday 25 March 2026 06:04:46 +0000 (0:00:01.289) 0:57:03.920 *******
2026-03-25 06:05:13.388965 | orchestrator | ok: [testbed-node-3]
2026-03-25 06:05:13.388976 | orchestrator | ok: [testbed-node-5]
2026-03-25 06:05:13.388987 | orchestrator |
2026-03-25 06:05:13.388998 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-03-25 06:05:13.389008 | orchestrator | Wednesday 25 March 2026 06:04:48 +0000 (0:00:01.671) 0:57:05.592 *******
2026-03-25 06:05:13.389019 | orchestrator | ok: [testbed-node-3]
2026-03-25 06:05:13.389030 | orchestrator | ok: [testbed-node-5]
2026-03-25 06:05:13.389040 | orchestrator |
2026-03-25 06:05:13.389051 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-03-25 06:05:13.389062 | orchestrator | Wednesday 25 March 2026 06:04:50 +0000 (0:00:01.613) 0:57:07.205 *******
2026-03-25 06:05:13.389073 | orchestrator | skipping: [testbed-node-3]
2026-03-25 06:05:13.389084 | orchestrator | skipping: [testbed-node-5]
2026-03-25 06:05:13.389095 | orchestrator |
2026-03-25 06:05:13.389122 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-03-25 06:05:13.389134 | orchestrator | Wednesday 25 March 2026 06:04:51 +0000 (0:00:01.250) 0:57:08.456 *******
2026-03-25 06:05:13.389145 | orchestrator | skipping: [testbed-node-3]
2026-03-25 06:05:13.389155 | orchestrator | skipping: [testbed-node-5]
2026-03-25 06:05:13.389166 | orchestrator |
2026-03-25 06:05:13.389177 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-03-25 06:05:13.389188 | orchestrator | Wednesday 25 March 2026 06:04:52 +0000 (0:00:01.255) 0:57:09.711 *******
2026-03-25 06:05:13.389283 | orchestrator | ok: [testbed-node-3]
2026-03-25 06:05:13.389304 | orchestrator | ok: [testbed-node-5]
2026-03-25 06:05:13.389315 | orchestrator |
2026-03-25 06:05:13.389326 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-03-25 06:05:13.389337 | orchestrator | Wednesday 25 March 2026 06:04:53 +0000 (0:00:01.225) 0:57:10.937 *******
2026-03-25 06:05:13.389348 | orchestrator | ok: [testbed-node-3]
2026-03-25 06:05:13.389359 | orchestrator | ok: [testbed-node-5]
2026-03-25 06:05:13.389370 | orchestrator |
2026-03-25 06:05:13.389381 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-03-25 06:05:13.389391 | orchestrator | Wednesday 25 March 2026 06:04:55 +0000 (0:00:01.330) 0:57:12.267 *******
2026-03-25 06:05:13.389402 | orchestrator | ok: [testbed-node-3]
2026-03-25 06:05:13.389413 | orchestrator | ok: [testbed-node-5]
2026-03-25 06:05:13.389423 | orchestrator |
2026-03-25 06:05:13.389434 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-03-25 06:05:13.389445 | orchestrator | Wednesday 25 March 2026 06:04:56 +0000 (0:00:01.713) 0:57:13.980 *******
2026-03-25 06:05:13.389455 | orchestrator | skipping: [testbed-node-3]
2026-03-25 06:05:13.389466 | orchestrator | skipping: [testbed-node-5]
2026-03-25 06:05:13.389477 | orchestrator |
2026-03-25 06:05:13.389487 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-03-25 06:05:13.389506 | orchestrator | Wednesday 25 March 2026 06:04:58 +0000 (0:00:01.237) 0:57:15.218 *******
2026-03-25 06:05:13.389517 | orchestrator | skipping: [testbed-node-3]
2026-03-25 06:05:13.389528 | orchestrator | skipping: [testbed-node-5]
2026-03-25 06:05:13.389539 | orchestrator |
2026-03-25 06:05:13.389550 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-03-25 06:05:13.389560 | orchestrator | Wednesday 25 March 2026 06:04:59 +0000 (0:00:01.230) 0:57:16.449 *******
2026-03-25 06:05:13.389571 | orchestrator | skipping: [testbed-node-3]
2026-03-25 06:05:13.389582 | orchestrator | skipping: [testbed-node-5]
2026-03-25 06:05:13.389592 | orchestrator |
2026-03-25 06:05:13.389603 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-03-25 06:05:13.389614 | orchestrator | Wednesday 25 March 2026 06:05:00 +0000 (0:00:01.288) 0:57:17.738 *******
2026-03-25 06:05:13.389624 | orchestrator | ok: [testbed-node-3]
2026-03-25 06:05:13.389635 | orchestrator | ok: [testbed-node-5]
2026-03-25 06:05:13.389646 | orchestrator |
2026-03-25 06:05:13.389657 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-03-25 06:05:13.389668 | orchestrator | Wednesday 25 March 2026 06:05:02 +0000 (0:00:01.298) 0:57:19.036 *******
2026-03-25 06:05:13.389678 | orchestrator | ok: [testbed-node-3]
2026-03-25 06:05:13.389689 | orchestrator | ok: [testbed-node-5]
2026-03-25 06:05:13.389700 | orchestrator |
2026-03-25 06:05:13.389711 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-03-25 06:05:13.389721 | orchestrator | Wednesday 25 March 2026 06:05:03 +0000 (0:00:01.256) 0:57:20.293 *******
2026-03-25 06:05:13.389732 | orchestrator | skipping: [testbed-node-3]
2026-03-25 06:05:13.389743 | orchestrator | skipping: [testbed-node-5]
2026-03-25 06:05:13.389754 | orchestrator |
2026-03-25 06:05:13.389765 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-03-25 06:05:13.389775 | orchestrator | Wednesday 25 March 2026 06:05:04 +0000 (0:00:01.611) 0:57:21.904 *******
2026-03-25 06:05:13.389786 | orchestrator | skipping: [testbed-node-3]
2026-03-25 06:05:13.389797 | orchestrator | skipping: [testbed-node-5]
2026-03-25 06:05:13.389808 | orchestrator |
2026-03-25 06:05:13.389818 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-03-25 06:05:13.389866 | orchestrator | Wednesday 25 March 2026 06:05:06 +0000 (0:00:01.274) 0:57:23.178 *******
2026-03-25 06:05:13.389884 | orchestrator | skipping: [testbed-node-3]
2026-03-25 06:05:13.389901 | orchestrator | skipping: [testbed-node-5]
2026-03-25 06:05:13.389918 | orchestrator |
2026-03-25 06:05:13.389936 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-03-25 06:05:13.389955 | orchestrator | Wednesday 25 March 2026 06:05:07 +0000 (0:00:01.225) 0:57:24.404 *******
2026-03-25 06:05:13.389974 | orchestrator | skipping: [testbed-node-3]
2026-03-25 06:05:13.389992 | orchestrator | skipping: [testbed-node-5]
2026-03-25 06:05:13.390007 | orchestrator |
2026-03-25 06:05:13.390090 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-03-25 06:05:13.390104 | orchestrator | Wednesday 25 March 2026 06:05:08 +0000 (0:00:01.194) 0:57:25.599 *******
2026-03-25 06:05:13.390115 | orchestrator | skipping: [testbed-node-3]
2026-03-25 06:05:13.390126 | orchestrator | skipping: [testbed-node-5]
2026-03-25 06:05:13.390136 | orchestrator |
2026-03-25 06:05:13.390147 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-03-25 06:05:13.390158 | orchestrator | Wednesday 25 March 2026 06:05:09 +0000 (0:00:01.177) 0:57:26.777 *******
2026-03-25 06:05:13.390168 | orchestrator | skipping: [testbed-node-3]
2026-03-25 06:05:13.390179 | orchestrator | skipping: [testbed-node-5]
2026-03-25 06:05:13.390189 | orchestrator |
2026-03-25 06:05:13.390200 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-03-25 06:05:13.390210 | orchestrator | Wednesday 25 March 2026 06:05:10 +0000 (0:00:01.181) 0:57:27.959 *******
2026-03-25 06:05:13.390221 | orchestrator | skipping: [testbed-node-3]
2026-03-25 06:05:13.390232 | orchestrator | skipping: [testbed-node-5]
2026-03-25 06:05:13.390253 | orchestrator |
2026-03-25 06:05:13.390264 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-03-25 06:05:13.390275 | orchestrator | Wednesday 25 March 2026 06:05:12 +0000 (0:00:01.200) 0:57:29.160 *******
2026-03-25 06:05:13.390286 | orchestrator | skipping: [testbed-node-3]
2026-03-25 06:05:13.390296 | orchestrator | skipping: [testbed-node-5]
2026-03-25 06:05:13.390307 | orchestrator |
2026-03-25 06:05:13.390318 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-03-25 06:05:13.390340 | orchestrator | Wednesday 25 March 2026 06:05:13 +0000 (0:00:01.229) 0:57:30.390 *******
2026-03-25 06:05:58.784451 | orchestrator | skipping: [testbed-node-3]
2026-03-25 06:05:58.784570 | orchestrator | skipping: [testbed-node-5]
2026-03-25 06:05:58.784586 | orchestrator |
2026-03-25 06:05:58.784597 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-03-25 06:05:58.784608 | orchestrator | Wednesday 25 March 2026 06:05:14 +0000 (0:00:01.358) 0:57:31.748 *******
2026-03-25 06:05:58.784616 | orchestrator | skipping: [testbed-node-3]
2026-03-25 06:05:58.784625 | orchestrator | skipping: [testbed-node-5]
2026-03-25 06:05:58.784634 | orchestrator |
2026-03-25 06:05:58.784658 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-03-25 06:05:58.784673 | orchestrator | Wednesday 25 March 2026 06:05:15 +0000 (0:00:01.267) 0:57:33.015 *******
2026-03-25 06:05:58.784688 | orchestrator | skipping: [testbed-node-3]
2026-03-25 06:05:58.784702 | orchestrator | skipping: [testbed-node-5]
2026-03-25 06:05:58.784716 | orchestrator |
2026-03-25 06:05:58.784731 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-03-25 06:05:58.784745 | orchestrator | Wednesday 25 March 2026 06:05:17 +0000 (0:00:01.256) 0:57:34.271 *******
2026-03-25 06:05:58.784759 | orchestrator | skipping: [testbed-node-3]
2026-03-25 06:05:58.784773 | orchestrator | skipping: [testbed-node-5]
2026-03-25 06:05:58.784787 | orchestrator |
2026-03-25 06:05:58.784803 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-03-25 06:05:58.784818 | orchestrator | Wednesday 25 March 2026 06:05:18 +0000 (0:00:01.228) 0:57:35.500 *******
2026-03-25 06:05:58.784894 | orchestrator | ok: [testbed-node-3]
2026-03-25 06:05:58.784905 | orchestrator | ok: [testbed-node-5]
2026-03-25 06:05:58.784914 | orchestrator |
2026-03-25 06:05:58.784923 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-03-25 06:05:58.784931 | orchestrator | Wednesday 25 March 2026 06:05:20 +0000 (0:00:02.448) 0:57:37.948 *******
2026-03-25 06:05:58.784940 | orchestrator | ok: [testbed-node-3]
2026-03-25 06:05:58.784949 | orchestrator | ok: [testbed-node-5]
2026-03-25 06:05:58.784958 | orchestrator |
2026-03-25 06:05:58.784966 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-03-25 06:05:58.784975 | orchestrator | Wednesday 25 March 2026 06:05:23 +0000 (0:00:02.349) 0:57:40.297 *******
2026-03-25 06:05:58.784986 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3, testbed-node-5
2026-03-25 06:05:58.784996 | orchestrator |
2026-03-25 06:05:58.785006 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-03-25 06:05:58.785016 | orchestrator | Wednesday 25 March 2026 06:05:24 +0000 (0:00:01.385) 0:57:41.684 *******
2026-03-25 06:05:58.785026 | orchestrator | skipping: [testbed-node-3]
2026-03-25 06:05:58.785036 | orchestrator | skipping: [testbed-node-5]
2026-03-25 06:05:58.785046 | orchestrator |
2026-03-25 06:05:58.785056 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-03-25 06:05:58.785066 | orchestrator | Wednesday 25 March 2026 06:05:25 +0000 (0:00:01.313) 0:57:42.997 *******
2026-03-25 06:05:58.785075 | orchestrator | skipping: [testbed-node-3]
2026-03-25 06:05:58.785086 | orchestrator | skipping: [testbed-node-5]
2026-03-25 06:05:58.785096 | orchestrator |
2026-03-25 06:05:58.785105 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-03-25 06:05:58.785115 | orchestrator | Wednesday 25 March 2026 06:05:27 +0000 (0:00:01.273) 0:57:44.271 *******
2026-03-25 06:05:58.785148 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-25 06:05:58.785159 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-25 06:05:58.785170 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-25 06:05:58.785179 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-25 06:05:58.785189 | orchestrator |
2026-03-25 06:05:58.785199 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-03-25 06:05:58.785208 | orchestrator | Wednesday 25 March 2026 06:05:29 +0000 (0:00:01.974) 0:57:46.245 *******
2026-03-25 06:05:58.785218 | orchestrator | ok: [testbed-node-3]
2026-03-25 06:05:58.785228 | orchestrator | ok: [testbed-node-5]
2026-03-25 06:05:58.785238 | orchestrator |
2026-03-25 06:05:58.785253 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-03-25 06:05:58.785268 | orchestrator | Wednesday 25 March 2026 06:05:30 +0000 (0:00:01.533) 0:57:47.779 *******
2026-03-25 06:05:58.785296 | orchestrator | skipping: [testbed-node-3]
2026-03-25 06:05:58.785313 | orchestrator | skipping: [testbed-node-5]
2026-03-25 06:05:58.785329 | orchestrator |
2026-03-25 06:05:58.785347 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-03-25 06:05:58.785365 | orchestrator | Wednesday 25 March 2026 06:05:32 +0000 (0:00:01.312) 0:57:49.092 *******
2026-03-25 06:05:58.785374 | orchestrator | skipping: [testbed-node-3]
2026-03-25 06:05:58.785383 | orchestrator | skipping: [testbed-node-5]
2026-03-25 06:05:58.785391 | orchestrator |
2026-03-25 06:05:58.785400 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-03-25 06:05:58.785408 | orchestrator | Wednesday 25 March 2026 06:05:33 +0000 (0:00:01.251) 0:57:50.343 *******
2026-03-25 06:05:58.785417 | orchestrator | skipping: [testbed-node-3]
2026-03-25 06:05:58.785425 | orchestrator | skipping: [testbed-node-5]
2026-03-25 06:05:58.785434 | orchestrator |
2026-03-25 06:05:58.785442 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-03-25 06:05:58.785451 | orchestrator | Wednesday 25 March 2026 06:05:34 +0000 (0:00:01.314) 0:57:51.658 *******
2026-03-25 06:05:58.785464 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3, testbed-node-5
2026-03-25 06:05:58.785479 | orchestrator |
2026-03-25 06:05:58.785493 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-03-25 06:05:58.785509 | orchestrator | Wednesday 25 March 2026 06:05:35 +0000 (0:00:01.270) 0:57:52.929 *******
2026-03-25 06:05:58.785522 | orchestrator | ok: [testbed-node-3]
2026-03-25 06:05:58.785538 | orchestrator | ok: [testbed-node-5]
2026-03-25 06:05:58.785553 | orchestrator |
2026-03-25 06:05:58.785569 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-03-25 06:05:58.785606 | orchestrator | Wednesday 25 March 2026 06:05:38 +0000 (0:00:02.176) 0:57:55.105 *******
2026-03-25 06:05:58.785617 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-03-25 06:05:58.785625 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)
2026-03-25 06:05:58.785634 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)
2026-03-25 06:05:58.785643 | orchestrator | skipping: [testbed-node-3]
2026-03-25 06:05:58.785659 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-03-25 06:05:58.785667 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)
2026-03-25 06:05:58.785676 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)
2026-03-25 06:05:58.785685 | orchestrator | skipping: [testbed-node-5]
2026-03-25 06:05:58.785693 | orchestrator |
2026-03-25 06:05:58.785702 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-03-25 06:05:58.785710 | orchestrator | Wednesday 25 March 2026 06:05:39 +0000 (0:00:01.278) 0:57:56.384 *******
2026-03-25 06:05:58.785719 | orchestrator | skipping: [testbed-node-3]
2026-03-25 06:05:58.785734 | orchestrator | skipping: [testbed-node-5]
2026-03-25 06:05:58.785743 | orchestrator |
2026-03-25 06:05:58.785751 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-03-25 06:05:58.785760 | orchestrator | Wednesday 25 March 2026 06:05:40 +0000 (0:00:01.267) 0:57:57.652 *******
2026-03-25 06:05:58.785768 | orchestrator | skipping: [testbed-node-3]
2026-03-25 06:05:58.785777 | orchestrator |
2026-03-25 06:05:58.785785 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-03-25 06:05:58.785794 | orchestrator | Wednesday 25 March 2026 06:05:41 +0000 (0:00:01.171) 0:57:58.824 *******
2026-03-25 06:05:58.785803 | orchestrator | skipping: [testbed-node-3]
2026-03-25 06:05:58.785811 | orchestrator | skipping: [testbed-node-5]
2026-03-25 06:05:58.785820 | orchestrator |
2026-03-25 06:05:58.785852 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-03-25 06:05:58.785862 | orchestrator | Wednesday 25 March 2026 06:05:43 +0000 (0:00:01.254) 0:58:00.078 *******
2026-03-25 06:05:58.785871 | orchestrator | skipping: [testbed-node-3]
2026-03-25 06:05:58.785880 | orchestrator | skipping: [testbed-node-5]
2026-03-25 06:05:58.785888 | orchestrator |
2026-03-25 06:05:58.785897 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-03-25 06:05:58.785906 | orchestrator | Wednesday 25 March 2026 06:05:44 +0000 (0:00:01.322) 0:58:01.521 *******
2026-03-25 06:05:58.785914 | orchestrator | skipping: [testbed-node-3]
2026-03-25 06:05:58.785923 | orchestrator | skipping: [testbed-node-5]
2026-03-25 06:05:58.785931 | orchestrator |
2026-03-25 06:05:58.785940 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-03-25 06:05:58.785949 | orchestrator | Wednesday 25 March 2026 06:05:45 +0000 (0:00:02.604) 0:58:02.844 *******
2026-03-25 06:05:58.785958 | orchestrator | ok: [testbed-node-3]
2026-03-25 06:05:58.785966 | orchestrator | ok: [testbed-node-5]
2026-03-25 06:05:58.785975 | orchestrator |
2026-03-25 06:05:58.785983 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-03-25 06:05:58.785992 | orchestrator | Wednesday 25 March 2026 06:05:48 +0000 (0:00:02.604) 0:58:05.449 *******
2026-03-25 06:05:58.786001 | orchestrator | ok: [testbed-node-3]
2026-03-25 06:05:58.786009 | orchestrator | ok: [testbed-node-5]
2026-03-25 06:05:58.786068 | orchestrator |
2026-03-25 06:05:58.786080 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-03-25 06:05:58.786089 | orchestrator | Wednesday 25 March 2026 06:05:49 +0000 (0:00:01.238) 0:58:06.688 *******
2026-03-25 06:05:58.786098 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3, testbed-node-5
2026-03-25 06:05:58.786107 | orchestrator |
2026-03-25 06:05:58.786116 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-03-25 06:05:58.786124 | orchestrator | Wednesday 25 March 2026 06:05:50 +0000 (0:00:01.276) 0:58:07.964 *******
2026-03-25 06:05:58.786133 | orchestrator | skipping: [testbed-node-3]
2026-03-25 06:05:58.786141 | orchestrator | skipping: [testbed-node-5]
2026-03-25 06:05:58.786150 | orchestrator |
2026-03-25 06:05:58.786159 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-03-25 06:05:58.786198 | orchestrator | Wednesday 25 March 2026 06:05:52 +0000 (0:00:01.240) 0:58:09.204 *******
2026-03-25 06:05:58.786208 | orchestrator | skipping: [testbed-node-3]
2026-03-25 06:05:58.786216 | orchestrator | skipping: [testbed-node-5]
2026-03-25 06:05:58.786225 | orchestrator |
2026-03-25 06:05:58.786233 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-03-25 06:05:58.786242 | orchestrator | Wednesday 25 March 2026 06:05:53 +0000 (0:00:01.252) 0:58:10.457 *******
2026-03-25 06:05:58.786253 | orchestrator | skipping: [testbed-node-3]
2026-03-25
06:05:58.786268 | orchestrator | skipping: [testbed-node-5] 2026-03-25 06:05:58.786282 | orchestrator | 2026-03-25 06:05:58.786296 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-03-25 06:05:58.786310 | orchestrator | Wednesday 25 March 2026 06:05:54 +0000 (0:00:01.221) 0:58:11.679 ******* 2026-03-25 06:05:58.786337 | orchestrator | skipping: [testbed-node-3] 2026-03-25 06:05:58.786351 | orchestrator | skipping: [testbed-node-5] 2026-03-25 06:05:58.786365 | orchestrator | 2026-03-25 06:05:58.786374 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-03-25 06:05:58.786383 | orchestrator | Wednesday 25 March 2026 06:05:56 +0000 (0:00:01.627) 0:58:13.306 ******* 2026-03-25 06:05:58.786391 | orchestrator | skipping: [testbed-node-3] 2026-03-25 06:05:58.786400 | orchestrator | skipping: [testbed-node-5] 2026-03-25 06:05:58.786408 | orchestrator | 2026-03-25 06:05:58.786417 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-03-25 06:05:58.786426 | orchestrator | Wednesday 25 March 2026 06:05:57 +0000 (0:00:01.209) 0:58:14.516 ******* 2026-03-25 06:05:58.786434 | orchestrator | skipping: [testbed-node-3] 2026-03-25 06:05:58.786443 | orchestrator | skipping: [testbed-node-5] 2026-03-25 06:05:58.786451 | orchestrator | 2026-03-25 06:05:58.786460 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-03-25 06:05:58.786478 | orchestrator | Wednesday 25 March 2026 06:05:58 +0000 (0:00:01.271) 0:58:15.787 ******* 2026-03-25 06:06:42.331354 | orchestrator | skipping: [testbed-node-3] 2026-03-25 06:06:42.331470 | orchestrator | skipping: [testbed-node-5] 2026-03-25 06:06:42.331486 | orchestrator | 2026-03-25 06:06:42.331498 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-03-25 06:06:42.331511 | orchestrator | Wednesday 25 
March 2026 06:06:00 +0000 (0:00:01.338) 0:58:17.126 ******* 2026-03-25 06:06:42.331522 | orchestrator | skipping: [testbed-node-3] 2026-03-25 06:06:42.331533 | orchestrator | skipping: [testbed-node-5] 2026-03-25 06:06:42.331544 | orchestrator | 2026-03-25 06:06:42.331571 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-03-25 06:06:42.331583 | orchestrator | Wednesday 25 March 2026 06:06:01 +0000 (0:00:01.302) 0:58:18.429 ******* 2026-03-25 06:06:42.331594 | orchestrator | ok: [testbed-node-3] 2026-03-25 06:06:42.331605 | orchestrator | ok: [testbed-node-5] 2026-03-25 06:06:42.331616 | orchestrator | 2026-03-25 06:06:42.331627 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-03-25 06:06:42.331638 | orchestrator | Wednesday 25 March 2026 06:06:02 +0000 (0:00:01.452) 0:58:19.882 ******* 2026-03-25 06:06:42.331650 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3, testbed-node-5 2026-03-25 06:06:42.331661 | orchestrator | 2026-03-25 06:06:42.331672 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-03-25 06:06:42.331683 | orchestrator | Wednesday 25 March 2026 06:06:04 +0000 (0:00:01.213) 0:58:21.096 ******* 2026-03-25 06:06:42.331694 | orchestrator | ok: [testbed-node-3] => (item=/etc/ceph) 2026-03-25 06:06:42.331705 | orchestrator | ok: [testbed-node-5] => (item=/etc/ceph) 2026-03-25 06:06:42.331716 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/) 2026-03-25 06:06:42.331727 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/) 2026-03-25 06:06:42.331737 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mon) 2026-03-25 06:06:42.331748 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mon) 2026-03-25 06:06:42.331758 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd) 2026-03-25 06:06:42.331769 | 
orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd) 2026-03-25 06:06:42.331779 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mds) 2026-03-25 06:06:42.331790 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mds) 2026-03-25 06:06:42.331800 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/tmp) 2026-03-25 06:06:42.331811 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/tmp) 2026-03-25 06:06:42.331821 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/crash) 2026-03-25 06:06:42.331866 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/crash) 2026-03-25 06:06:42.331878 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/radosgw) 2026-03-25 06:06:42.331889 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/radosgw) 2026-03-25 06:06:42.331923 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw) 2026-03-25 06:06:42.331936 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw) 2026-03-25 06:06:42.331948 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr) 2026-03-25 06:06:42.331960 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr) 2026-03-25 06:06:42.331973 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds) 2026-03-25 06:06:42.331985 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds) 2026-03-25 06:06:42.331997 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd) 2026-03-25 06:06:42.332009 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd) 2026-03-25 06:06:42.332021 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd) 2026-03-25 06:06:42.332035 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd) 2026-03-25 06:06:42.332048 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-03-25 06:06:42.332060 | 
orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-03-25 06:06:42.332072 | orchestrator | ok: [testbed-node-3] => (item=/var/run/ceph) 2026-03-25 06:06:42.332085 | orchestrator | ok: [testbed-node-5] => (item=/var/run/ceph) 2026-03-25 06:06:42.332097 | orchestrator | ok: [testbed-node-3] => (item=/var/log/ceph) 2026-03-25 06:06:42.332109 | orchestrator | ok: [testbed-node-5] => (item=/var/log/ceph) 2026-03-25 06:06:42.332121 | orchestrator | 2026-03-25 06:06:42.332135 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-03-25 06:06:42.332148 | orchestrator | Wednesday 25 March 2026 06:06:10 +0000 (0:00:06.715) 0:58:27.811 ******* 2026-03-25 06:06:42.332160 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-5 2026-03-25 06:06:42.332172 | orchestrator | 2026-03-25 06:06:42.332185 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] ***************** 2026-03-25 06:06:42.332197 | orchestrator | Wednesday 25 March 2026 06:06:12 +0000 (0:00:01.273) 0:58:29.085 ******* 2026-03-25 06:06:42.332211 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-03-25 06:06:42.332224 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-03-25 06:06:42.332237 | orchestrator | 2026-03-25 06:06:42.332248 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2026-03-25 06:06:42.332259 | orchestrator | Wednesday 25 March 2026 06:06:13 +0000 (0:00:01.625) 0:58:30.711 ******* 2026-03-25 06:06:42.332270 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-03-25 06:06:42.332296 | orchestrator | 
ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-03-25 06:06:42.332308 | orchestrator | 2026-03-25 06:06:42.332319 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-03-25 06:06:42.332329 | orchestrator | Wednesday 25 March 2026 06:06:16 +0000 (0:00:02.465) 0:58:33.176 ******* 2026-03-25 06:06:42.332340 | orchestrator | skipping: [testbed-node-3] 2026-03-25 06:06:42.332351 | orchestrator | skipping: [testbed-node-5] 2026-03-25 06:06:42.332362 | orchestrator | 2026-03-25 06:06:42.332379 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-03-25 06:06:42.332391 | orchestrator | Wednesday 25 March 2026 06:06:17 +0000 (0:00:01.309) 0:58:34.486 ******* 2026-03-25 06:06:42.332401 | orchestrator | skipping: [testbed-node-3] 2026-03-25 06:06:42.332412 | orchestrator | skipping: [testbed-node-5] 2026-03-25 06:06:42.332423 | orchestrator | 2026-03-25 06:06:42.332434 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-03-25 06:06:42.332444 | orchestrator | Wednesday 25 March 2026 06:06:18 +0000 (0:00:01.303) 0:58:35.789 ******* 2026-03-25 06:06:42.332462 | orchestrator | skipping: [testbed-node-3] 2026-03-25 06:06:42.332473 | orchestrator | skipping: [testbed-node-5] 2026-03-25 06:06:42.332484 | orchestrator | 2026-03-25 06:06:42.332494 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-03-25 06:06:42.332505 | orchestrator | Wednesday 25 March 2026 06:06:20 +0000 (0:00:01.274) 0:58:37.063 ******* 2026-03-25 06:06:42.332515 | orchestrator | skipping: [testbed-node-3] 2026-03-25 06:06:42.332526 | orchestrator | skipping: [testbed-node-5] 2026-03-25 06:06:42.332537 | orchestrator | 2026-03-25 06:06:42.332547 | orchestrator | TASK [ceph-config : Set_fact _devices] 
***************************************** 2026-03-25 06:06:42.332558 | orchestrator | Wednesday 25 March 2026 06:06:21 +0000 (0:00:01.296) 0:58:38.360 ******* 2026-03-25 06:06:42.332568 | orchestrator | skipping: [testbed-node-3] 2026-03-25 06:06:42.332579 | orchestrator | skipping: [testbed-node-5] 2026-03-25 06:06:42.332590 | orchestrator | 2026-03-25 06:06:42.332600 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-03-25 06:06:42.332611 | orchestrator | Wednesday 25 March 2026 06:06:22 +0000 (0:00:01.262) 0:58:39.622 ******* 2026-03-25 06:06:42.332622 | orchestrator | skipping: [testbed-node-3] 2026-03-25 06:06:42.332632 | orchestrator | skipping: [testbed-node-5] 2026-03-25 06:06:42.332643 | orchestrator | 2026-03-25 06:06:42.332654 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-03-25 06:06:42.332664 | orchestrator | Wednesday 25 March 2026 06:06:23 +0000 (0:00:01.298) 0:58:40.921 ******* 2026-03-25 06:06:42.332675 | orchestrator | skipping: [testbed-node-3] 2026-03-25 06:06:42.332685 | orchestrator | skipping: [testbed-node-5] 2026-03-25 06:06:42.332696 | orchestrator | 2026-03-25 06:06:42.332707 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-03-25 06:06:42.332717 | orchestrator | Wednesday 25 March 2026 06:06:25 +0000 (0:00:01.661) 0:58:42.583 ******* 2026-03-25 06:06:42.332728 | orchestrator | skipping: [testbed-node-3] 2026-03-25 06:06:42.332739 | orchestrator | skipping: [testbed-node-5] 2026-03-25 06:06:42.332749 | orchestrator | 2026-03-25 06:06:42.332760 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-03-25 06:06:42.332771 | orchestrator | Wednesday 25 March 2026 06:06:26 +0000 (0:00:01.266) 0:58:43.849 ******* 2026-03-25 06:06:42.332781 | 
orchestrator | skipping: [testbed-node-3] 2026-03-25 06:06:42.332792 | orchestrator | skipping: [testbed-node-5] 2026-03-25 06:06:42.332803 | orchestrator | 2026-03-25 06:06:42.332813 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-03-25 06:06:42.332824 | orchestrator | Wednesday 25 March 2026 06:06:28 +0000 (0:00:01.246) 0:58:45.095 ******* 2026-03-25 06:06:42.332858 | orchestrator | skipping: [testbed-node-3] 2026-03-25 06:06:42.332869 | orchestrator | skipping: [testbed-node-5] 2026-03-25 06:06:42.332880 | orchestrator | 2026-03-25 06:06:42.332890 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-03-25 06:06:42.332901 | orchestrator | Wednesday 25 March 2026 06:06:29 +0000 (0:00:01.410) 0:58:46.505 ******* 2026-03-25 06:06:42.332912 | orchestrator | skipping: [testbed-node-3] 2026-03-25 06:06:42.332922 | orchestrator | skipping: [testbed-node-5] 2026-03-25 06:06:42.332933 | orchestrator | 2026-03-25 06:06:42.332943 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-03-25 06:06:42.332954 | orchestrator | Wednesday 25 March 2026 06:06:30 +0000 (0:00:01.300) 0:58:47.806 ******* 2026-03-25 06:06:42.332965 | orchestrator | changed: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2026-03-25 06:06:42.332975 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] 2026-03-25 06:06:42.332986 | orchestrator | 2026-03-25 06:06:42.332996 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-03-25 06:06:42.333007 | orchestrator | Wednesday 25 March 2026 06:06:35 +0000 (0:00:04.882) 0:58:52.688 ******* 2026-03-25 06:06:42.333018 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-03-25 06:06:42.333036 | orchestrator | ok: [testbed-node-5] => 
(item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-03-25 06:06:42.333047 | orchestrator | 2026-03-25 06:06:42.333057 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-03-25 06:06:42.333068 | orchestrator | Wednesday 25 March 2026 06:06:37 +0000 (0:00:01.567) 0:58:54.256 ******* 2026-03-25 06:06:42.333081 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}]) 2026-03-25 06:06:42.333102 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}]) 2026-03-25 06:07:31.506148 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}]) 2026-03-25 06:07:31.506292 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}]) 2026-03-25 06:07:31.506313 | orchestrator | 2026-03-25 06:07:31.506326 | orchestrator | TASK [ceph-config : Set 
rgw configs to file] *********************************** 2026-03-25 06:07:31.506342 | orchestrator | Wednesday 25 March 2026 06:06:42 +0000 (0:00:05.083) 0:58:59.339 ******* 2026-03-25 06:07:31.506361 | orchestrator | skipping: [testbed-node-3] 2026-03-25 06:07:31.506380 | orchestrator | skipping: [testbed-node-5] 2026-03-25 06:07:31.506397 | orchestrator | 2026-03-25 06:07:31.506415 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-03-25 06:07:31.506434 | orchestrator | Wednesday 25 March 2026 06:06:43 +0000 (0:00:01.207) 0:59:00.546 ******* 2026-03-25 06:07:31.506450 | orchestrator | skipping: [testbed-node-3] 2026-03-25 06:07:31.506469 | orchestrator | skipping: [testbed-node-5] 2026-03-25 06:07:31.506488 | orchestrator | 2026-03-25 06:07:31.506508 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-25 06:07:31.506529 | orchestrator | Wednesday 25 March 2026 06:06:44 +0000 (0:00:01.385) 0:59:01.932 ******* 2026-03-25 06:07:31.506543 | orchestrator | skipping: [testbed-node-3] 2026-03-25 06:07:31.506554 | orchestrator | skipping: [testbed-node-5] 2026-03-25 06:07:31.506565 | orchestrator | 2026-03-25 06:07:31.506577 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-03-25 06:07:31.506588 | orchestrator | Wednesday 25 March 2026 06:06:46 +0000 (0:00:01.321) 0:59:03.253 ******* 2026-03-25 06:07:31.506599 | orchestrator | skipping: [testbed-node-3] 2026-03-25 06:07:31.506610 | orchestrator | skipping: [testbed-node-5] 2026-03-25 06:07:31.506622 | orchestrator | 2026-03-25 06:07:31.506641 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-03-25 06:07:31.506659 | orchestrator | Wednesday 25 March 2026 06:06:47 +0000 (0:00:01.337) 0:59:04.591 ******* 2026-03-25 06:07:31.506678 | orchestrator | skipping: 
[testbed-node-3] 2026-03-25 06:07:31.506696 | orchestrator | skipping: [testbed-node-5] 2026-03-25 06:07:31.506707 | orchestrator | 2026-03-25 06:07:31.506718 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-25 06:07:31.506753 | orchestrator | Wednesday 25 March 2026 06:06:48 +0000 (0:00:01.304) 0:59:05.895 ******* 2026-03-25 06:07:31.506765 | orchestrator | ok: [testbed-node-3] 2026-03-25 06:07:31.506776 | orchestrator | ok: [testbed-node-5] 2026-03-25 06:07:31.506787 | orchestrator | 2026-03-25 06:07:31.506797 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-25 06:07:31.506808 | orchestrator | Wednesday 25 March 2026 06:06:50 +0000 (0:00:01.871) 0:59:07.767 ******* 2026-03-25 06:07:31.506819 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-25 06:07:31.506867 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-25 06:07:31.506879 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-25 06:07:31.506891 | orchestrator | skipping: [testbed-node-3] 2026-03-25 06:07:31.506901 | orchestrator | 2026-03-25 06:07:31.506912 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-25 06:07:31.506923 | orchestrator | Wednesday 25 March 2026 06:06:52 +0000 (0:00:01.434) 0:59:09.201 ******* 2026-03-25 06:07:31.506934 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-25 06:07:31.506945 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-25 06:07:31.506955 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-25 06:07:31.506966 | orchestrator | skipping: [testbed-node-3] 2026-03-25 06:07:31.506977 | orchestrator | 2026-03-25 06:07:31.506988 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-25 06:07:31.506999 | 
orchestrator | Wednesday 25 March 2026 06:06:53 +0000 (0:00:01.447) 0:59:10.649 ******* 2026-03-25 06:07:31.507010 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-25 06:07:31.507021 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-25 06:07:31.507032 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-25 06:07:31.507043 | orchestrator | skipping: [testbed-node-3] 2026-03-25 06:07:31.507054 | orchestrator | 2026-03-25 06:07:31.507065 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-25 06:07:31.507075 | orchestrator | Wednesday 25 March 2026 06:06:55 +0000 (0:00:01.428) 0:59:12.078 ******* 2026-03-25 06:07:31.507086 | orchestrator | ok: [testbed-node-3] 2026-03-25 06:07:31.507097 | orchestrator | ok: [testbed-node-5] 2026-03-25 06:07:31.507108 | orchestrator | 2026-03-25 06:07:31.507118 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-25 06:07:31.507129 | orchestrator | Wednesday 25 March 2026 06:06:56 +0000 (0:00:01.363) 0:59:13.442 ******* 2026-03-25 06:07:31.507140 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-03-25 06:07:31.507150 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-03-25 06:07:31.507161 | orchestrator | 2026-03-25 06:07:31.507172 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-03-25 06:07:31.507183 | orchestrator | Wednesday 25 March 2026 06:06:57 +0000 (0:00:01.557) 0:59:15.000 ******* 2026-03-25 06:07:31.507193 | orchestrator | ok: [testbed-node-3] 2026-03-25 06:07:31.507204 | orchestrator | ok: [testbed-node-5] 2026-03-25 06:07:31.507215 | orchestrator | 2026-03-25 06:07:31.507247 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] *************************** 2026-03-25 06:07:31.507267 | orchestrator | Wednesday 25 March 2026 06:07:00 +0000 (0:00:02.147) 0:59:17.147 
******* 2026-03-25 06:07:31.507279 | orchestrator | skipping: [testbed-node-3] 2026-03-25 06:07:31.507290 | orchestrator | skipping: [testbed-node-5] 2026-03-25 06:07:31.507301 | orchestrator | 2026-03-25 06:07:31.507311 | orchestrator | TASK [ceph-mds : Include common.yml] ******************************************* 2026-03-25 06:07:31.507322 | orchestrator | Wednesday 25 March 2026 06:07:01 +0000 (0:00:01.269) 0:59:18.417 ******* 2026-03-25 06:07:31.507333 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-5 2026-03-25 06:07:31.507344 | orchestrator | 2026-03-25 06:07:31.507355 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] ********************* 2026-03-25 06:07:31.507374 | orchestrator | Wednesday 25 March 2026 06:07:02 +0000 (0:00:01.238) 0:59:19.656 ******* 2026-03-25 06:07:31.507385 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/) 2026-03-25 06:07:31.507396 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/) 2026-03-25 06:07:31.507406 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3) 2026-03-25 06:07:31.507417 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5) 2026-03-25 06:07:31.507427 | orchestrator | 2026-03-25 06:07:31.507438 | orchestrator | TASK [ceph-mds : Get keys from monitors] *************************************** 2026-03-25 06:07:31.507449 | orchestrator | Wednesday 25 March 2026 06:07:04 +0000 (0:00:01.908) 0:59:21.564 ******* 2026-03-25 06:07:31.507459 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-25 06:07:31.507470 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-03-25 06:07:31.507481 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-25 06:07:31.507491 | orchestrator | 2026-03-25 06:07:31.507502 | orchestrator | TASK [ceph-mds : Copy ceph key(s) 
if needed] *********************************** 2026-03-25 06:07:31.507513 | orchestrator | Wednesday 25 March 2026 06:07:07 +0000 (0:00:03.132) 0:59:24.697 ******* 2026-03-25 06:07:31.507523 | orchestrator | ok: [testbed-node-3] => (item=None) 2026-03-25 06:07:31.507534 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-03-25 06:07:31.507545 | orchestrator | ok: [testbed-node-3] 2026-03-25 06:07:31.507556 | orchestrator | ok: [testbed-node-5] => (item=None) 2026-03-25 06:07:31.507566 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-03-25 06:07:31.507577 | orchestrator | ok: [testbed-node-5] 2026-03-25 06:07:31.507587 | orchestrator | 2026-03-25 06:07:31.507598 | orchestrator | TASK [ceph-mds : Create mds keyring] ******************************************* 2026-03-25 06:07:31.507609 | orchestrator | Wednesday 25 March 2026 06:07:09 +0000 (0:00:02.116) 0:59:26.813 ******* 2026-03-25 06:07:31.507620 | orchestrator | ok: [testbed-node-3] 2026-03-25 06:07:31.507630 | orchestrator | ok: [testbed-node-5] 2026-03-25 06:07:31.507641 | orchestrator | 2026-03-25 06:07:31.507652 | orchestrator | TASK [ceph-mds : Non_containerized.yml] **************************************** 2026-03-25 06:07:31.507663 | orchestrator | Wednesday 25 March 2026 06:07:11 +0000 (0:00:01.940) 0:59:28.754 ******* 2026-03-25 06:07:31.507673 | orchestrator | skipping: [testbed-node-3] 2026-03-25 06:07:31.507684 | orchestrator | skipping: [testbed-node-5] 2026-03-25 06:07:31.507695 | orchestrator | 2026-03-25 06:07:31.507705 | orchestrator | TASK [ceph-mds : Containerized.yml] ******************************************** 2026-03-25 06:07:31.507716 | orchestrator | Wednesday 25 March 2026 06:07:13 +0000 (0:00:01.346) 0:59:30.100 ******* 2026-03-25 06:07:31.507727 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-5 2026-03-25 06:07:31.507738 | orchestrator | 2026-03-25 06:07:31.507748 | orchestrator | TASK [ceph-mds : 
Include_tasks systemd.yml] ************************************ 2026-03-25 06:07:31.507759 | orchestrator | Wednesday 25 March 2026 06:07:14 +0000 (0:00:01.215) 0:59:31.315 ******* 2026-03-25 06:07:31.507770 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-5 2026-03-25 06:07:31.507781 | orchestrator | 2026-03-25 06:07:31.507791 | orchestrator | TASK [ceph-mds : Generate systemd unit file] *********************************** 2026-03-25 06:07:31.507802 | orchestrator | Wednesday 25 March 2026 06:07:15 +0000 (0:00:01.298) 0:59:32.614 ******* 2026-03-25 06:07:31.507813 | orchestrator | ok: [testbed-node-3] 2026-03-25 06:07:31.507823 | orchestrator | ok: [testbed-node-5] 2026-03-25 06:07:31.507864 | orchestrator | 2026-03-25 06:07:31.507875 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************ 2026-03-25 06:07:31.507886 | orchestrator | Wednesday 25 March 2026 06:07:17 +0000 (0:00:02.212) 0:59:34.827 ******* 2026-03-25 06:07:31.507897 | orchestrator | ok: [testbed-node-3] 2026-03-25 06:07:31.507908 | orchestrator | ok: [testbed-node-5] 2026-03-25 06:07:31.507918 | orchestrator | 2026-03-25 06:07:31.507929 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] *************************************** 2026-03-25 06:07:31.507947 | orchestrator | Wednesday 25 March 2026 06:07:20 +0000 (0:00:02.356) 0:59:37.184 ******* 2026-03-25 06:07:31.507957 | orchestrator | ok: [testbed-node-3] 2026-03-25 06:07:31.507968 | orchestrator | ok: [testbed-node-5] 2026-03-25 06:07:31.507979 | orchestrator | 2026-03-25 06:07:31.507989 | orchestrator | TASK [ceph-mds : Systemd start mds container] ********************************** 2026-03-25 06:07:31.508000 | orchestrator | Wednesday 25 March 2026 06:07:22 +0000 (0:00:02.334) 0:59:39.518 ******* 2026-03-25 06:07:31.508011 | orchestrator | changed: [testbed-node-3] 2026-03-25 06:07:31.508022 | orchestrator | changed: [testbed-node-5] 2026-03-25 
06:07:31.508032 | orchestrator | 2026-03-25 06:07:31.508043 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] ********************************* 2026-03-25 06:07:31.508054 | orchestrator | Wednesday 25 March 2026 06:07:26 +0000 (0:00:03.549) 0:59:43.068 ******* 2026-03-25 06:07:31.508065 | orchestrator | ok: [testbed-node-3] 2026-03-25 06:07:31.508075 | orchestrator | ok: [testbed-node-5] 2026-03-25 06:07:31.508086 | orchestrator | 2026-03-25 06:07:31.508097 | orchestrator | TASK [Set max_mds] ************************************************************* 2026-03-25 06:07:31.508107 | orchestrator | Wednesday 25 March 2026 06:07:27 +0000 (0:00:01.781) 0:59:44.849 ******* 2026-03-25 06:07:31.508118 | orchestrator | skipping: [testbed-node-3] 2026-03-25 06:07:31.508135 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-03-25 06:07:55.464203 | orchestrator | 2026-03-25 06:07:55.464337 | orchestrator | PLAY [Upgrade ceph rgws cluster] *********************************************** 2026-03-25 06:07:55.464356 | orchestrator | 2026-03-25 06:07:55.464368 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-03-25 06:07:55.464379 | orchestrator | Wednesday 25 March 2026 06:07:31 +0000 (0:00:03.654) 0:59:48.504 ******* 2026-03-25 06:07:55.464390 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3 2026-03-25 06:07:55.464401 | orchestrator | 2026-03-25 06:07:55.464412 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-03-25 06:07:55.464423 | orchestrator | Wednesday 25 March 2026 06:07:32 +0000 (0:00:01.377) 0:59:49.882 ******* 2026-03-25 06:07:55.464434 | orchestrator | ok: [testbed-node-3] 2026-03-25 06:07:55.464446 | orchestrator | 2026-03-25 06:07:55.464457 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-03-25 06:07:55.464468 | orchestrator | 
Wednesday 25 March 2026 06:07:34 +0000 (0:00:01.464) 0:59:51.347 ******* 2026-03-25 06:07:55.464478 | orchestrator | ok: [testbed-node-3] 2026-03-25 06:07:55.464489 | orchestrator | 2026-03-25 06:07:55.464500 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-03-25 06:07:55.464511 | orchestrator | Wednesday 25 March 2026 06:07:35 +0000 (0:00:01.142) 0:59:52.489 ******* 2026-03-25 06:07:55.464521 | orchestrator | ok: [testbed-node-3] 2026-03-25 06:07:55.464532 | orchestrator | 2026-03-25 06:07:55.464543 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-03-25 06:07:55.464553 | orchestrator | Wednesday 25 March 2026 06:07:36 +0000 (0:00:01.443) 0:59:53.933 ******* 2026-03-25 06:07:55.464564 | orchestrator | ok: [testbed-node-3] 2026-03-25 06:07:55.464575 | orchestrator | 2026-03-25 06:07:55.464586 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-03-25 06:07:55.464596 | orchestrator | Wednesday 25 March 2026 06:07:38 +0000 (0:00:01.186) 0:59:55.120 ******* 2026-03-25 06:07:55.464607 | orchestrator | ok: [testbed-node-3] 2026-03-25 06:07:55.464618 | orchestrator | 2026-03-25 06:07:55.464629 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-03-25 06:07:55.464640 | orchestrator | Wednesday 25 March 2026 06:07:39 +0000 (0:00:01.146) 0:59:56.266 ******* 2026-03-25 06:07:55.464650 | orchestrator | ok: [testbed-node-3] 2026-03-25 06:07:55.464661 | orchestrator | 2026-03-25 06:07:55.464672 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-03-25 06:07:55.464684 | orchestrator | Wednesday 25 March 2026 06:07:40 +0000 (0:00:01.230) 0:59:57.496 ******* 2026-03-25 06:07:55.464716 | orchestrator | skipping: [testbed-node-3] 2026-03-25 06:07:55.464728 | orchestrator | 2026-03-25 06:07:55.464739 | orchestrator | TASK 
[ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-03-25 06:07:55.464750 | orchestrator | Wednesday 25 March 2026 06:07:41 +0000 (0:00:01.177) 0:59:58.674 ******* 2026-03-25 06:07:55.464761 | orchestrator | ok: [testbed-node-3] 2026-03-25 06:07:55.464771 | orchestrator | 2026-03-25 06:07:55.464783 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-03-25 06:07:55.464794 | orchestrator | Wednesday 25 March 2026 06:07:42 +0000 (0:00:01.137) 0:59:59.812 ******* 2026-03-25 06:07:55.464804 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-25 06:07:55.464815 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-25 06:07:55.464826 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-25 06:07:55.464858 | orchestrator | 2026-03-25 06:07:55.464870 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-03-25 06:07:55.464880 | orchestrator | Wednesday 25 March 2026 06:07:44 +0000 (0:00:02.122) 1:00:01.935 ******* 2026-03-25 06:07:55.464891 | orchestrator | ok: [testbed-node-3] 2026-03-25 06:07:55.464902 | orchestrator | 2026-03-25 06:07:55.464912 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-03-25 06:07:55.464923 | orchestrator | Wednesday 25 March 2026 06:07:46 +0000 (0:00:01.274) 1:00:03.210 ******* 2026-03-25 06:07:55.464934 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-25 06:07:55.464945 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-25 06:07:55.464955 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-25 06:07:55.464966 | orchestrator | 2026-03-25 06:07:55.464977 | 
orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-03-25 06:07:55.464987 | orchestrator | Wednesday 25 March 2026 06:07:49 +0000 (0:00:03.258) 1:00:06.468 ******* 2026-03-25 06:07:55.464998 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-03-25 06:07:55.465009 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-03-25 06:07:55.465020 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-03-25 06:07:55.465030 | orchestrator | skipping: [testbed-node-3] 2026-03-25 06:07:55.465041 | orchestrator | 2026-03-25 06:07:55.465052 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-03-25 06:07:55.465062 | orchestrator | Wednesday 25 March 2026 06:07:51 +0000 (0:00:01.920) 1:00:08.389 ******* 2026-03-25 06:07:55.465075 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-03-25 06:07:55.465088 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-03-25 06:07:55.465123 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-03-25 06:07:55.465136 | orchestrator | skipping: [testbed-node-3] 2026-03-25 06:07:55.465146 | orchestrator | 2026-03-25 06:07:55.465157 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-03-25 06:07:55.465168 | orchestrator | Wednesday 25 March 2026 
06:07:53 +0000 (0:00:01.660) 1:00:10.049 ******* 2026-03-25 06:07:55.465182 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-25 06:07:55.465205 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-25 06:07:55.465217 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-25 06:07:55.465228 | orchestrator | skipping: [testbed-node-3] 2026-03-25 06:07:55.465239 | orchestrator | 2026-03-25 06:07:55.465250 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-03-25 06:07:55.465260 | orchestrator | Wednesday 25 March 2026 06:07:54 +0000 (0:00:01.197) 1:00:11.247 ******* 2026-03-25 06:07:55.465273 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'f2f4f0f2e000', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': 
'2026-03-25 06:07:47.130184', 'end': '2026-03-25 06:07:47.186185', 'delta': '0:00:00.056001', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['f2f4f0f2e000'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-03-25 06:07:55.465287 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '04618a84c691', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-25 06:07:47.696126', 'end': '2026-03-25 06:07:47.738027', 'delta': '0:00:00.041901', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['04618a84c691'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-03-25 06:07:55.465298 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'da72f46e99c2', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-25 06:07:48.248524', 'end': '2026-03-25 06:07:48.291087', 'delta': '0:00:00.042563', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 
'stdout_lines': ['da72f46e99c2'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-03-25 06:07:55.465310 | orchestrator | 2026-03-25 06:07:55.465333 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-03-25 06:08:13.416586 | orchestrator | Wednesday 25 March 2026 06:07:55 +0000 (0:00:01.220) 1:00:12.468 ******* 2026-03-25 06:08:13.416705 | orchestrator | ok: [testbed-node-3] 2026-03-25 06:08:13.416746 | orchestrator | 2026-03-25 06:08:13.416761 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-03-25 06:08:13.416772 | orchestrator | Wednesday 25 March 2026 06:07:56 +0000 (0:00:01.288) 1:00:13.757 ******* 2026-03-25 06:08:13.416783 | orchestrator | skipping: [testbed-node-3] 2026-03-25 06:08:13.416796 | orchestrator | 2026-03-25 06:08:13.416807 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-03-25 06:08:13.416819 | orchestrator | Wednesday 25 March 2026 06:07:58 +0000 (0:00:01.297) 1:00:15.055 ******* 2026-03-25 06:08:13.416890 | orchestrator | ok: [testbed-node-3] 2026-03-25 06:08:13.416903 | orchestrator | 2026-03-25 06:08:13.416915 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-03-25 06:08:13.416926 | orchestrator | Wednesday 25 March 2026 06:07:59 +0000 (0:00:01.154) 1:00:16.209 ******* 2026-03-25 06:08:13.416937 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-25 06:08:13.416948 | orchestrator | 2026-03-25 06:08:13.416959 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-25 06:08:13.416970 | orchestrator | Wednesday 25 March 2026 06:08:01 +0000 (0:00:02.022) 1:00:18.233 ******* 2026-03-25 06:08:13.416981 | orchestrator | ok: [testbed-node-3] 2026-03-25 06:08:13.416992 | orchestrator | 
2026-03-25 06:08:13.417003 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-03-25 06:08:13.417014 | orchestrator | Wednesday 25 March 2026 06:08:02 +0000 (0:00:01.201) 1:00:19.434 ******* 2026-03-25 06:08:13.417025 | orchestrator | skipping: [testbed-node-3] 2026-03-25 06:08:13.417036 | orchestrator | 2026-03-25 06:08:13.417047 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-03-25 06:08:13.417058 | orchestrator | Wednesday 25 March 2026 06:08:03 +0000 (0:00:01.097) 1:00:20.531 ******* 2026-03-25 06:08:13.417069 | orchestrator | skipping: [testbed-node-3] 2026-03-25 06:08:13.417080 | orchestrator | 2026-03-25 06:08:13.417090 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-25 06:08:13.417101 | orchestrator | Wednesday 25 March 2026 06:08:04 +0000 (0:00:01.259) 1:00:21.791 ******* 2026-03-25 06:08:13.417112 | orchestrator | skipping: [testbed-node-3] 2026-03-25 06:08:13.417123 | orchestrator | 2026-03-25 06:08:13.417137 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-03-25 06:08:13.417149 | orchestrator | Wednesday 25 March 2026 06:08:06 +0000 (0:00:01.232) 1:00:23.023 ******* 2026-03-25 06:08:13.417162 | orchestrator | skipping: [testbed-node-3] 2026-03-25 06:08:13.417175 | orchestrator | 2026-03-25 06:08:13.417188 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-03-25 06:08:13.417200 | orchestrator | Wednesday 25 March 2026 06:08:07 +0000 (0:00:01.199) 1:00:24.222 ******* 2026-03-25 06:08:13.417213 | orchestrator | ok: [testbed-node-3] 2026-03-25 06:08:13.417225 | orchestrator | 2026-03-25 06:08:13.417238 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-03-25 06:08:13.417251 | orchestrator | Wednesday 25 March 2026 06:08:08 +0000 
(0:00:01.258) 1:00:25.481 ******* 2026-03-25 06:08:13.417263 | orchestrator | skipping: [testbed-node-3] 2026-03-25 06:08:13.417276 | orchestrator | 2026-03-25 06:08:13.417290 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-03-25 06:08:13.417302 | orchestrator | Wednesday 25 March 2026 06:08:09 +0000 (0:00:01.160) 1:00:26.641 ******* 2026-03-25 06:08:13.417315 | orchestrator | ok: [testbed-node-3] 2026-03-25 06:08:13.417328 | orchestrator | 2026-03-25 06:08:13.417340 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-03-25 06:08:13.417352 | orchestrator | Wednesday 25 March 2026 06:08:10 +0000 (0:00:01.203) 1:00:27.844 ******* 2026-03-25 06:08:13.417364 | orchestrator | skipping: [testbed-node-3] 2026-03-25 06:08:13.417377 | orchestrator | 2026-03-25 06:08:13.417390 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-03-25 06:08:13.417403 | orchestrator | Wednesday 25 March 2026 06:08:11 +0000 (0:00:01.119) 1:00:28.964 ******* 2026-03-25 06:08:13.417424 | orchestrator | ok: [testbed-node-3] 2026-03-25 06:08:13.417437 | orchestrator | 2026-03-25 06:08:13.417450 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-03-25 06:08:13.417464 | orchestrator | Wednesday 25 March 2026 06:08:13 +0000 (0:00:01.188) 1:00:30.153 ******* 2026-03-25 06:08:13.417479 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-25 06:08:13.417496 | orchestrator | skipping: [testbed-node-3] => (item={'key': 
'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--2eb637af--fcba--56ed--b416--856a8f376a6e-osd--block--2eb637af--fcba--56ed--b416--856a8f376a6e', 'dm-uuid-LVM-I4brnFGe2wqMxfNLTgnFWAlpGdDDIQ6ufudluz5gbOp2W0Ru1BAN3Lof8sluy2g8'], 'uuids': ['a582f89c-a8ac-4a87-8a0b-f7c0ca2abef4'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'eaa5e6a9', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['fudluz-5gbO-p2W0-Ru1B-AN3L-of8s-luy2g8']}})  2026-03-25 06:08:13.417545 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_99e65ea9-8a8c-4114-a95e-6d6b779e8981', 'scsi-SQEMU_QEMU_HARDDISK_99e65ea9-8a8c-4114-a95e-6d6b779e8981'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '99e65ea9', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-03-25 06:08:13.417560 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-I510NI-gVOy-fVrn-Rpok-wKnF-L9wv-pxblpK', 'scsi-0QEMU_QEMU_HARDDISK_e0cf0e31-edea-4833-ac86-8b3021cd24a1', 'scsi-SQEMU_QEMU_HARDDISK_e0cf0e31-edea-4833-ac86-8b3021cd24a1'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'e0cf0e31', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--a7f517e2--016b--5c10--ac21--20c48339115f-osd--block--a7f517e2--016b--5c10--ac21--20c48339115f']}})  2026-03-25 06:08:13.417573 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-25 06:08:13.417585 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-25 06:08:13.417597 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-25-01-42-59-00'], 'labels': 
['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-03-25 06:08:13.417619 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-25 06:08:13.417631 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-63eFyg-4nkE-r3IX-y7pO-0UwA-AWeQ-8GeZyo', 'dm-uuid-CRYPT-LUKS2-10d41a0c964d43008e142cbf5f4d58c4-63eFyg-4nkE-r3IX-y7pO-0UwA-AWeQ-8GeZyo'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-03-25 06:08:13.417656 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-25 06:08:14.788331 | orchestrator | skipping: [testbed-node-3] => 
(item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--a7f517e2--016b--5c10--ac21--20c48339115f-osd--block--a7f517e2--016b--5c10--ac21--20c48339115f', 'dm-uuid-LVM-ppL9nqq4Eft0DXjzsCdcW3axPqGhidIo63eFyg4nkEr3IXy7pO0UwAAWeQ8GeZyo'], 'uuids': ['10d41a0c-964d-4300-8e14-2cbf5f4d58c4'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'e0cf0e31', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['63eFyg-4nkE-r3IX-y7pO-0UwA-AWeQ-8GeZyo']}})  2026-03-25 06:08:14.788456 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-ot6f5w-cwBB-rMe8-ml4g-P1Wb-D3d5-I1RZ9d', 'scsi-0QEMU_QEMU_HARDDISK_eaa5e6a9-2c24-4b33-854e-103871b2e9c6', 'scsi-SQEMU_QEMU_HARDDISK_eaa5e6a9-2c24-4b33-854e-103871b2e9c6'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'eaa5e6a9', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--2eb637af--fcba--56ed--b416--856a8f376a6e-osd--block--2eb637af--fcba--56ed--b416--856a8f376a6e']}})  2026-03-25 06:08:14.788483 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-25 06:08:14.788531 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5418d243-c22a-425d-8a7d-7c43bd549130', 'scsi-SQEMU_QEMU_HARDDISK_5418d243-c22a-425d-8a7d-7c43bd549130'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '5418d243', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5418d243-c22a-425d-8a7d-7c43bd549130-part16', 'scsi-SQEMU_QEMU_HARDDISK_5418d243-c22a-425d-8a7d-7c43bd549130-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5418d243-c22a-425d-8a7d-7c43bd549130-part14', 'scsi-SQEMU_QEMU_HARDDISK_5418d243-c22a-425d-8a7d-7c43bd549130-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5418d243-c22a-425d-8a7d-7c43bd549130-part15', 'scsi-SQEMU_QEMU_HARDDISK_5418d243-c22a-425d-8a7d-7c43bd549130-part15'], 'uuids': ['5C78-612A'], 
'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5418d243-c22a-425d-8a7d-7c43bd549130-part1', 'scsi-SQEMU_QEMU_HARDDISK_5418d243-c22a-425d-8a7d-7c43bd549130-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-03-25 06:08:14.788605 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-25 06:08:14.788628 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-25 06:08:14.788647 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-fudluz-5gbO-p2W0-Ru1B-AN3L-of8s-luy2g8', 'dm-uuid-CRYPT-LUKS2-a582f89ca8ac4a878a0bf7c0ca2abef4-fudluz-5gbO-p2W0-Ru1B-AN3L-of8s-luy2g8'], 'uuids': [], 'labels': [], 'masters': 
[]}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-03-25 06:08:14.788666 | orchestrator | skipping: [testbed-node-3] 2026-03-25 06:08:14.788685 | orchestrator | 2026-03-25 06:08:14.788702 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-03-25 06:08:14.788719 | orchestrator | Wednesday 25 March 2026 06:08:14 +0000 (0:00:01.411) 1:00:31.564 ******* 2026-03-25 06:08:14.788738 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 06:08:14.788769 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--2eb637af--fcba--56ed--b416--856a8f376a6e-osd--block--2eb637af--fcba--56ed--b416--856a8f376a6e', 'dm-uuid-LVM-I4brnFGe2wqMxfNLTgnFWAlpGdDDIQ6ufudluz5gbOp2W0Ru1BAN3Lof8sluy2g8'], 'uuids': ['a582f89c-a8ac-4a87-8a0b-f7c0ca2abef4'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'eaa5e6a9', 'removable': '0', 'support_discard': 
'4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['fudluz-5gbO-p2W0-Ru1B-AN3L-of8s-luy2g8']}}, 'ansible_loop_var': 'item'})  2026-03-25 06:08:14.788798 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_99e65ea9-8a8c-4114-a95e-6d6b779e8981', 'scsi-SQEMU_QEMU_HARDDISK_99e65ea9-8a8c-4114-a95e-6d6b779e8981'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '99e65ea9', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 06:08:14.788862 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-I510NI-gVOy-fVrn-Rpok-wKnF-L9wv-pxblpK', 'scsi-0QEMU_QEMU_HARDDISK_e0cf0e31-edea-4833-ac86-8b3021cd24a1', 'scsi-SQEMU_QEMU_HARDDISK_e0cf0e31-edea-4833-ac86-8b3021cd24a1'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'e0cf0e31', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--a7f517e2--016b--5c10--ac21--20c48339115f-osd--block--a7f517e2--016b--5c10--ac21--20c48339115f']}}, 'ansible_loop_var': 'item'})  2026-03-25 06:08:16.032232 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 06:08:16.032360 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 06:08:16.032420 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-25-01-42-59-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 06:08:16.032443 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 06:08:16.032470 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-63eFyg-4nkE-r3IX-y7pO-0UwA-AWeQ-8GeZyo', 'dm-uuid-CRYPT-LUKS2-10d41a0c964d43008e142cbf5f4d58c4-63eFyg-4nkE-r3IX-y7pO-0UwA-AWeQ-8GeZyo'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 06:08:16.032481 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 06:08:16.032514 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--a7f517e2--016b--5c10--ac21--20c48339115f-osd--block--a7f517e2--016b--5c10--ac21--20c48339115f', 'dm-uuid-LVM-ppL9nqq4Eft0DXjzsCdcW3axPqGhidIo63eFyg4nkEr3IXy7pO0UwAAWeQ8GeZyo'], 'uuids': ['10d41a0c-964d-4300-8e14-2cbf5f4d58c4'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'e0cf0e31', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['63eFyg-4nkE-r3IX-y7pO-0UwA-AWeQ-8GeZyo']}}, 'ansible_loop_var': 'item'})  2026-03-25 06:08:16.032536 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-ot6f5w-cwBB-rMe8-ml4g-P1Wb-D3d5-I1RZ9d', 'scsi-0QEMU_QEMU_HARDDISK_eaa5e6a9-2c24-4b33-854e-103871b2e9c6', 'scsi-SQEMU_QEMU_HARDDISK_eaa5e6a9-2c24-4b33-854e-103871b2e9c6'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'eaa5e6a9', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--2eb637af--fcba--56ed--b416--856a8f376a6e-osd--block--2eb637af--fcba--56ed--b416--856a8f376a6e']}}, 'ansible_loop_var': 'item'})  2026-03-25 06:08:16.032550 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 06:08:16.032579 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5418d243-c22a-425d-8a7d-7c43bd549130', 'scsi-SQEMU_QEMU_HARDDISK_5418d243-c22a-425d-8a7d-7c43bd549130'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '5418d243', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5418d243-c22a-425d-8a7d-7c43bd549130-part16', 'scsi-SQEMU_QEMU_HARDDISK_5418d243-c22a-425d-8a7d-7c43bd549130-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_5418d243-c22a-425d-8a7d-7c43bd549130-part14', 'scsi-SQEMU_QEMU_HARDDISK_5418d243-c22a-425d-8a7d-7c43bd549130-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5418d243-c22a-425d-8a7d-7c43bd549130-part15', 'scsi-SQEMU_QEMU_HARDDISK_5418d243-c22a-425d-8a7d-7c43bd549130-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5418d243-c22a-425d-8a7d-7c43bd549130-part1', 'scsi-SQEMU_QEMU_HARDDISK_5418d243-c22a-425d-8a7d-7c43bd549130-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 06:08:46.207707 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 06:08:46.207819 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 06:08:46.207900 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-fudluz-5gbO-p2W0-Ru1B-AN3L-of8s-luy2g8', 'dm-uuid-CRYPT-LUKS2-a582f89ca8ac4a878a0bf7c0ca2abef4-fudluz-5gbO-p2W0-Ru1B-AN3L-of8s-luy2g8'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 06:08:46.207915 | orchestrator | skipping: [testbed-node-3] 2026-03-25 06:08:46.207928 | orchestrator | 2026-03-25 06:08:46.207939 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-03-25 06:08:46.207951 | orchestrator | Wednesday 25 March 2026 06:08:16 +0000 (0:00:01.477) 1:00:33.041 ******* 2026-03-25 06:08:46.207964 | orchestrator | ok: [testbed-node-3] 2026-03-25 06:08:46.207982 | orchestrator | 2026-03-25 06:08:46.207999 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-03-25 06:08:46.208015 | orchestrator | Wednesday 25 March 2026 06:08:17 +0000 (0:00:01.513) 1:00:34.555 ******* 2026-03-25 06:08:46.208030 | orchestrator | ok: [testbed-node-3] 2026-03-25 06:08:46.208048 | orchestrator | 2026-03-25 06:08:46.208067 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-25 06:08:46.208101 | orchestrator | Wednesday 25 March 2026 06:08:18 +0000 (0:00:01.127) 1:00:35.682 ******* 2026-03-25 06:08:46.208112 | orchestrator | ok: [testbed-node-3] 2026-03-25 06:08:46.208122 | orchestrator | 2026-03-25 06:08:46.208132 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-25 06:08:46.208141 | orchestrator | Wednesday 25 March 2026 06:08:21 +0000 (0:00:02.495) 1:00:38.178 ******* 2026-03-25 06:08:46.208151 | orchestrator | skipping: [testbed-node-3] 2026-03-25 06:08:46.208161 | orchestrator | 2026-03-25 06:08:46.208170 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-25 06:08:46.208180 | orchestrator | Wednesday 25 March 2026 06:08:22 +0000 (0:00:01.158) 1:00:39.336 ******* 2026-03-25 06:08:46.208190 | orchestrator | skipping: [testbed-node-3] 2026-03-25 
06:08:46.208199 | orchestrator | 2026-03-25 06:08:46.208209 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-25 06:08:46.208218 | orchestrator | Wednesday 25 March 2026 06:08:23 +0000 (0:00:01.274) 1:00:40.611 ******* 2026-03-25 06:08:46.208228 | orchestrator | skipping: [testbed-node-3] 2026-03-25 06:08:46.208237 | orchestrator | 2026-03-25 06:08:46.208249 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-03-25 06:08:46.208281 | orchestrator | Wednesday 25 March 2026 06:08:24 +0000 (0:00:01.226) 1:00:41.838 ******* 2026-03-25 06:08:46.208293 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2026-03-25 06:08:46.208306 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-03-25 06:08:46.208317 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2026-03-25 06:08:46.208329 | orchestrator | 2026-03-25 06:08:46.208340 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-03-25 06:08:46.208351 | orchestrator | Wednesday 25 March 2026 06:08:26 +0000 (0:00:02.151) 1:00:43.990 ******* 2026-03-25 06:08:46.208362 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-03-25 06:08:46.208374 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-03-25 06:08:46.208385 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-03-25 06:08:46.208396 | orchestrator | skipping: [testbed-node-3] 2026-03-25 06:08:46.208408 | orchestrator | 2026-03-25 06:08:46.208420 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-03-25 06:08:46.208433 | orchestrator | Wednesday 25 March 2026 06:08:28 +0000 (0:00:01.176) 1:00:45.167 ******* 2026-03-25 06:08:46.208463 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3 2026-03-25 06:08:46.208477 | 
orchestrator | 2026-03-25 06:08:46.208490 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-25 06:08:46.208504 | orchestrator | Wednesday 25 March 2026 06:08:29 +0000 (0:00:01.127) 1:00:46.295 ******* 2026-03-25 06:08:46.208517 | orchestrator | skipping: [testbed-node-3] 2026-03-25 06:08:46.208529 | orchestrator | 2026-03-25 06:08:46.208541 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-03-25 06:08:46.208554 | orchestrator | Wednesday 25 March 2026 06:08:30 +0000 (0:00:01.157) 1:00:47.453 ******* 2026-03-25 06:08:46.208566 | orchestrator | skipping: [testbed-node-3] 2026-03-25 06:08:46.208578 | orchestrator | 2026-03-25 06:08:46.208590 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-03-25 06:08:46.208602 | orchestrator | Wednesday 25 March 2026 06:08:31 +0000 (0:00:01.140) 1:00:48.594 ******* 2026-03-25 06:08:46.208615 | orchestrator | skipping: [testbed-node-3] 2026-03-25 06:08:46.208626 | orchestrator | 2026-03-25 06:08:46.208639 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-25 06:08:46.208652 | orchestrator | Wednesday 25 March 2026 06:08:32 +0000 (0:00:01.203) 1:00:49.798 ******* 2026-03-25 06:08:46.208665 | orchestrator | ok: [testbed-node-3] 2026-03-25 06:08:46.208677 | orchestrator | 2026-03-25 06:08:46.208690 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-25 06:08:46.208701 | orchestrator | Wednesday 25 March 2026 06:08:34 +0000 (0:00:01.299) 1:00:51.097 ******* 2026-03-25 06:08:46.208712 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-25 06:08:46.208723 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-25 06:08:46.208734 | orchestrator | skipping: [testbed-node-3] 
=> (item=testbed-node-5)  2026-03-25 06:08:46.208744 | orchestrator | skipping: [testbed-node-3] 2026-03-25 06:08:46.208755 | orchestrator | 2026-03-25 06:08:46.208766 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-25 06:08:46.208777 | orchestrator | Wednesday 25 March 2026 06:08:35 +0000 (0:00:01.480) 1:00:52.577 ******* 2026-03-25 06:08:46.208787 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-25 06:08:46.208798 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-25 06:08:46.208808 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-25 06:08:46.208819 | orchestrator | skipping: [testbed-node-3] 2026-03-25 06:08:46.208830 | orchestrator | 2026-03-25 06:08:46.208870 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-25 06:08:46.208881 | orchestrator | Wednesday 25 March 2026 06:08:37 +0000 (0:00:01.448) 1:00:54.025 ******* 2026-03-25 06:08:46.208902 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-25 06:08:46.208913 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-25 06:08:46.208923 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-25 06:08:46.208934 | orchestrator | skipping: [testbed-node-3] 2026-03-25 06:08:46.208945 | orchestrator | 2026-03-25 06:08:46.208955 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-25 06:08:46.208966 | orchestrator | Wednesday 25 March 2026 06:08:38 +0000 (0:00:01.412) 1:00:55.438 ******* 2026-03-25 06:08:46.208977 | orchestrator | ok: [testbed-node-3] 2026-03-25 06:08:46.208988 | orchestrator | 2026-03-25 06:08:46.208998 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-25 06:08:46.209009 | orchestrator | Wednesday 25 March 2026 06:08:39 +0000 
(0:00:01.218) 1:00:56.657 ******* 2026-03-25 06:08:46.209025 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-03-25 06:08:46.209036 | orchestrator | 2026-03-25 06:08:46.209047 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-03-25 06:08:46.209058 | orchestrator | Wednesday 25 March 2026 06:08:41 +0000 (0:00:01.684) 1:00:58.341 ******* 2026-03-25 06:08:46.209068 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-25 06:08:46.209079 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-25 06:08:46.209090 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-25 06:08:46.209100 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-03-25 06:08:46.209111 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-25 06:08:46.209122 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-25 06:08:46.209132 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-25 06:08:46.209143 | orchestrator | 2026-03-25 06:08:46.209154 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-03-25 06:08:46.209164 | orchestrator | Wednesday 25 March 2026 06:08:43 +0000 (0:00:02.222) 1:01:00.564 ******* 2026-03-25 06:08:46.209175 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-25 06:08:46.209185 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-25 06:08:46.209196 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-25 06:08:46.209207 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-03-25 06:08:46.209218 | 
orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-25 06:08:46.209228 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-25 06:08:46.209239 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-25 06:08:46.209250 | orchestrator | 2026-03-25 06:08:46.209268 | orchestrator | TASK [Stop ceph rgw when upgrading from stable-3.2] **************************** 2026-03-25 06:09:39.461984 | orchestrator | Wednesday 25 March 2026 06:08:46 +0000 (0:00:02.639) 1:01:03.204 ******* 2026-03-25 06:09:39.462164 | orchestrator | changed: [testbed-node-3] 2026-03-25 06:09:39.462183 | orchestrator | 2026-03-25 06:09:39.462196 | orchestrator | TASK [Stop ceph rgw (pt. 1)] *************************************************** 2026-03-25 06:09:39.462208 | orchestrator | Wednesday 25 March 2026 06:08:48 +0000 (0:00:02.254) 1:01:05.458 ******* 2026-03-25 06:09:39.462222 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-03-25 06:09:39.462234 | orchestrator | 2026-03-25 06:09:39.462245 | orchestrator | TASK [Stop ceph rgw (pt. 
2)] *************************************************** 2026-03-25 06:09:39.462257 | orchestrator | Wednesday 25 March 2026 06:08:51 +0000 (0:00:02.966) 1:01:08.425 ******* 2026-03-25 06:09:39.462292 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-03-25 06:09:39.462304 | orchestrator | 2026-03-25 06:09:39.462316 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-25 06:09:39.462327 | orchestrator | Wednesday 25 March 2026 06:08:53 +0000 (0:00:02.326) 1:01:10.752 ******* 2026-03-25 06:09:39.462339 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3 2026-03-25 06:09:39.462351 | orchestrator | 2026-03-25 06:09:39.462360 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-25 06:09:39.462370 | orchestrator | Wednesday 25 March 2026 06:08:54 +0000 (0:00:01.147) 1:01:11.900 ******* 2026-03-25 06:09:39.462380 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3 2026-03-25 06:09:39.462390 | orchestrator | 2026-03-25 06:09:39.462400 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-25 06:09:39.462410 | orchestrator | Wednesday 25 March 2026 06:08:56 +0000 (0:00:01.189) 1:01:13.089 ******* 2026-03-25 06:09:39.462421 | orchestrator | skipping: [testbed-node-3] 2026-03-25 06:09:39.462432 | orchestrator | 2026-03-25 06:09:39.462443 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-25 06:09:39.462456 | orchestrator | Wednesday 25 March 2026 06:08:57 +0000 (0:00:01.195) 1:01:14.285 ******* 2026-03-25 06:09:39.462467 | orchestrator | ok: [testbed-node-3] 2026-03-25 06:09:39.462480 | orchestrator | 2026-03-25 06:09:39.462493 | orchestrator | TASK 
[ceph-handler : Check for a mds container] ******************************** 2026-03-25 06:09:39.462506 | orchestrator | Wednesday 25 March 2026 06:08:58 +0000 (0:00:01.538) 1:01:15.824 ******* 2026-03-25 06:09:39.462519 | orchestrator | ok: [testbed-node-3] 2026-03-25 06:09:39.462530 | orchestrator | 2026-03-25 06:09:39.462542 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-25 06:09:39.462554 | orchestrator | Wednesday 25 March 2026 06:09:00 +0000 (0:00:01.511) 1:01:17.336 ******* 2026-03-25 06:09:39.462565 | orchestrator | ok: [testbed-node-3] 2026-03-25 06:09:39.462578 | orchestrator | 2026-03-25 06:09:39.462590 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-25 06:09:39.462601 | orchestrator | Wednesday 25 March 2026 06:09:01 +0000 (0:00:01.563) 1:01:18.899 ******* 2026-03-25 06:09:39.462613 | orchestrator | skipping: [testbed-node-3] 2026-03-25 06:09:39.462626 | orchestrator | 2026-03-25 06:09:39.462639 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-25 06:09:39.462651 | orchestrator | Wednesday 25 March 2026 06:09:03 +0000 (0:00:01.130) 1:01:20.030 ******* 2026-03-25 06:09:39.462662 | orchestrator | skipping: [testbed-node-3] 2026-03-25 06:09:39.462673 | orchestrator | 2026-03-25 06:09:39.462702 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-25 06:09:39.462713 | orchestrator | Wednesday 25 March 2026 06:09:04 +0000 (0:00:01.159) 1:01:21.190 ******* 2026-03-25 06:09:39.462724 | orchestrator | skipping: [testbed-node-3] 2026-03-25 06:09:39.462735 | orchestrator | 2026-03-25 06:09:39.462746 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-25 06:09:39.462756 | orchestrator | Wednesday 25 March 2026 06:09:05 +0000 (0:00:01.142) 1:01:22.332 ******* 2026-03-25 06:09:39.462767 | 
orchestrator | ok: [testbed-node-3] 2026-03-25 06:09:39.462777 | orchestrator | 2026-03-25 06:09:39.462787 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-25 06:09:39.462798 | orchestrator | Wednesday 25 March 2026 06:09:06 +0000 (0:00:01.554) 1:01:23.886 ******* 2026-03-25 06:09:39.462809 | orchestrator | ok: [testbed-node-3] 2026-03-25 06:09:39.462819 | orchestrator | 2026-03-25 06:09:39.462830 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-25 06:09:39.462882 | orchestrator | Wednesday 25 March 2026 06:09:08 +0000 (0:00:01.509) 1:01:25.396 ******* 2026-03-25 06:09:39.462893 | orchestrator | skipping: [testbed-node-3] 2026-03-25 06:09:39.462917 | orchestrator | 2026-03-25 06:09:39.462929 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-25 06:09:39.462939 | orchestrator | Wednesday 25 March 2026 06:09:09 +0000 (0:00:01.180) 1:01:26.577 ******* 2026-03-25 06:09:39.462949 | orchestrator | skipping: [testbed-node-3] 2026-03-25 06:09:39.462960 | orchestrator | 2026-03-25 06:09:39.462971 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-25 06:09:39.462982 | orchestrator | Wednesday 25 March 2026 06:09:10 +0000 (0:00:01.128) 1:01:27.706 ******* 2026-03-25 06:09:39.462993 | orchestrator | ok: [testbed-node-3] 2026-03-25 06:09:39.463003 | orchestrator | 2026-03-25 06:09:39.463013 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-25 06:09:39.463026 | orchestrator | Wednesday 25 March 2026 06:09:11 +0000 (0:00:01.189) 1:01:28.896 ******* 2026-03-25 06:09:39.463035 | orchestrator | ok: [testbed-node-3] 2026-03-25 06:09:39.463046 | orchestrator | 2026-03-25 06:09:39.463057 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-25 06:09:39.463068 
| orchestrator | Wednesday 25 March 2026 06:09:13 +0000 (0:00:01.145) 1:01:30.041 ******* 2026-03-25 06:09:39.463079 | orchestrator | ok: [testbed-node-3] 2026-03-25 06:09:39.463091 | orchestrator | 2026-03-25 06:09:39.463125 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-25 06:09:39.463136 | orchestrator | Wednesday 25 March 2026 06:09:14 +0000 (0:00:01.176) 1:01:31.218 ******* 2026-03-25 06:09:39.463146 | orchestrator | skipping: [testbed-node-3] 2026-03-25 06:09:39.463158 | orchestrator | 2026-03-25 06:09:39.463169 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-25 06:09:39.463180 | orchestrator | Wednesday 25 March 2026 06:09:15 +0000 (0:00:01.174) 1:01:32.392 ******* 2026-03-25 06:09:39.463192 | orchestrator | skipping: [testbed-node-3] 2026-03-25 06:09:39.463202 | orchestrator | 2026-03-25 06:09:39.463212 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-25 06:09:39.463221 | orchestrator | Wednesday 25 March 2026 06:09:16 +0000 (0:00:01.198) 1:01:33.591 ******* 2026-03-25 06:09:39.463230 | orchestrator | skipping: [testbed-node-3] 2026-03-25 06:09:39.463240 | orchestrator | 2026-03-25 06:09:39.463250 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-25 06:09:39.463258 | orchestrator | Wednesday 25 March 2026 06:09:17 +0000 (0:00:01.213) 1:01:34.804 ******* 2026-03-25 06:09:39.463266 | orchestrator | ok: [testbed-node-3] 2026-03-25 06:09:39.463276 | orchestrator | 2026-03-25 06:09:39.463285 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-25 06:09:39.463295 | orchestrator | Wednesday 25 March 2026 06:09:18 +0000 (0:00:01.163) 1:01:35.968 ******* 2026-03-25 06:09:39.463305 | orchestrator | ok: [testbed-node-3] 2026-03-25 06:09:39.463315 | orchestrator | 2026-03-25 06:09:39.463325 
| orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-03-25 06:09:39.463335 | orchestrator | Wednesday 25 March 2026 06:09:20 +0000 (0:00:01.225) 1:01:37.193 ******* 2026-03-25 06:09:39.463345 | orchestrator | skipping: [testbed-node-3] 2026-03-25 06:09:39.463354 | orchestrator | 2026-03-25 06:09:39.463364 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-03-25 06:09:39.463375 | orchestrator | Wednesday 25 March 2026 06:09:21 +0000 (0:00:01.192) 1:01:38.386 ******* 2026-03-25 06:09:39.463385 | orchestrator | skipping: [testbed-node-3] 2026-03-25 06:09:39.463395 | orchestrator | 2026-03-25 06:09:39.463405 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-03-25 06:09:39.463416 | orchestrator | Wednesday 25 March 2026 06:09:22 +0000 (0:00:01.145) 1:01:39.531 ******* 2026-03-25 06:09:39.463427 | orchestrator | skipping: [testbed-node-3] 2026-03-25 06:09:39.463451 | orchestrator | 2026-03-25 06:09:39.463463 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-03-25 06:09:39.463474 | orchestrator | Wednesday 25 March 2026 06:09:23 +0000 (0:00:01.208) 1:01:40.740 ******* 2026-03-25 06:09:39.463486 | orchestrator | skipping: [testbed-node-3] 2026-03-25 06:09:39.463509 | orchestrator | 2026-03-25 06:09:39.463520 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-03-25 06:09:39.463532 | orchestrator | Wednesday 25 March 2026 06:09:24 +0000 (0:00:01.195) 1:01:41.935 ******* 2026-03-25 06:09:39.463542 | orchestrator | skipping: [testbed-node-3] 2026-03-25 06:09:39.463553 | orchestrator | 2026-03-25 06:09:39.463562 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-03-25 06:09:39.463574 | orchestrator | Wednesday 25 March 2026 06:09:26 +0000 (0:00:01.125) 1:01:43.061 ******* 
2026-03-25 06:09:39.463585 | orchestrator | skipping: [testbed-node-3] 2026-03-25 06:09:39.463596 | orchestrator | 2026-03-25 06:09:39.463607 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-03-25 06:09:39.463619 | orchestrator | Wednesday 25 March 2026 06:09:27 +0000 (0:00:01.139) 1:01:44.201 ******* 2026-03-25 06:09:39.463630 | orchestrator | skipping: [testbed-node-3] 2026-03-25 06:09:39.463642 | orchestrator | 2026-03-25 06:09:39.463653 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-03-25 06:09:39.463676 | orchestrator | Wednesday 25 March 2026 06:09:28 +0000 (0:00:01.118) 1:01:45.319 ******* 2026-03-25 06:09:39.463688 | orchestrator | skipping: [testbed-node-3] 2026-03-25 06:09:39.463699 | orchestrator | 2026-03-25 06:09:39.463711 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-03-25 06:09:39.463722 | orchestrator | Wednesday 25 March 2026 06:09:29 +0000 (0:00:01.134) 1:01:46.453 ******* 2026-03-25 06:09:39.463734 | orchestrator | skipping: [testbed-node-3] 2026-03-25 06:09:39.463745 | orchestrator | 2026-03-25 06:09:39.463757 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-03-25 06:09:39.463768 | orchestrator | Wednesday 25 March 2026 06:09:30 +0000 (0:00:01.129) 1:01:47.582 ******* 2026-03-25 06:09:39.463780 | orchestrator | skipping: [testbed-node-3] 2026-03-25 06:09:39.463791 | orchestrator | 2026-03-25 06:09:39.463803 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-03-25 06:09:39.463814 | orchestrator | Wednesday 25 March 2026 06:09:31 +0000 (0:00:01.127) 1:01:48.710 ******* 2026-03-25 06:09:39.463825 | orchestrator | skipping: [testbed-node-3] 2026-03-25 06:09:39.463858 | orchestrator | 2026-03-25 06:09:39.463870 | orchestrator | TASK [ceph-common : Include selinux.yml] 
*************************************** 2026-03-25 06:09:39.463880 | orchestrator | Wednesday 25 March 2026 06:09:32 +0000 (0:00:01.207) 1:01:49.917 ******* 2026-03-25 06:09:39.463891 | orchestrator | skipping: [testbed-node-3] 2026-03-25 06:09:39.463903 | orchestrator | 2026-03-25 06:09:39.463914 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-03-25 06:09:39.463927 | orchestrator | Wednesday 25 March 2026 06:09:34 +0000 (0:00:01.133) 1:01:51.051 ******* 2026-03-25 06:09:39.463939 | orchestrator | ok: [testbed-node-3] 2026-03-25 06:09:39.463951 | orchestrator | 2026-03-25 06:09:39.463964 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-03-25 06:09:39.463976 | orchestrator | Wednesday 25 March 2026 06:09:35 +0000 (0:00:01.962) 1:01:53.014 ******* 2026-03-25 06:09:39.463989 | orchestrator | ok: [testbed-node-3] 2026-03-25 06:09:39.464001 | orchestrator | 2026-03-25 06:09:39.464013 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-03-25 06:09:39.464025 | orchestrator | Wednesday 25 March 2026 06:09:38 +0000 (0:00:02.224) 1:01:55.238 ******* 2026-03-25 06:09:39.464038 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3 2026-03-25 06:09:39.464051 | orchestrator | 2026-03-25 06:09:39.464062 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-03-25 06:09:39.464087 | orchestrator | Wednesday 25 March 2026 06:09:39 +0000 (0:00:01.220) 1:01:56.459 ******* 2026-03-25 06:10:26.316261 | orchestrator | skipping: [testbed-node-3] 2026-03-25 06:10:26.316407 | orchestrator | 2026-03-25 06:10:26.316436 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-03-25 06:10:26.316457 | orchestrator | Wednesday 25 March 2026 06:09:40 +0000 (0:00:01.166) 1:01:57.626 ******* 
2026-03-25 06:10:26.316510 | orchestrator | skipping: [testbed-node-3] 2026-03-25 06:10:26.316522 | orchestrator | 2026-03-25 06:10:26.316534 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-03-25 06:10:26.316545 | orchestrator | Wednesday 25 March 2026 06:09:41 +0000 (0:00:01.160) 1:01:58.786 ******* 2026-03-25 06:10:26.316555 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-03-25 06:10:26.316566 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-03-25 06:10:26.316578 | orchestrator | 2026-03-25 06:10:26.316588 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-03-25 06:10:26.316599 | orchestrator | Wednesday 25 March 2026 06:09:43 +0000 (0:00:01.762) 1:02:00.549 ******* 2026-03-25 06:10:26.316610 | orchestrator | ok: [testbed-node-3] 2026-03-25 06:10:26.316621 | orchestrator | 2026-03-25 06:10:26.316632 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-03-25 06:10:26.316643 | orchestrator | Wednesday 25 March 2026 06:09:44 +0000 (0:00:01.427) 1:02:01.976 ******* 2026-03-25 06:10:26.316654 | orchestrator | skipping: [testbed-node-3] 2026-03-25 06:10:26.316665 | orchestrator | 2026-03-25 06:10:26.316675 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-03-25 06:10:26.316686 | orchestrator | Wednesday 25 March 2026 06:09:46 +0000 (0:00:01.220) 1:02:03.197 ******* 2026-03-25 06:10:26.316697 | orchestrator | skipping: [testbed-node-3] 2026-03-25 06:10:26.316707 | orchestrator | 2026-03-25 06:10:26.316718 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-03-25 06:10:26.316728 | orchestrator | Wednesday 25 March 2026 06:09:47 +0000 (0:00:01.266) 1:02:04.464 ******* 2026-03-25 06:10:26.316739 | orchestrator | 
skipping: [testbed-node-3] 2026-03-25 06:10:26.316750 | orchestrator | 2026-03-25 06:10:26.316760 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-03-25 06:10:26.316771 | orchestrator | Wednesday 25 March 2026 06:09:48 +0000 (0:00:01.209) 1:02:05.673 ******* 2026-03-25 06:10:26.316782 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3 2026-03-25 06:10:26.316795 | orchestrator | 2026-03-25 06:10:26.316807 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-03-25 06:10:26.316818 | orchestrator | Wednesday 25 March 2026 06:09:49 +0000 (0:00:01.158) 1:02:06.832 ******* 2026-03-25 06:10:26.316830 | orchestrator | ok: [testbed-node-3] 2026-03-25 06:10:26.316867 | orchestrator | 2026-03-25 06:10:26.316880 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-03-25 06:10:26.316892 | orchestrator | Wednesday 25 March 2026 06:09:51 +0000 (0:00:01.784) 1:02:08.616 ******* 2026-03-25 06:10:26.316904 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-25 06:10:26.316917 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-25 06:10:26.316929 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)  2026-03-25 06:10:26.316941 | orchestrator | skipping: [testbed-node-3] 2026-03-25 06:10:26.316953 | orchestrator | 2026-03-25 06:10:26.316965 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-03-25 06:10:26.316991 | orchestrator | Wednesday 25 March 2026 06:09:52 +0000 (0:00:01.197) 1:02:09.814 ******* 2026-03-25 06:10:26.317002 | orchestrator | skipping: [testbed-node-3] 2026-03-25 06:10:26.317013 | orchestrator | 2026-03-25 06:10:26.317023 | orchestrator | TASK [ceph-container-common : Export local 
ceph dev image] ********************* 2026-03-25 06:10:26.317034 | orchestrator | Wednesday 25 March 2026 06:09:53 +0000 (0:00:01.145) 1:02:10.960 ******* 2026-03-25 06:10:26.317045 | orchestrator | skipping: [testbed-node-3] 2026-03-25 06:10:26.317055 | orchestrator | 2026-03-25 06:10:26.317066 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-03-25 06:10:26.317077 | orchestrator | Wednesday 25 March 2026 06:09:55 +0000 (0:00:01.206) 1:02:12.166 ******* 2026-03-25 06:10:26.317100 | orchestrator | skipping: [testbed-node-3] 2026-03-25 06:10:26.317119 | orchestrator | 2026-03-25 06:10:26.317136 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-03-25 06:10:26.317151 | orchestrator | Wednesday 25 March 2026 06:09:56 +0000 (0:00:01.186) 1:02:13.353 ******* 2026-03-25 06:10:26.317167 | orchestrator | skipping: [testbed-node-3] 2026-03-25 06:10:26.317183 | orchestrator | 2026-03-25 06:10:26.317200 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-03-25 06:10:26.317218 | orchestrator | Wednesday 25 March 2026 06:09:57 +0000 (0:00:01.173) 1:02:14.526 ******* 2026-03-25 06:10:26.317234 | orchestrator | skipping: [testbed-node-3] 2026-03-25 06:10:26.317253 | orchestrator | 2026-03-25 06:10:26.317271 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-03-25 06:10:26.317291 | orchestrator | Wednesday 25 March 2026 06:09:58 +0000 (0:00:01.152) 1:02:15.679 ******* 2026-03-25 06:10:26.317310 | orchestrator | ok: [testbed-node-3] 2026-03-25 06:10:26.317329 | orchestrator | 2026-03-25 06:10:26.317347 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-03-25 06:10:26.317364 | orchestrator | Wednesday 25 March 2026 06:10:01 +0000 (0:00:02.515) 1:02:18.195 ******* 2026-03-25 06:10:26.317382 | orchestrator | ok: 
[testbed-node-3] 2026-03-25 06:10:26.317398 | orchestrator | 2026-03-25 06:10:26.317414 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-03-25 06:10:26.317432 | orchestrator | Wednesday 25 March 2026 06:10:02 +0000 (0:00:01.192) 1:02:19.388 ******* 2026-03-25 06:10:26.317540 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3 2026-03-25 06:10:26.317553 | orchestrator | 2026-03-25 06:10:26.317564 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-03-25 06:10:26.317599 | orchestrator | Wednesday 25 March 2026 06:10:03 +0000 (0:00:01.283) 1:02:20.671 ******* 2026-03-25 06:10:26.317610 | orchestrator | skipping: [testbed-node-3] 2026-03-25 06:10:26.317621 | orchestrator | 2026-03-25 06:10:26.317632 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-03-25 06:10:26.317643 | orchestrator | Wednesday 25 March 2026 06:10:04 +0000 (0:00:01.150) 1:02:21.822 ******* 2026-03-25 06:10:26.317654 | orchestrator | skipping: [testbed-node-3] 2026-03-25 06:10:26.317664 | orchestrator | 2026-03-25 06:10:26.317675 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-03-25 06:10:26.317686 | orchestrator | Wednesday 25 March 2026 06:10:05 +0000 (0:00:01.115) 1:02:22.937 ******* 2026-03-25 06:10:26.317697 | orchestrator | skipping: [testbed-node-3] 2026-03-25 06:10:26.317707 | orchestrator | 2026-03-25 06:10:26.317718 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-03-25 06:10:26.317729 | orchestrator | Wednesday 25 March 2026 06:10:07 +0000 (0:00:01.139) 1:02:24.077 ******* 2026-03-25 06:10:26.317740 | orchestrator | skipping: [testbed-node-3] 2026-03-25 06:10:26.317751 | orchestrator | 2026-03-25 06:10:26.317764 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release 
nautilus] ****************** 2026-03-25 06:10:26.317785 | orchestrator | Wednesday 25 March 2026 06:10:08 +0000 (0:00:01.116) 1:02:25.193 ******* 2026-03-25 06:10:26.317796 | orchestrator | skipping: [testbed-node-3] 2026-03-25 06:10:26.317807 | orchestrator | 2026-03-25 06:10:26.317818 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-03-25 06:10:26.317828 | orchestrator | Wednesday 25 March 2026 06:10:09 +0000 (0:00:01.119) 1:02:26.313 ******* 2026-03-25 06:10:26.317871 | orchestrator | skipping: [testbed-node-3] 2026-03-25 06:10:26.317882 | orchestrator | 2026-03-25 06:10:26.317893 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-03-25 06:10:26.317904 | orchestrator | Wednesday 25 March 2026 06:10:10 +0000 (0:00:01.155) 1:02:27.469 ******* 2026-03-25 06:10:26.317914 | orchestrator | skipping: [testbed-node-3] 2026-03-25 06:10:26.317925 | orchestrator | 2026-03-25 06:10:26.317935 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-03-25 06:10:26.317946 | orchestrator | Wednesday 25 March 2026 06:10:11 +0000 (0:00:01.159) 1:02:28.629 ******* 2026-03-25 06:10:26.317968 | orchestrator | skipping: [testbed-node-3] 2026-03-25 06:10:26.317979 | orchestrator | 2026-03-25 06:10:26.317990 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-03-25 06:10:26.318000 | orchestrator | Wednesday 25 March 2026 06:10:12 +0000 (0:00:01.140) 1:02:29.769 ******* 2026-03-25 06:10:26.318011 | orchestrator | ok: [testbed-node-3] 2026-03-25 06:10:26.318090 | orchestrator | 2026-03-25 06:10:26.318101 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-03-25 06:10:26.318112 | orchestrator | Wednesday 25 March 2026 06:10:13 +0000 (0:00:01.191) 1:02:30.960 ******* 2026-03-25 06:10:26.318122 | orchestrator | included: 
/ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3 2026-03-25 06:10:26.318133 | orchestrator | 2026-03-25 06:10:26.318144 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-03-25 06:10:26.318154 | orchestrator | Wednesday 25 March 2026 06:10:15 +0000 (0:00:01.147) 1:02:32.107 ******* 2026-03-25 06:10:26.318165 | orchestrator | ok: [testbed-node-3] => (item=/etc/ceph) 2026-03-25 06:10:26.318176 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/) 2026-03-25 06:10:26.318186 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mon) 2026-03-25 06:10:26.318197 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd) 2026-03-25 06:10:26.318207 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mds) 2026-03-25 06:10:26.318226 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/tmp) 2026-03-25 06:10:26.318237 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/crash) 2026-03-25 06:10:26.318247 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/radosgw) 2026-03-25 06:10:26.318258 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw) 2026-03-25 06:10:26.318269 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr) 2026-03-25 06:10:26.318279 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds) 2026-03-25 06:10:26.318289 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd) 2026-03-25 06:10:26.318300 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd) 2026-03-25 06:10:26.318311 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-03-25 06:10:26.318321 | orchestrator | ok: [testbed-node-3] => (item=/var/run/ceph) 2026-03-25 06:10:26.318332 | orchestrator | ok: [testbed-node-3] => (item=/var/log/ceph) 2026-03-25 06:10:26.318342 | orchestrator | 2026-03-25 06:10:26.318353 | orchestrator | 
TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-03-25 06:10:26.318364 | orchestrator | Wednesday 25 March 2026 06:10:21 +0000 (0:00:06.551) 1:02:38.659 ******* 2026-03-25 06:10:26.318374 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3 2026-03-25 06:10:26.318385 | orchestrator | 2026-03-25 06:10:26.318395 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] ***************** 2026-03-25 06:10:26.318406 | orchestrator | Wednesday 25 March 2026 06:10:22 +0000 (0:00:01.163) 1:02:39.823 ******* 2026-03-25 06:10:26.318417 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-03-25 06:10:26.318429 | orchestrator | 2026-03-25 06:10:26.318439 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2026-03-25 06:10:26.318450 | orchestrator | Wednesday 25 March 2026 06:10:24 +0000 (0:00:01.519) 1:02:41.343 ******* 2026-03-25 06:10:26.318460 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-03-25 06:10:26.318471 | orchestrator | 2026-03-25 06:10:26.318482 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-03-25 06:10:26.318503 | orchestrator | Wednesday 25 March 2026 06:10:26 +0000 (0:00:01.975) 1:02:43.319 ******* 2026-03-25 06:11:17.929859 | orchestrator | skipping: [testbed-node-3] 2026-03-25 06:11:17.929978 | orchestrator | 2026-03-25 06:11:17.929992 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-03-25 06:11:17.930003 | orchestrator | Wednesday 25 March 2026 06:10:27 +0000 (0:00:01.170) 1:02:44.489 ******* 2026-03-25 06:11:17.930012 | orchestrator | skipping: [testbed-node-3] 2026-03-25 06:11:17.930082 | 
orchestrator | 2026-03-25 06:11:17.930091 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-03-25 06:11:17.930100 | orchestrator | Wednesday 25 March 2026 06:10:28 +0000 (0:00:01.127) 1:02:45.617 ******* 2026-03-25 06:11:17.930109 | orchestrator | skipping: [testbed-node-3] 2026-03-25 06:11:17.930117 | orchestrator | 2026-03-25 06:11:17.930126 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-03-25 06:11:17.930135 | orchestrator | Wednesday 25 March 2026 06:10:29 +0000 (0:00:01.203) 1:02:46.820 ******* 2026-03-25 06:11:17.930143 | orchestrator | skipping: [testbed-node-3] 2026-03-25 06:11:17.930152 | orchestrator | 2026-03-25 06:11:17.930161 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-03-25 06:11:17.930169 | orchestrator | Wednesday 25 March 2026 06:10:30 +0000 (0:00:01.159) 1:02:47.980 ******* 2026-03-25 06:11:17.930178 | orchestrator | skipping: [testbed-node-3] 2026-03-25 06:11:17.930186 | orchestrator | 2026-03-25 06:11:17.930195 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-03-25 06:11:17.930205 | orchestrator | Wednesday 25 March 2026 06:10:32 +0000 (0:00:01.155) 1:02:49.136 ******* 2026-03-25 06:11:17.930213 | orchestrator | skipping: [testbed-node-3] 2026-03-25 06:11:17.930222 | orchestrator | 2026-03-25 06:11:17.930230 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-03-25 06:11:17.930239 | orchestrator | Wednesday 25 March 2026 06:10:33 +0000 (0:00:01.160) 1:02:50.296 ******* 2026-03-25 06:11:17.930247 | orchestrator | skipping: [testbed-node-3] 2026-03-25 06:11:17.930256 | orchestrator | 2026-03-25 06:11:17.930265 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] 
*** 2026-03-25 06:11:17.930273 | orchestrator | Wednesday 25 March 2026 06:10:34 +0000 (0:00:01.142) 1:02:51.439 ******* 2026-03-25 06:11:17.930282 | orchestrator | skipping: [testbed-node-3] 2026-03-25 06:11:17.930291 | orchestrator | 2026-03-25 06:11:17.930299 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-03-25 06:11:17.930308 | orchestrator | Wednesday 25 March 2026 06:10:35 +0000 (0:00:01.159) 1:02:52.598 ******* 2026-03-25 06:11:17.930316 | orchestrator | skipping: [testbed-node-3] 2026-03-25 06:11:17.930325 | orchestrator | 2026-03-25 06:11:17.930333 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-03-25 06:11:17.930343 | orchestrator | Wednesday 25 March 2026 06:10:36 +0000 (0:00:01.195) 1:02:53.793 ******* 2026-03-25 06:11:17.930354 | orchestrator | skipping: [testbed-node-3] 2026-03-25 06:11:17.930364 | orchestrator | 2026-03-25 06:11:17.930373 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-03-25 06:11:17.930384 | orchestrator | Wednesday 25 March 2026 06:10:37 +0000 (0:00:01.177) 1:02:54.971 ******* 2026-03-25 06:11:17.930393 | orchestrator | skipping: [testbed-node-3] 2026-03-25 06:11:17.930403 | orchestrator | 2026-03-25 06:11:17.930413 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-03-25 06:11:17.930423 | orchestrator | Wednesday 25 March 2026 06:10:39 +0000 (0:00:01.159) 1:02:56.130 ******* 2026-03-25 06:11:17.930446 | orchestrator | changed: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2026-03-25 06:11:17.930456 | orchestrator | 2026-03-25 06:11:17.930466 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-03-25 06:11:17.930475 | orchestrator | Wednesday 25 March 2026 06:10:43 +0000 (0:00:04.404) 1:03:00.534 ******* 2026-03-25 06:11:17.930486 | orchestrator | 
ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-03-25 06:11:17.930496 | orchestrator | 2026-03-25 06:11:17.930506 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-03-25 06:11:17.930525 | orchestrator | Wednesday 25 March 2026 06:10:44 +0000 (0:00:01.205) 1:03:01.740 ******* 2026-03-25 06:11:17.930538 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}]) 2026-03-25 06:11:17.930551 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}]) 2026-03-25 06:11:17.930562 | orchestrator | 2026-03-25 06:11:17.930572 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-03-25 06:11:17.930581 | orchestrator | Wednesday 25 March 2026 06:10:49 +0000 (0:00:04.784) 1:03:06.525 ******* 2026-03-25 06:11:17.930591 | orchestrator | skipping: [testbed-node-3] 2026-03-25 06:11:17.930600 | orchestrator | 2026-03-25 06:11:17.930610 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-03-25 06:11:17.930620 | orchestrator | Wednesday 25 March 2026 06:10:50 +0000 (0:00:01.167) 1:03:07.692 ******* 2026-03-25 06:11:17.930629 | orchestrator | skipping: [testbed-node-3] 2026-03-25 06:11:17.930639 | orchestrator | 2026-03-25 06:11:17.930648 | orchestrator | TASK [ceph-facts : Set current 
radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-25 06:11:17.930674 | orchestrator | Wednesday 25 March 2026 06:10:51 +0000 (0:00:01.130) 1:03:08.823 ******* 2026-03-25 06:11:17.930684 | orchestrator | skipping: [testbed-node-3] 2026-03-25 06:11:17.930694 | orchestrator | 2026-03-25 06:11:17.930703 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-03-25 06:11:17.930714 | orchestrator | Wednesday 25 March 2026 06:10:52 +0000 (0:00:01.144) 1:03:09.967 ******* 2026-03-25 06:11:17.930724 | orchestrator | skipping: [testbed-node-3] 2026-03-25 06:11:17.930734 | orchestrator | 2026-03-25 06:11:17.930744 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-03-25 06:11:17.930754 | orchestrator | Wednesday 25 March 2026 06:10:54 +0000 (0:00:01.180) 1:03:11.148 ******* 2026-03-25 06:11:17.930764 | orchestrator | skipping: [testbed-node-3] 2026-03-25 06:11:17.930773 | orchestrator | 2026-03-25 06:11:17.930781 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-25 06:11:17.930813 | orchestrator | Wednesday 25 March 2026 06:10:55 +0000 (0:00:01.166) 1:03:12.315 ******* 2026-03-25 06:11:17.930823 | orchestrator | ok: [testbed-node-3] 2026-03-25 06:11:17.930832 | orchestrator | 2026-03-25 06:11:17.930841 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-25 06:11:17.930850 | orchestrator | Wednesday 25 March 2026 06:10:56 +0000 (0:00:01.249) 1:03:13.565 ******* 2026-03-25 06:11:17.930858 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-25 06:11:17.930867 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-25 06:11:17.930876 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-25 06:11:17.930884 | orchestrator | skipping: 
[testbed-node-3] 2026-03-25 06:11:17.930893 | orchestrator | 2026-03-25 06:11:17.930902 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-25 06:11:17.930910 | orchestrator | Wednesday 25 March 2026 06:10:58 +0000 (0:00:01.862) 1:03:15.428 ******* 2026-03-25 06:11:17.930919 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-25 06:11:17.930928 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-25 06:11:17.930936 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-25 06:11:17.930945 | orchestrator | skipping: [testbed-node-3] 2026-03-25 06:11:17.930954 | orchestrator | 2026-03-25 06:11:17.930968 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-25 06:11:17.930977 | orchestrator | Wednesday 25 March 2026 06:11:00 +0000 (0:00:01.871) 1:03:17.300 ******* 2026-03-25 06:11:17.930986 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-25 06:11:17.930994 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-25 06:11:17.931003 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-25 06:11:17.931011 | orchestrator | skipping: [testbed-node-3] 2026-03-25 06:11:17.931020 | orchestrator | 2026-03-25 06:11:17.931028 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-25 06:11:17.931037 | orchestrator | Wednesday 25 March 2026 06:11:02 +0000 (0:00:01.915) 1:03:19.215 ******* 2026-03-25 06:11:17.931045 | orchestrator | ok: [testbed-node-3] 2026-03-25 06:11:17.931054 | orchestrator | 2026-03-25 06:11:17.931062 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-25 06:11:17.931071 | orchestrator | Wednesday 25 March 2026 06:11:03 +0000 (0:00:01.154) 1:03:20.370 ******* 2026-03-25 06:11:17.931079 | orchestrator | ok: 
[testbed-node-3] => (item=0) 2026-03-25 06:11:17.931088 | orchestrator | 2026-03-25 06:11:17.931096 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-03-25 06:11:17.931110 | orchestrator | Wednesday 25 March 2026 06:11:04 +0000 (0:00:01.408) 1:03:21.779 ******* 2026-03-25 06:11:17.931118 | orchestrator | ok: [testbed-node-3] 2026-03-25 06:11:17.931127 | orchestrator | 2026-03-25 06:11:17.931135 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2026-03-25 06:11:17.931144 | orchestrator | Wednesday 25 March 2026 06:11:06 +0000 (0:00:01.731) 1:03:23.511 ******* 2026-03-25 06:11:17.931152 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3 2026-03-25 06:11:17.931161 | orchestrator | 2026-03-25 06:11:17.931170 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-03-25 06:11:17.931178 | orchestrator | Wednesday 25 March 2026 06:11:07 +0000 (0:00:01.495) 1:03:25.006 ******* 2026-03-25 06:11:17.931187 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-25 06:11:17.931195 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-03-25 06:11:17.931204 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-25 06:11:17.931213 | orchestrator | 2026-03-25 06:11:17.931221 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-03-25 06:11:17.931230 | orchestrator | Wednesday 25 March 2026 06:11:11 +0000 (0:00:03.202) 1:03:28.209 ******* 2026-03-25 06:11:17.931238 | orchestrator | ok: [testbed-node-3] => (item=None) 2026-03-25 06:11:17.931247 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-03-25 06:11:17.931256 | orchestrator | ok: [testbed-node-3] 2026-03-25 06:11:17.931264 | orchestrator | 2026-03-25 06:11:17.931272 | orchestrator | TASK [ceph-rgw : Copy 
SSL certificate & key data to certificate path] ********** 2026-03-25 06:11:17.931281 | orchestrator | Wednesday 25 March 2026 06:11:13 +0000 (0:00:02.025) 1:03:30.234 ******* 2026-03-25 06:11:17.931289 | orchestrator | skipping: [testbed-node-3] 2026-03-25 06:11:17.931298 | orchestrator | 2026-03-25 06:11:17.931306 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2026-03-25 06:11:17.931315 | orchestrator | Wednesday 25 March 2026 06:11:14 +0000 (0:00:01.137) 1:03:31.372 ******* 2026-03-25 06:11:17.931323 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3 2026-03-25 06:11:17.931332 | orchestrator | 2026-03-25 06:11:17.931341 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2026-03-25 06:11:17.931349 | orchestrator | Wednesday 25 March 2026 06:11:15 +0000 (0:00:01.509) 1:03:32.882 ******* 2026-03-25 06:11:17.931363 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-03-25 06:12:31.137082 | orchestrator | 2026-03-25 06:12:31.137229 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2026-03-25 06:12:31.137273 | orchestrator | Wednesday 25 March 2026 06:11:17 +0000 (0:00:02.055) 1:03:34.937 ******* 2026-03-25 06:12:31.137286 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-25 06:12:31.137299 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-03-25 06:12:31.137311 | orchestrator | 2026-03-25 06:12:31.137322 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-03-25 06:12:31.137333 | orchestrator | Wednesday 25 March 2026 06:11:22 +0000 (0:00:04.848) 1:03:39.786 ******* 
2026-03-25 06:12:31.137344 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-25 06:12:31.137356 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-25 06:12:31.137367 | orchestrator | 2026-03-25 06:12:31.137377 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-03-25 06:12:31.137388 | orchestrator | Wednesday 25 March 2026 06:11:25 +0000 (0:00:03.084) 1:03:42.871 ******* 2026-03-25 06:12:31.137400 | orchestrator | ok: [testbed-node-3] => (item=None) 2026-03-25 06:12:31.137412 | orchestrator | ok: [testbed-node-3] 2026-03-25 06:12:31.137424 | orchestrator | 2026-03-25 06:12:31.137434 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2026-03-25 06:12:31.137445 | orchestrator | Wednesday 25 March 2026 06:11:27 +0000 (0:00:01.968) 1:03:44.840 ******* 2026-03-25 06:12:31.137456 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2026-03-25 06:12:31.137467 | orchestrator | 2026-03-25 06:12:31.137477 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2026-03-25 06:12:31.137488 | orchestrator | Wednesday 25 March 2026 06:11:29 +0000 (0:00:01.587) 1:03:46.427 ******* 2026-03-25 06:12:31.137498 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-25 06:12:31.137510 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-25 06:12:31.137521 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-25 06:12:31.137532 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 
'size': 3, 'type': 'replicated'}})  2026-03-25 06:12:31.137543 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-25 06:12:31.137554 | orchestrator | skipping: [testbed-node-3] 2026-03-25 06:12:31.137564 | orchestrator | 2026-03-25 06:12:31.137575 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2026-03-25 06:12:31.137586 | orchestrator | Wednesday 25 March 2026 06:11:31 +0000 (0:00:01.618) 1:03:48.045 ******* 2026-03-25 06:12:31.137661 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-25 06:12:31.137672 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-25 06:12:31.137683 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-25 06:12:31.137694 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-25 06:12:31.137705 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-25 06:12:31.137716 | orchestrator | skipping: [testbed-node-3] 2026-03-25 06:12:31.137726 | orchestrator | 2026-03-25 06:12:31.137737 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2026-03-25 06:12:31.137757 | orchestrator | Wednesday 25 March 2026 06:11:32 +0000 (0:00:01.603) 1:03:49.649 ******* 2026-03-25 06:12:31.137768 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-25 06:12:31.137781 
| orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-25 06:12:31.137792 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-25 06:12:31.137803 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-25 06:12:31.137816 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-25 06:12:31.137827 | orchestrator | 2026-03-25 06:12:31.137838 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2026-03-25 06:12:31.137868 | orchestrator | Wednesday 25 March 2026 06:12:03 +0000 (0:00:31.129) 1:04:20.778 ******* 2026-03-25 06:12:31.137879 | orchestrator | skipping: [testbed-node-3] 2026-03-25 06:12:31.137890 | orchestrator | 2026-03-25 06:12:31.137901 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2026-03-25 06:12:31.137912 | orchestrator | Wednesday 25 March 2026 06:12:04 +0000 (0:00:01.167) 1:04:21.945 ******* 2026-03-25 06:12:31.137923 | orchestrator | skipping: [testbed-node-3] 2026-03-25 06:12:31.137934 | orchestrator | 2026-03-25 06:12:31.137944 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2026-03-25 06:12:31.137955 | orchestrator | Wednesday 25 March 2026 06:12:06 +0000 (0:00:01.143) 1:04:23.089 ******* 2026-03-25 06:12:31.137966 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3 2026-03-25 06:12:31.137976 | orchestrator | 2026-03-25 06:12:31.137987 | orchestrator | TASK [ceph-rgw : Include_task 
systemd.yml] ************************************* 2026-03-25 06:12:31.137997 | orchestrator | Wednesday 25 March 2026 06:12:07 +0000 (0:00:01.475) 1:04:24.564 ******* 2026-03-25 06:12:31.138008 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3 2026-03-25 06:12:31.138092 | orchestrator | 2026-03-25 06:12:31.138105 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2026-03-25 06:12:31.138115 | orchestrator | Wednesday 25 March 2026 06:12:09 +0000 (0:00:01.693) 1:04:26.257 ******* 2026-03-25 06:12:31.138126 | orchestrator | ok: [testbed-node-3] 2026-03-25 06:12:31.138137 | orchestrator | 2026-03-25 06:12:31.138148 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2026-03-25 06:12:31.138158 | orchestrator | Wednesday 25 March 2026 06:12:11 +0000 (0:00:02.050) 1:04:28.308 ******* 2026-03-25 06:12:31.138169 | orchestrator | ok: [testbed-node-3] 2026-03-25 06:12:31.138180 | orchestrator | 2026-03-25 06:12:31.138190 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2026-03-25 06:12:31.138201 | orchestrator | Wednesday 25 March 2026 06:12:13 +0000 (0:00:01.947) 1:04:30.256 ******* 2026-03-25 06:12:31.138211 | orchestrator | ok: [testbed-node-3] 2026-03-25 06:12:31.138222 | orchestrator | 2026-03-25 06:12:31.138233 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2026-03-25 06:12:31.138243 | orchestrator | Wednesday 25 March 2026 06:12:15 +0000 (0:00:02.230) 1:04:32.487 ******* 2026-03-25 06:12:31.138254 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-03-25 06:12:31.138265 | orchestrator | 2026-03-25 06:12:31.138275 | orchestrator | PLAY [Upgrade ceph rgws cluster] *********************************************** 2026-03-25 06:12:31.138286 | 
orchestrator | 2026-03-25 06:12:31.138297 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-03-25 06:12:31.138307 | orchestrator | Wednesday 25 March 2026 06:12:18 +0000 (0:00:02.820) 1:04:35.307 ******* 2026-03-25 06:12:31.138326 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-4 2026-03-25 06:12:31.138337 | orchestrator | 2026-03-25 06:12:31.138348 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-03-25 06:12:31.138358 | orchestrator | Wednesday 25 March 2026 06:12:19 +0000 (0:00:01.131) 1:04:36.438 ******* 2026-03-25 06:12:31.138369 | orchestrator | ok: [testbed-node-4] 2026-03-25 06:12:31.138380 | orchestrator | 2026-03-25 06:12:31.138390 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-03-25 06:12:31.138407 | orchestrator | Wednesday 25 March 2026 06:12:20 +0000 (0:00:01.500) 1:04:37.938 ******* 2026-03-25 06:12:31.138418 | orchestrator | ok: [testbed-node-4] 2026-03-25 06:12:31.138431 | orchestrator | 2026-03-25 06:12:31.138450 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-03-25 06:12:31.138468 | orchestrator | Wednesday 25 March 2026 06:12:22 +0000 (0:00:01.163) 1:04:39.102 ******* 2026-03-25 06:12:31.138485 | orchestrator | ok: [testbed-node-4] 2026-03-25 06:12:31.138503 | orchestrator | 2026-03-25 06:12:31.138521 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-03-25 06:12:31.138538 | orchestrator | Wednesday 25 March 2026 06:12:23 +0000 (0:00:01.437) 1:04:40.540 ******* 2026-03-25 06:12:31.138556 | orchestrator | ok: [testbed-node-4] 2026-03-25 06:12:31.138573 | orchestrator | 2026-03-25 06:12:31.138643 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-03-25 06:12:31.138664 | orchestrator | Wednesday 
25 March 2026 06:12:24 +0000 (0:00:01.161) 1:04:41.702 ******* 2026-03-25 06:12:31.138681 | orchestrator | ok: [testbed-node-4] 2026-03-25 06:12:31.138699 | orchestrator | 2026-03-25 06:12:31.138718 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-03-25 06:12:31.138738 | orchestrator | Wednesday 25 March 2026 06:12:25 +0000 (0:00:01.179) 1:04:42.881 ******* 2026-03-25 06:12:31.138756 | orchestrator | ok: [testbed-node-4] 2026-03-25 06:12:31.138774 | orchestrator | 2026-03-25 06:12:31.138789 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-03-25 06:12:31.138800 | orchestrator | Wednesday 25 March 2026 06:12:27 +0000 (0:00:01.167) 1:04:44.049 ******* 2026-03-25 06:12:31.138811 | orchestrator | skipping: [testbed-node-4] 2026-03-25 06:12:31.138821 | orchestrator | 2026-03-25 06:12:31.138832 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-03-25 06:12:31.138842 | orchestrator | Wednesday 25 March 2026 06:12:28 +0000 (0:00:01.174) 1:04:45.223 ******* 2026-03-25 06:12:31.138853 | orchestrator | ok: [testbed-node-4] 2026-03-25 06:12:31.138864 | orchestrator | 2026-03-25 06:12:31.138874 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-03-25 06:12:31.138885 | orchestrator | Wednesday 25 March 2026 06:12:29 +0000 (0:00:01.152) 1:04:46.376 ******* 2026-03-25 06:12:31.138896 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-25 06:12:31.138906 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-25 06:12:31.138917 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-25 06:12:31.138928 | orchestrator | 2026-03-25 06:12:31.138938 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] 
******************************** 2026-03-25 06:12:31.138960 | orchestrator | Wednesday 25 March 2026 06:12:31 +0000 (0:00:01.760) 1:04:48.136 ******* 2026-03-25 06:12:57.052964 | orchestrator | ok: [testbed-node-4] 2026-03-25 06:12:57.053097 | orchestrator | 2026-03-25 06:12:57.053111 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-03-25 06:12:57.053121 | orchestrator | Wednesday 25 March 2026 06:12:32 +0000 (0:00:01.270) 1:04:49.406 ******* 2026-03-25 06:12:57.053130 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-25 06:12:57.053140 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-25 06:12:57.053172 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-25 06:12:57.053181 | orchestrator | 2026-03-25 06:12:57.053189 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-03-25 06:12:57.053198 | orchestrator | Wednesday 25 March 2026 06:12:35 +0000 (0:00:02.983) 1:04:52.390 ******* 2026-03-25 06:12:57.053207 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-03-25 06:12:57.053215 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-03-25 06:12:57.053222 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-03-25 06:12:57.053231 | orchestrator | skipping: [testbed-node-4] 2026-03-25 06:12:57.053238 | orchestrator | 2026-03-25 06:12:57.053246 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-03-25 06:12:57.053254 | orchestrator | Wednesday 25 March 2026 06:12:36 +0000 (0:00:01.428) 1:04:53.819 ******* 2026-03-25 06:12:57.053263 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not 
containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-03-25 06:12:57.053275 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-03-25 06:12:57.053283 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-03-25 06:12:57.053291 | orchestrator | skipping: [testbed-node-4] 2026-03-25 06:12:57.053299 | orchestrator | 2026-03-25 06:12:57.053307 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-03-25 06:12:57.053315 | orchestrator | Wednesday 25 March 2026 06:12:38 +0000 (0:00:02.028) 1:04:55.848 ******* 2026-03-25 06:12:57.053340 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-25 06:12:57.053353 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-25 06:12:57.053361 | orchestrator | skipping: [testbed-node-4] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-25 06:12:57.053370 | orchestrator | skipping: [testbed-node-4] 2026-03-25 06:12:57.053378 | orchestrator | 2026-03-25 06:12:57.053386 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-03-25 06:12:57.053393 | orchestrator | Wednesday 25 March 2026 06:12:40 +0000 (0:00:01.212) 1:04:57.060 ******* 2026-03-25 06:12:57.053420 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': 'f2f4f0f2e000', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-25 06:12:32.982692', 'end': '2026-03-25 06:12:33.038153', 'delta': '0:00:00.055461', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['f2f4f0f2e000'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-03-25 06:12:57.053438 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': '04618a84c691', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-25 06:12:33.588441', 'end': '2026-03-25 06:12:33.632689', 'delta': '0:00:00.044248', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 
'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['04618a84c691'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-03-25 06:12:57.053447 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': 'da72f46e99c2', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-25 06:12:34.158196', 'end': '2026-03-25 06:12:34.216185', 'delta': '0:00:00.057989', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['da72f46e99c2'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-03-25 06:12:57.053456 | orchestrator | 2026-03-25 06:12:57.053464 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-03-25 06:12:57.053472 | orchestrator | Wednesday 25 March 2026 06:12:41 +0000 (0:00:01.225) 1:04:58.286 ******* 2026-03-25 06:12:57.053480 | orchestrator | ok: [testbed-node-4] 2026-03-25 06:12:57.053489 | orchestrator | 2026-03-25 06:12:57.053498 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-03-25 06:12:57.053507 | orchestrator | Wednesday 25 March 2026 06:12:42 +0000 (0:00:01.275) 1:04:59.562 ******* 2026-03-25 06:12:57.053516 | orchestrator | skipping: [testbed-node-4] 2026-03-25 06:12:57.053546 | orchestrator | 2026-03-25 06:12:57.053555 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] 
********************************* 2026-03-25 06:12:57.053564 | orchestrator | Wednesday 25 March 2026 06:12:44 +0000 (0:00:01.740) 1:05:01.302 ******* 2026-03-25 06:12:57.053573 | orchestrator | ok: [testbed-node-4] 2026-03-25 06:12:57.053582 | orchestrator | 2026-03-25 06:12:57.053596 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-03-25 06:12:57.053606 | orchestrator | Wednesday 25 March 2026 06:12:45 +0000 (0:00:01.167) 1:05:02.470 ******* 2026-03-25 06:12:57.053615 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-03-25 06:12:57.053624 | orchestrator | 2026-03-25 06:12:57.053632 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-25 06:12:57.053642 | orchestrator | Wednesday 25 March 2026 06:12:47 +0000 (0:00:02.041) 1:05:04.512 ******* 2026-03-25 06:12:57.053651 | orchestrator | ok: [testbed-node-4] 2026-03-25 06:12:57.053660 | orchestrator | 2026-03-25 06:12:57.053669 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-03-25 06:12:57.053678 | orchestrator | Wednesday 25 March 2026 06:12:48 +0000 (0:00:01.229) 1:05:05.741 ******* 2026-03-25 06:12:57.053688 | orchestrator | skipping: [testbed-node-4] 2026-03-25 06:12:57.053697 | orchestrator | 2026-03-25 06:12:57.053706 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-03-25 06:12:57.053721 | orchestrator | Wednesday 25 March 2026 06:12:49 +0000 (0:00:01.176) 1:05:06.917 ******* 2026-03-25 06:12:57.053730 | orchestrator | skipping: [testbed-node-4] 2026-03-25 06:12:57.053739 | orchestrator | 2026-03-25 06:12:57.053748 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-25 06:12:57.053757 | orchestrator | Wednesday 25 March 2026 06:12:51 +0000 (0:00:01.244) 1:05:08.162 ******* 2026-03-25 06:12:57.053766 | orchestrator | 
skipping: [testbed-node-4] 2026-03-25 06:12:57.053775 | orchestrator | 2026-03-25 06:12:57.053785 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-03-25 06:12:57.053793 | orchestrator | Wednesday 25 March 2026 06:12:52 +0000 (0:00:01.213) 1:05:09.375 ******* 2026-03-25 06:12:57.053802 | orchestrator | skipping: [testbed-node-4] 2026-03-25 06:12:57.053811 | orchestrator | 2026-03-25 06:12:57.053820 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-03-25 06:12:57.053829 | orchestrator | Wednesday 25 March 2026 06:12:53 +0000 (0:00:01.133) 1:05:10.509 ******* 2026-03-25 06:12:57.053838 | orchestrator | ok: [testbed-node-4] 2026-03-25 06:12:57.053846 | orchestrator | 2026-03-25 06:12:57.053854 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-03-25 06:12:57.053862 | orchestrator | Wednesday 25 March 2026 06:12:54 +0000 (0:00:01.258) 1:05:11.768 ******* 2026-03-25 06:12:57.053869 | orchestrator | skipping: [testbed-node-4] 2026-03-25 06:12:57.053877 | orchestrator | 2026-03-25 06:12:57.053885 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-03-25 06:12:57.053893 | orchestrator | Wednesday 25 March 2026 06:12:55 +0000 (0:00:01.111) 1:05:12.879 ******* 2026-03-25 06:12:57.053901 | orchestrator | ok: [testbed-node-4] 2026-03-25 06:12:57.053908 | orchestrator | 2026-03-25 06:12:57.053916 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-03-25 06:12:57.053929 | orchestrator | Wednesday 25 March 2026 06:12:57 +0000 (0:00:01.177) 1:05:14.056 ******* 2026-03-25 06:12:59.557419 | orchestrator | skipping: [testbed-node-4] 2026-03-25 06:12:59.557601 | orchestrator | 2026-03-25 06:12:59.557621 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-03-25 06:12:59.557635 
| orchestrator | Wednesday 25 March 2026 06:12:58 +0000 (0:00:01.128) 1:05:15.185 ******* 2026-03-25 06:12:59.557648 | orchestrator | ok: [testbed-node-4] 2026-03-25 06:12:59.557660 | orchestrator | 2026-03-25 06:12:59.557671 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-03-25 06:12:59.557683 | orchestrator | Wednesday 25 March 2026 06:12:59 +0000 (0:00:01.156) 1:05:16.342 ******* 2026-03-25 06:12:59.557696 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-25 06:12:59.557714 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--fa1f2bca--96f4--5f59--9dac--c3efdd146138-osd--block--fa1f2bca--96f4--5f59--9dac--c3efdd146138', 'dm-uuid-LVM-qi80GQE6Tcg1H1Qaou1HQKIw0Y18K2MMiRtObCOmMljlX3NyraHv57elKkc4U5Oq'], 'uuids': ['1a1bfadf-e219-47e2-8705-0963963507ec'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '37f05188', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['iRtObC-OmMl-jlX3-Nyra-Hv57-elKk-c4U5Oq']}})  2026-03-25 06:12:59.557752 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3e1f7d9f-c106-4693-b0da-d762a5de4a11', 'scsi-SQEMU_QEMU_HARDDISK_3e1f7d9f-c106-4693-b0da-d762a5de4a11'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU 
HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '3e1f7d9f', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-03-25 06:12:59.557792 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-CIqKvA-lt1d-4qQz-KNts-krwk-yQ0u-1PHslV', 'scsi-0QEMU_QEMU_HARDDISK_10d736b4-dcf8-42aa-aae6-a1381d72468f', 'scsi-SQEMU_QEMU_HARDDISK_10d736b4-dcf8-42aa-aae6-a1381d72468f'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '10d736b4', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--82366886--ea97--5dba--b5cd--187414e0593f-osd--block--82366886--ea97--5dba--b5cd--187414e0593f']}})  2026-03-25 06:12:59.557806 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-25 06:12:59.557818 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-25 06:12:59.557852 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-25-01-43-06-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-03-25 06:12:59.557867 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 
'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-25 06:12:59.557878 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-X0sqLU-d6id-Xl2r-npkf-AOrM-ye3X-xtdnqp', 'dm-uuid-CRYPT-LUKS2-d0a28742b6dc46aab152442a6244f51b-X0sqLU-d6id-Xl2r-npkf-AOrM-ye3X-xtdnqp'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-03-25 06:12:59.557889 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-25 06:12:59.557917 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--82366886--ea97--5dba--b5cd--187414e0593f-osd--block--82366886--ea97--5dba--b5cd--187414e0593f', 'dm-uuid-LVM-1B6VDGPSmmjj7HLdTGtTln0UtIEd11ZxX0sqLUd6idXl2rnpkfAOrMye3Xxtdnqp'], 'uuids': ['d0a28742-b6dc-46aa-b152-442a6244f51b'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '10d736b4', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['X0sqLU-d6id-Xl2r-npkf-AOrM-ye3X-xtdnqp']}})  2026-03-25 06:12:59.557931 | orchestrator | skipping: 
[testbed-node-4] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-d5kG3K-9osj-2aIh-xjKb-72Hm-d5Wn-f2zH7s', 'scsi-0QEMU_QEMU_HARDDISK_37f05188-2a00-44e2-a0b8-7549f9da5347', 'scsi-SQEMU_QEMU_HARDDISK_37f05188-2a00-44e2-a0b8-7549f9da5347'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '37f05188', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--fa1f2bca--96f4--5f59--9dac--c3efdd146138-osd--block--fa1f2bca--96f4--5f59--9dac--c3efdd146138']}})  2026-03-25 06:12:59.557944 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-25 06:12:59.557972 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6cb51c54-ae34-41ee-aa7a-55f1cdeeb529', 'scsi-SQEMU_QEMU_HARDDISK_6cb51c54-ae34-41ee-aa7a-55f1cdeeb529'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '6cb51c54', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6cb51c54-ae34-41ee-aa7a-55f1cdeeb529-part16', 'scsi-SQEMU_QEMU_HARDDISK_6cb51c54-ae34-41ee-aa7a-55f1cdeeb529-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': 
'227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6cb51c54-ae34-41ee-aa7a-55f1cdeeb529-part14', 'scsi-SQEMU_QEMU_HARDDISK_6cb51c54-ae34-41ee-aa7a-55f1cdeeb529-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6cb51c54-ae34-41ee-aa7a-55f1cdeeb529-part15', 'scsi-SQEMU_QEMU_HARDDISK_6cb51c54-ae34-41ee-aa7a-55f1cdeeb529-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6cb51c54-ae34-41ee-aa7a-55f1cdeeb529-part1', 'scsi-SQEMU_QEMU_HARDDISK_6cb51c54-ae34-41ee-aa7a-55f1cdeeb529-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-03-25 06:13:00.971967 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-25 06:13:00.972121 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-25 06:13:00.972139 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-iRtObC-OmMl-jlX3-Nyra-Hv57-elKk-c4U5Oq', 'dm-uuid-CRYPT-LUKS2-1a1bfadfe21947e287050963963507ec-iRtObC-OmMl-jlX3-Nyra-Hv57-elKk-c4U5Oq'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-03-25 06:13:00.972155 | orchestrator | skipping: [testbed-node-4] 2026-03-25 06:13:00.972168 | orchestrator | 2026-03-25 06:13:00.972181 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-03-25 06:13:00.972193 | orchestrator | Wednesday 25 March 2026 06:13:00 +0000 (0:00:01.426) 1:05:17.768 ******* 2026-03-25 06:13:00.972206 | orchestrator | skipping: [testbed-node-4] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 06:13:00.972219 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--fa1f2bca--96f4--5f59--9dac--c3efdd146138-osd--block--fa1f2bca--96f4--5f59--9dac--c3efdd146138', 'dm-uuid-LVM-qi80GQE6Tcg1H1Qaou1HQKIw0Y18K2MMiRtObCOmMljlX3NyraHv57elKkc4U5Oq'], 'uuids': ['1a1bfadf-e219-47e2-8705-0963963507ec'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '37f05188', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['iRtObC-OmMl-jlX3-Nyra-Hv57-elKk-c4U5Oq']}}, 'ansible_loop_var': 'item'})  2026-03-25 06:13:00.972233 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3e1f7d9f-c106-4693-b0da-d762a5de4a11', 'scsi-SQEMU_QEMU_HARDDISK_3e1f7d9f-c106-4693-b0da-d762a5de4a11'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 
'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '3e1f7d9f', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 06:13:00.972300 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-CIqKvA-lt1d-4qQz-KNts-krwk-yQ0u-1PHslV', 'scsi-0QEMU_QEMU_HARDDISK_10d736b4-dcf8-42aa-aae6-a1381d72468f', 'scsi-SQEMU_QEMU_HARDDISK_10d736b4-dcf8-42aa-aae6-a1381d72468f'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '10d736b4', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--82366886--ea97--5dba--b5cd--187414e0593f-osd--block--82366886--ea97--5dba--b5cd--187414e0593f']}}, 'ansible_loop_var': 'item'})  2026-03-25 06:13:00.972317 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 06:13:00.972328 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 06:13:00.972341 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-25-01-43-06-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 06:13:00.972352 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 06:13:00.972380 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-X0sqLU-d6id-Xl2r-npkf-AOrM-ye3X-xtdnqp', 'dm-uuid-CRYPT-LUKS2-d0a28742b6dc46aab152442a6244f51b-X0sqLU-d6id-Xl2r-npkf-AOrM-ye3X-xtdnqp'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 06:13:06.378119 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 06:13:06.378246 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--82366886--ea97--5dba--b5cd--187414e0593f-osd--block--82366886--ea97--5dba--b5cd--187414e0593f', 'dm-uuid-LVM-1B6VDGPSmmjj7HLdTGtTln0UtIEd11ZxX0sqLUd6idXl2rnpkfAOrMye3Xxtdnqp'], 'uuids': ['d0a28742-b6dc-46aa-b152-442a6244f51b'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '10d736b4', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['X0sqLU-d6id-Xl2r-npkf-AOrM-ye3X-xtdnqp']}}, 'ansible_loop_var': 'item'})  2026-03-25 06:13:06.378262 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-d5kG3K-9osj-2aIh-xjKb-72Hm-d5Wn-f2zH7s', 'scsi-0QEMU_QEMU_HARDDISK_37f05188-2a00-44e2-a0b8-7549f9da5347', 'scsi-SQEMU_QEMU_HARDDISK_37f05188-2a00-44e2-a0b8-7549f9da5347'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '37f05188', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--fa1f2bca--96f4--5f59--9dac--c3efdd146138-osd--block--fa1f2bca--96f4--5f59--9dac--c3efdd146138']}}, 'ansible_loop_var': 'item'})  2026-03-25 06:13:06.378275 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 06:13:06.378332 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6cb51c54-ae34-41ee-aa7a-55f1cdeeb529', 'scsi-SQEMU_QEMU_HARDDISK_6cb51c54-ae34-41ee-aa7a-55f1cdeeb529'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '6cb51c54', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6cb51c54-ae34-41ee-aa7a-55f1cdeeb529-part16', 'scsi-SQEMU_QEMU_HARDDISK_6cb51c54-ae34-41ee-aa7a-55f1cdeeb529-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_6cb51c54-ae34-41ee-aa7a-55f1cdeeb529-part14', 'scsi-SQEMU_QEMU_HARDDISK_6cb51c54-ae34-41ee-aa7a-55f1cdeeb529-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6cb51c54-ae34-41ee-aa7a-55f1cdeeb529-part15', 'scsi-SQEMU_QEMU_HARDDISK_6cb51c54-ae34-41ee-aa7a-55f1cdeeb529-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6cb51c54-ae34-41ee-aa7a-55f1cdeeb529-part1', 'scsi-SQEMU_QEMU_HARDDISK_6cb51c54-ae34-41ee-aa7a-55f1cdeeb529-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 06:13:06.378344 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 06:13:06.378353 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 06:13:06.378362 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-iRtObC-OmMl-jlX3-Nyra-Hv57-elKk-c4U5Oq', 'dm-uuid-CRYPT-LUKS2-1a1bfadfe21947e287050963963507ec-iRtObC-OmMl-jlX3-Nyra-Hv57-elKk-c4U5Oq'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 06:13:06.378380 | orchestrator | skipping: [testbed-node-4] 2026-03-25 06:13:06.378390 | orchestrator | 2026-03-25 06:13:06.378400 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-03-25 06:13:06.378410 | orchestrator | Wednesday 25 March 2026 06:13:02 +0000 (0:00:01.417) 1:05:19.185 ******* 2026-03-25 06:13:06.378417 | orchestrator | ok: [testbed-node-4] 2026-03-25 06:13:06.378427 | orchestrator | 2026-03-25 06:13:06.378434 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-03-25 06:13:06.378442 | orchestrator | Wednesday 25 March 2026 06:13:03 +0000 (0:00:01.480) 1:05:20.666 ******* 2026-03-25 06:13:06.378450 | orchestrator | ok: [testbed-node-4] 2026-03-25 06:13:06.378458 | orchestrator | 2026-03-25 06:13:06.378466 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-25 06:13:06.378475 | orchestrator | Wednesday 25 March 2026 06:13:04 +0000 (0:00:01.130) 1:05:21.797 ******* 2026-03-25 06:13:06.378483 | orchestrator | ok: [testbed-node-4] 2026-03-25 06:13:06.378490 | orchestrator | 2026-03-25 06:13:06.378538 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-25 06:13:06.378554 | orchestrator | Wednesday 25 March 2026 06:13:06 +0000 (0:00:01.590) 1:05:23.387 ******* 2026-03-25 06:13:48.498068 | orchestrator | skipping: [testbed-node-4] 2026-03-25 06:13:48.498179 | orchestrator | 2026-03-25 06:13:48.498195 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-25 06:13:48.498206 | orchestrator | Wednesday 25 March 2026 06:13:07 +0000 (0:00:01.148) 1:05:24.535 ******* 2026-03-25 06:13:48.498215 | orchestrator | skipping: [testbed-node-4] 2026-03-25 
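The long run of `skipping:` items above comes from the `osd_auto_discovery` guard in the ceph-facts role: every block device reported in the host's facts is fed to the task, and each item is rejected because `osd_auto_discovery | default(False) | bool` evaluates false. A minimal sketch of that guard pattern (illustrative only; the conditions shown are assumptions, not the exact task from the role):

```yaml
# Illustrative sketch of the guard seen in the log above (not the
# verbatim ceph-ansible task): loop over all discovered devices and
# build a device list only when osd_auto_discovery is enabled.
- name: Set_fact devices generate device list when osd_auto_discovery
  ansible.builtin.set_fact:
    devices: "{{ devices | default([]) + ['/dev/' + item.key] }}"
  with_dict: "{{ ansible_facts['devices'] }}"
  when:
    - osd_auto_discovery | default(False) | bool   # false here, so every item skips
    - item.value.removable == '0'
    - item.value.partitions | length == 0
    - item.value.holders | length == 0
```

With the flag unset, Ansible still iterates the loop and prints one `skipping:` line per device, which is why each loop/dm/sd entry appears individually in the console output.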
06:13:48.498223 | orchestrator | 2026-03-25 06:13:48.498232 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-25 06:13:48.498255 | orchestrator | Wednesday 25 March 2026 06:13:08 +0000 (0:00:01.254) 1:05:25.791 ******* 2026-03-25 06:13:48.498265 | orchestrator | skipping: [testbed-node-4] 2026-03-25 06:13:48.498273 | orchestrator | 2026-03-25 06:13:48.498282 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-03-25 06:13:48.498291 | orchestrator | Wednesday 25 March 2026 06:13:09 +0000 (0:00:01.137) 1:05:26.928 ******* 2026-03-25 06:13:48.498301 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2026-03-25 06:13:48.498310 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2026-03-25 06:13:48.498319 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-03-25 06:13:48.498327 | orchestrator | 2026-03-25 06:13:48.498336 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-03-25 06:13:48.498345 | orchestrator | Wednesday 25 March 2026 06:13:11 +0000 (0:00:02.065) 1:05:28.993 ******* 2026-03-25 06:13:48.498354 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-03-25 06:13:48.498362 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-03-25 06:13:48.498371 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-03-25 06:13:48.498380 | orchestrator | skipping: [testbed-node-4] 2026-03-25 06:13:48.498388 | orchestrator | 2026-03-25 06:13:48.498397 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-03-25 06:13:48.498438 | orchestrator | Wednesday 25 March 2026 06:13:13 +0000 (0:00:01.157) 1:05:30.151 ******* 2026-03-25 06:13:48.498447 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-4 2026-03-25 06:13:48.498457 | 
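The `Set_fact _monitor_addresses - ipv4` task above runs once per monitor host (testbed-node-0 through testbed-node-2), while its ipv6 twin skips. A hedged sketch of that fact-collection pattern (variable and group names are assumptions for illustration):

```yaml
# Illustrative sketch (assumed names): accumulate one IPv4 address per
# monitor into _monitor_addresses, mirroring the per-item "ok" lines above.
- name: Set_fact _monitor_addresses - ipv4
  ansible.builtin.set_fact:
    _monitor_addresses: >-
      {{ _monitor_addresses | default([])
         + [{'name': item,
             'addr': hostvars[item]['ansible_facts']['default_ipv4']['address']}] }}
  loop: "{{ groups['mons'] }}"
  when: ip_version == 'ipv4'   # the ipv6 variant skips, as in the log
```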
orchestrator | 2026-03-25 06:13:48.498466 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-25 06:13:48.498476 | orchestrator | Wednesday 25 March 2026 06:13:14 +0000 (0:00:01.114) 1:05:31.265 ******* 2026-03-25 06:13:48.498485 | orchestrator | skipping: [testbed-node-4] 2026-03-25 06:13:48.498512 | orchestrator | 2026-03-25 06:13:48.498521 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-03-25 06:13:48.498530 | orchestrator | Wednesday 25 March 2026 06:13:15 +0000 (0:00:01.211) 1:05:32.477 ******* 2026-03-25 06:13:48.498539 | orchestrator | skipping: [testbed-node-4] 2026-03-25 06:13:48.498547 | orchestrator | 2026-03-25 06:13:48.498556 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-03-25 06:13:48.498566 | orchestrator | Wednesday 25 March 2026 06:13:16 +0000 (0:00:01.250) 1:05:33.727 ******* 2026-03-25 06:13:48.498575 | orchestrator | skipping: [testbed-node-4] 2026-03-25 06:13:48.498585 | orchestrator | 2026-03-25 06:13:48.498595 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-25 06:13:48.498605 | orchestrator | Wednesday 25 March 2026 06:13:17 +0000 (0:00:01.167) 1:05:34.895 ******* 2026-03-25 06:13:48.498615 | orchestrator | ok: [testbed-node-4] 2026-03-25 06:13:48.498625 | orchestrator | 2026-03-25 06:13:48.498635 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-25 06:13:48.498645 | orchestrator | Wednesday 25 March 2026 06:13:19 +0000 (0:00:01.263) 1:05:36.158 ******* 2026-03-25 06:13:48.498654 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-03-25 06:13:48.498665 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-03-25 06:13:48.498674 | orchestrator | skipping: [testbed-node-4] 
=> (item=testbed-node-5)  2026-03-25 06:13:48.498684 | orchestrator | skipping: [testbed-node-4] 2026-03-25 06:13:48.498694 | orchestrator | 2026-03-25 06:13:48.498703 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-25 06:13:48.498713 | orchestrator | Wednesday 25 March 2026 06:13:20 +0000 (0:00:01.444) 1:05:37.603 ******* 2026-03-25 06:13:48.498723 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-03-25 06:13:48.498732 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-03-25 06:13:48.498742 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-03-25 06:13:48.498752 | orchestrator | skipping: [testbed-node-4] 2026-03-25 06:13:48.498762 | orchestrator | 2026-03-25 06:13:48.498771 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-25 06:13:48.498781 | orchestrator | Wednesday 25 March 2026 06:13:22 +0000 (0:00:01.424) 1:05:39.028 ******* 2026-03-25 06:13:48.498791 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-03-25 06:13:48.498800 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-03-25 06:13:48.498810 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-03-25 06:13:48.498820 | orchestrator | skipping: [testbed-node-4] 2026-03-25 06:13:48.498830 | orchestrator | 2026-03-25 06:13:48.498839 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-25 06:13:48.498849 | orchestrator | Wednesday 25 March 2026 06:13:23 +0000 (0:00:01.439) 1:05:40.467 ******* 2026-03-25 06:13:48.498859 | orchestrator | ok: [testbed-node-4] 2026-03-25 06:13:48.498868 | orchestrator | 2026-03-25 06:13:48.498878 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-25 06:13:48.498888 | orchestrator | Wednesday 25 March 2026 06:13:24 +0000 
(0:00:01.194) 1:05:41.662 ******* 2026-03-25 06:13:48.498898 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-03-25 06:13:48.498907 | orchestrator | 2026-03-25 06:13:48.498917 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-03-25 06:13:48.498926 | orchestrator | Wednesday 25 March 2026 06:13:25 +0000 (0:00:01.354) 1:05:43.016 ******* 2026-03-25 06:13:48.498953 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-25 06:13:48.498962 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-25 06:13:48.498971 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-25 06:13:48.498980 | orchestrator | ok: [testbed-node-4 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-03-25 06:13:48.499000 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-4) 2026-03-25 06:13:48.499009 | orchestrator | ok: [testbed-node-4 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-25 06:13:48.499018 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-25 06:13:48.499026 | orchestrator | 2026-03-25 06:13:48.499035 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-03-25 06:13:48.499044 | orchestrator | Wednesday 25 March 2026 06:13:28 +0000 (0:00:02.219) 1:05:45.236 ******* 2026-03-25 06:13:48.499052 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-25 06:13:48.499061 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-25 06:13:48.499069 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-25 06:13:48.499078 | orchestrator | ok: [testbed-node-4 -> testbed-node-3(192.168.16.13)] => 
(item=testbed-node-3) 2026-03-25 06:13:48.499086 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-4) 2026-03-25 06:13:48.499095 | orchestrator | ok: [testbed-node-4 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-25 06:13:48.499103 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-25 06:13:48.499111 | orchestrator | 2026-03-25 06:13:48.499120 | orchestrator | TASK [Stop ceph rgw when upgrading from stable-3.2] **************************** 2026-03-25 06:13:48.499128 | orchestrator | Wednesday 25 March 2026 06:13:30 +0000 (0:00:02.471) 1:05:47.707 ******* 2026-03-25 06:13:48.499137 | orchestrator | changed: [testbed-node-4] 2026-03-25 06:13:48.499146 | orchestrator | 2026-03-25 06:13:48.499154 | orchestrator | TASK [Stop ceph rgw (pt. 1)] *************************************************** 2026-03-25 06:13:48.499163 | orchestrator | Wednesday 25 March 2026 06:13:32 +0000 (0:00:01.960) 1:05:49.667 ******* 2026-03-25 06:13:48.499171 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-03-25 06:13:48.499180 | orchestrator | 2026-03-25 06:13:48.499188 | orchestrator | TASK [Stop ceph rgw (pt. 
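The two `Stop ceph rgw` tasks iterate over `rgw_instances` entries such as `{'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}`, stopping one radosgw service per instance before the upgrade proceeds. A hedged sketch of such a per-instance stop (the systemd unit naming scheme is an assumption, not taken from the playbook):

```yaml
# Hedged sketch: stop one radosgw systemd unit per rgw instance.
# The "ceph-radosgw@rgw.<host>.<instance>" unit name is illustrative.
- name: Stop ceph rgw
  ansible.builtin.systemd:
    name: "ceph-radosgw@rgw.{{ ansible_facts['hostname'] }}.{{ item.instance_name }}"
    state: stopped
  loop: "{{ rgw_instances }}"
```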
2)] *************************************************** 2026-03-25 06:13:48.499197 | orchestrator | Wednesday 25 March 2026 06:13:35 +0000 (0:00:02.386) 1:05:52.054 ******* 2026-03-25 06:13:48.499205 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-03-25 06:13:48.499214 | orchestrator | 2026-03-25 06:13:48.499223 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-25 06:13:48.499231 | orchestrator | Wednesday 25 March 2026 06:13:37 +0000 (0:00:01.978) 1:05:54.032 ******* 2026-03-25 06:13:48.499240 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-4 2026-03-25 06:13:48.499248 | orchestrator | 2026-03-25 06:13:48.499257 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-25 06:13:48.499265 | orchestrator | Wednesday 25 March 2026 06:13:38 +0000 (0:00:01.107) 1:05:55.140 ******* 2026-03-25 06:13:48.499274 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-4 2026-03-25 06:13:48.499282 | orchestrator | 2026-03-25 06:13:48.499291 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-25 06:13:48.499299 | orchestrator | Wednesday 25 March 2026 06:13:39 +0000 (0:00:01.111) 1:05:56.251 ******* 2026-03-25 06:13:48.499308 | orchestrator | skipping: [testbed-node-4] 2026-03-25 06:13:48.499316 | orchestrator | 2026-03-25 06:13:48.499325 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-25 06:13:48.499333 | orchestrator | Wednesday 25 March 2026 06:13:40 +0000 (0:00:01.156) 1:05:57.407 ******* 2026-03-25 06:13:48.499342 | orchestrator | ok: [testbed-node-4] 2026-03-25 06:13:48.499350 | orchestrator | 2026-03-25 06:13:48.499359 | orchestrator | TASK 
[ceph-handler : Check for a mds container] ******************************** 2026-03-25 06:13:48.499367 | orchestrator | Wednesday 25 March 2026 06:13:41 +0000 (0:00:01.482) 1:05:58.890 ******* 2026-03-25 06:13:48.499382 | orchestrator | ok: [testbed-node-4] 2026-03-25 06:13:48.499391 | orchestrator | 2026-03-25 06:13:48.499399 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-25 06:13:48.499428 | orchestrator | Wednesday 25 March 2026 06:13:43 +0000 (0:00:01.556) 1:06:00.447 ******* 2026-03-25 06:13:48.499437 | orchestrator | ok: [testbed-node-4] 2026-03-25 06:13:48.499445 | orchestrator | 2026-03-25 06:13:48.499454 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-25 06:13:48.499463 | orchestrator | Wednesday 25 March 2026 06:13:45 +0000 (0:00:01.588) 1:06:02.036 ******* 2026-03-25 06:13:48.499471 | orchestrator | skipping: [testbed-node-4] 2026-03-25 06:13:48.499480 | orchestrator | 2026-03-25 06:13:48.499488 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-25 06:13:48.499497 | orchestrator | Wednesday 25 March 2026 06:13:46 +0000 (0:00:01.158) 1:06:03.194 ******* 2026-03-25 06:13:48.499505 | orchestrator | skipping: [testbed-node-4] 2026-03-25 06:13:48.499514 | orchestrator | 2026-03-25 06:13:48.499522 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-25 06:13:48.499531 | orchestrator | Wednesday 25 March 2026 06:13:47 +0000 (0:00:01.170) 1:06:04.365 ******* 2026-03-25 06:13:48.499539 | orchestrator | skipping: [testbed-node-4] 2026-03-25 06:13:48.499548 | orchestrator | 2026-03-25 06:13:48.499556 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-25 06:13:48.499571 | orchestrator | Wednesday 25 March 2026 06:13:48 +0000 (0:00:01.133) 1:06:05.498 ******* 2026-03-25 06:14:29.152966 | 
orchestrator | ok: [testbed-node-4] 2026-03-25 06:14:29.153098 | orchestrator | 2026-03-25 06:14:29.153123 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-25 06:14:29.153141 | orchestrator | Wednesday 25 March 2026 06:13:50 +0000 (0:00:01.572) 1:06:07.071 ******* 2026-03-25 06:14:29.153156 | orchestrator | ok: [testbed-node-4] 2026-03-25 06:14:29.153171 | orchestrator | 2026-03-25 06:14:29.153205 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-25 06:14:29.153221 | orchestrator | Wednesday 25 March 2026 06:13:51 +0000 (0:00:01.637) 1:06:08.708 ******* 2026-03-25 06:14:29.153237 | orchestrator | skipping: [testbed-node-4] 2026-03-25 06:14:29.153254 | orchestrator | 2026-03-25 06:14:29.153271 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-25 06:14:29.153293 | orchestrator | Wednesday 25 March 2026 06:13:52 +0000 (0:00:00.801) 1:06:09.509 ******* 2026-03-25 06:14:29.153309 | orchestrator | skipping: [testbed-node-4] 2026-03-25 06:14:29.153447 | orchestrator | 2026-03-25 06:14:29.153465 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-25 06:14:29.153482 | orchestrator | Wednesday 25 March 2026 06:13:53 +0000 (0:00:00.803) 1:06:10.313 ******* 2026-03-25 06:14:29.153498 | orchestrator | ok: [testbed-node-4] 2026-03-25 06:14:29.153513 | orchestrator | 2026-03-25 06:14:29.153534 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-25 06:14:29.153550 | orchestrator | Wednesday 25 March 2026 06:13:54 +0000 (0:00:00.800) 1:06:11.113 ******* 2026-03-25 06:14:29.153567 | orchestrator | ok: [testbed-node-4] 2026-03-25 06:14:29.153583 | orchestrator | 2026-03-25 06:14:29.153598 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-25 06:14:29.153612 
| orchestrator | Wednesday 25 March 2026 06:13:54 +0000 (0:00:00.820) 1:06:11.934 ******* 2026-03-25 06:14:29.153622 | orchestrator | ok: [testbed-node-4] 2026-03-25 06:14:29.153630 | orchestrator | 2026-03-25 06:14:29.153639 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-25 06:14:29.153648 | orchestrator | Wednesday 25 March 2026 06:13:55 +0000 (0:00:00.859) 1:06:12.793 ******* 2026-03-25 06:14:29.153656 | orchestrator | skipping: [testbed-node-4] 2026-03-25 06:14:29.153665 | orchestrator | 2026-03-25 06:14:29.153673 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-25 06:14:29.153682 | orchestrator | Wednesday 25 March 2026 06:13:56 +0000 (0:00:00.784) 1:06:13.578 ******* 2026-03-25 06:14:29.153734 | orchestrator | skipping: [testbed-node-4] 2026-03-25 06:14:29.153744 | orchestrator | 2026-03-25 06:14:29.153754 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-25 06:14:29.153769 | orchestrator | Wednesday 25 March 2026 06:13:57 +0000 (0:00:00.769) 1:06:14.348 ******* 2026-03-25 06:14:29.153784 | orchestrator | skipping: [testbed-node-4] 2026-03-25 06:14:29.153799 | orchestrator | 2026-03-25 06:14:29.153814 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-25 06:14:29.153831 | orchestrator | Wednesday 25 March 2026 06:13:58 +0000 (0:00:00.771) 1:06:15.120 ******* 2026-03-25 06:14:29.153847 | orchestrator | ok: [testbed-node-4] 2026-03-25 06:14:29.153862 | orchestrator | 2026-03-25 06:14:29.153871 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-25 06:14:29.153880 | orchestrator | Wednesday 25 March 2026 06:13:58 +0000 (0:00:00.798) 1:06:15.918 ******* 2026-03-25 06:14:29.153889 | orchestrator | ok: [testbed-node-4] 2026-03-25 06:14:29.153897 | orchestrator | 2026-03-25 06:14:29.153906 
| orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-03-25 06:14:29.153915 | orchestrator | Wednesday 25 March 2026 06:13:59 +0000 (0:00:00.831) 1:06:16.750 ******* 2026-03-25 06:14:29.153923 | orchestrator | skipping: [testbed-node-4] 2026-03-25 06:14:29.153932 | orchestrator | 2026-03-25 06:14:29.153940 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-03-25 06:14:29.153949 | orchestrator | Wednesday 25 March 2026 06:14:00 +0000 (0:00:00.840) 1:06:17.590 ******* 2026-03-25 06:14:29.153957 | orchestrator | skipping: [testbed-node-4] 2026-03-25 06:14:29.153966 | orchestrator | 2026-03-25 06:14:29.153975 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-03-25 06:14:29.153983 | orchestrator | Wednesday 25 March 2026 06:14:01 +0000 (0:00:00.750) 1:06:18.341 ******* 2026-03-25 06:14:29.153992 | orchestrator | skipping: [testbed-node-4] 2026-03-25 06:14:29.154000 | orchestrator | 2026-03-25 06:14:29.154009 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-03-25 06:14:29.154083 | orchestrator | Wednesday 25 March 2026 06:14:02 +0000 (0:00:00.856) 1:06:19.197 ******* 2026-03-25 06:14:29.154093 | orchestrator | skipping: [testbed-node-4] 2026-03-25 06:14:29.154106 | orchestrator | 2026-03-25 06:14:29.154120 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-03-25 06:14:29.154135 | orchestrator | Wednesday 25 March 2026 06:14:02 +0000 (0:00:00.795) 1:06:19.993 ******* 2026-03-25 06:14:29.154147 | orchestrator | skipping: [testbed-node-4] 2026-03-25 06:14:29.154161 | orchestrator | 2026-03-25 06:14:29.154186 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-03-25 06:14:29.154200 | orchestrator | Wednesday 25 March 2026 06:14:03 +0000 (0:00:00.770) 1:06:20.764 ******* 
2026-03-25 06:14:29.154213 | orchestrator | skipping: [testbed-node-4] 2026-03-25 06:14:29.154226 | orchestrator | 2026-03-25 06:14:29.154240 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-03-25 06:14:29.154253 | orchestrator | Wednesday 25 March 2026 06:14:04 +0000 (0:00:00.810) 1:06:21.574 ******* 2026-03-25 06:14:29.154267 | orchestrator | skipping: [testbed-node-4] 2026-03-25 06:14:29.154281 | orchestrator | 2026-03-25 06:14:29.154296 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-03-25 06:14:29.154312 | orchestrator | Wednesday 25 March 2026 06:14:05 +0000 (0:00:00.862) 1:06:22.436 ******* 2026-03-25 06:14:29.154350 | orchestrator | skipping: [testbed-node-4] 2026-03-25 06:14:29.154365 | orchestrator | 2026-03-25 06:14:29.154380 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-03-25 06:14:29.154395 | orchestrator | Wednesday 25 March 2026 06:14:06 +0000 (0:00:00.750) 1:06:23.187 ******* 2026-03-25 06:14:29.154410 | orchestrator | skipping: [testbed-node-4] 2026-03-25 06:14:29.154425 | orchestrator | 2026-03-25 06:14:29.154465 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-03-25 06:14:29.154482 | orchestrator | Wednesday 25 March 2026 06:14:06 +0000 (0:00:00.781) 1:06:23.969 ******* 2026-03-25 06:14:29.154514 | orchestrator | skipping: [testbed-node-4] 2026-03-25 06:14:29.154529 | orchestrator | 2026-03-25 06:14:29.154544 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-03-25 06:14:29.154569 | orchestrator | Wednesday 25 March 2026 06:14:07 +0000 (0:00:00.780) 1:06:24.749 ******* 2026-03-25 06:14:29.154584 | orchestrator | skipping: [testbed-node-4] 2026-03-25 06:14:29.154599 | orchestrator | 2026-03-25 06:14:29.154613 | orchestrator | TASK [ceph-common : Include selinux.yml] 
*************************************** 2026-03-25 06:14:29.154628 | orchestrator | Wednesday 25 March 2026 06:14:08 +0000 (0:00:00.786) 1:06:25.536 ******* 2026-03-25 06:14:29.154642 | orchestrator | skipping: [testbed-node-4] 2026-03-25 06:14:29.154656 | orchestrator | 2026-03-25 06:14:29.154670 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-03-25 06:14:29.154684 | orchestrator | Wednesday 25 March 2026 06:14:09 +0000 (0:00:00.763) 1:06:26.299 ******* 2026-03-25 06:14:29.154698 | orchestrator | ok: [testbed-node-4] 2026-03-25 06:14:29.154711 | orchestrator | 2026-03-25 06:14:29.154725 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-03-25 06:14:29.154739 | orchestrator | Wednesday 25 March 2026 06:14:10 +0000 (0:00:01.585) 1:06:27.884 ******* 2026-03-25 06:14:29.154753 | orchestrator | ok: [testbed-node-4] 2026-03-25 06:14:29.154768 | orchestrator | 2026-03-25 06:14:29.154781 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-03-25 06:14:29.154796 | orchestrator | Wednesday 25 March 2026 06:14:12 +0000 (0:00:01.967) 1:06:29.852 ******* 2026-03-25 06:14:29.154810 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-4 2026-03-25 06:14:29.154825 | orchestrator | 2026-03-25 06:14:29.154840 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-03-25 06:14:29.154854 | orchestrator | Wednesday 25 March 2026 06:14:14 +0000 (0:00:01.237) 1:06:31.090 ******* 2026-03-25 06:14:29.154869 | orchestrator | skipping: [testbed-node-4] 2026-03-25 06:14:29.154884 | orchestrator | 2026-03-25 06:14:29.154898 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-03-25 06:14:29.154907 | orchestrator | Wednesday 25 March 2026 06:14:15 +0000 (0:00:01.239) 1:06:32.329 ******* 
2026-03-25 06:14:29.154915 | orchestrator | skipping: [testbed-node-4] 2026-03-25 06:14:29.154924 | orchestrator | 2026-03-25 06:14:29.154932 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-03-25 06:14:29.154941 | orchestrator | Wednesday 25 March 2026 06:14:16 +0000 (0:00:01.196) 1:06:33.525 ******* 2026-03-25 06:14:29.154949 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-03-25 06:14:29.154958 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-03-25 06:14:29.154966 | orchestrator | 2026-03-25 06:14:29.154975 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-03-25 06:14:29.154983 | orchestrator | Wednesday 25 March 2026 06:14:18 +0000 (0:00:01.886) 1:06:35.412 ******* 2026-03-25 06:14:29.154992 | orchestrator | ok: [testbed-node-4] 2026-03-25 06:14:29.155000 | orchestrator | 2026-03-25 06:14:29.155008 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-03-25 06:14:29.155017 | orchestrator | Wednesday 25 March 2026 06:14:19 +0000 (0:00:01.489) 1:06:36.901 ******* 2026-03-25 06:14:29.155025 | orchestrator | skipping: [testbed-node-4] 2026-03-25 06:14:29.155034 | orchestrator | 2026-03-25 06:14:29.155042 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-03-25 06:14:29.155051 | orchestrator | Wednesday 25 March 2026 06:14:21 +0000 (0:00:01.178) 1:06:38.080 ******* 2026-03-25 06:14:29.155059 | orchestrator | skipping: [testbed-node-4] 2026-03-25 06:14:29.155068 | orchestrator | 2026-03-25 06:14:29.155076 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-03-25 06:14:29.155085 | orchestrator | Wednesday 25 March 2026 06:14:21 +0000 (0:00:00.862) 1:06:38.943 ******* 2026-03-25 06:14:29.155103 | orchestrator | 
skipping: [testbed-node-4] 2026-03-25 06:14:29.155111 | orchestrator | 2026-03-25 06:14:29.155120 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-03-25 06:14:29.155128 | orchestrator | Wednesday 25 March 2026 06:14:22 +0000 (0:00:00.769) 1:06:39.713 ******* 2026-03-25 06:14:29.155137 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-4 2026-03-25 06:14:29.155145 | orchestrator | 2026-03-25 06:14:29.155154 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-03-25 06:14:29.155162 | orchestrator | Wednesday 25 March 2026 06:14:23 +0000 (0:00:01.133) 1:06:40.846 ******* 2026-03-25 06:14:29.155171 | orchestrator | ok: [testbed-node-4] 2026-03-25 06:14:29.155179 | orchestrator | 2026-03-25 06:14:29.155188 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-03-25 06:14:29.155196 | orchestrator | Wednesday 25 March 2026 06:14:25 +0000 (0:00:01.746) 1:06:42.592 ******* 2026-03-25 06:14:29.155205 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-25 06:14:29.155214 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-25 06:14:29.155222 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2026-03-25 06:14:29.155231 | orchestrator | skipping: [testbed-node-4] 2026-03-25 06:14:29.155239 | orchestrator | 2026-03-25 06:14:29.155248 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-03-25 06:14:29.155257 | orchestrator | Wednesday 25 March 2026 06:14:26 +0000 (0:00:01.155) 1:06:43.748 ******* 2026-03-25 06:14:29.155265 | orchestrator | skipping: [testbed-node-4] 2026-03-25 06:14:29.155273 | orchestrator | 2026-03-25 06:14:29.155282 | orchestrator | TASK [ceph-container-common : Export local 
ceph dev image] ********************* 2026-03-25 06:14:29.155290 | orchestrator | Wednesday 25 March 2026 06:14:27 +0000 (0:00:01.255) 1:06:45.004 ******* 2026-03-25 06:14:29.155299 | orchestrator | skipping: [testbed-node-4] 2026-03-25 06:14:29.155308 | orchestrator | 2026-03-25 06:14:29.155353 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-03-25 06:15:11.968195 | orchestrator | Wednesday 25 March 2026 06:14:29 +0000 (0:00:01.153) 1:06:46.157 ******* 2026-03-25 06:15:11.968401 | orchestrator | skipping: [testbed-node-4] 2026-03-25 06:15:11.968423 | orchestrator | 2026-03-25 06:15:11.968436 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-03-25 06:15:11.968448 | orchestrator | Wednesday 25 March 2026 06:14:30 +0000 (0:00:01.151) 1:06:47.309 ******* 2026-03-25 06:15:11.968459 | orchestrator | skipping: [testbed-node-4] 2026-03-25 06:15:11.968470 | orchestrator | 2026-03-25 06:15:11.968481 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-03-25 06:15:11.968492 | orchestrator | Wednesday 25 March 2026 06:14:31 +0000 (0:00:01.141) 1:06:48.451 ******* 2026-03-25 06:15:11.968504 | orchestrator | skipping: [testbed-node-4] 2026-03-25 06:15:11.968514 | orchestrator | 2026-03-25 06:15:11.968525 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-03-25 06:15:11.968536 | orchestrator | Wednesday 25 March 2026 06:14:32 +0000 (0:00:00.794) 1:06:49.246 ******* 2026-03-25 06:15:11.968547 | orchestrator | ok: [testbed-node-4] 2026-03-25 06:15:11.968559 | orchestrator | 2026-03-25 06:15:11.968693 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-03-25 06:15:11.968717 | orchestrator | Wednesday 25 March 2026 06:14:34 +0000 (0:00:02.113) 1:06:51.359 ******* 2026-03-25 06:15:11.968730 | orchestrator | ok: 
[testbed-node-4] 2026-03-25 06:15:11.968742 | orchestrator | 2026-03-25 06:15:11.968755 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-03-25 06:15:11.968768 | orchestrator | Wednesday 25 March 2026 06:14:35 +0000 (0:00:00.794) 1:06:52.154 ******* 2026-03-25 06:15:11.968781 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-4 2026-03-25 06:15:11.968793 | orchestrator | 2026-03-25 06:15:11.968805 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-03-25 06:15:11.968842 | orchestrator | Wednesday 25 March 2026 06:14:36 +0000 (0:00:01.140) 1:06:53.295 ******* 2026-03-25 06:15:11.968855 | orchestrator | skipping: [testbed-node-4] 2026-03-25 06:15:11.968868 | orchestrator | 2026-03-25 06:15:11.968880 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-03-25 06:15:11.968892 | orchestrator | Wednesday 25 March 2026 06:14:37 +0000 (0:00:01.141) 1:06:54.437 ******* 2026-03-25 06:15:11.968904 | orchestrator | skipping: [testbed-node-4] 2026-03-25 06:15:11.968916 | orchestrator | 2026-03-25 06:15:11.968926 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-03-25 06:15:11.968937 | orchestrator | Wednesday 25 March 2026 06:14:38 +0000 (0:00:01.193) 1:06:55.631 ******* 2026-03-25 06:15:11.968948 | orchestrator | skipping: [testbed-node-4] 2026-03-25 06:15:11.968958 | orchestrator | 2026-03-25 06:15:11.968969 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-03-25 06:15:11.968980 | orchestrator | Wednesday 25 March 2026 06:14:39 +0000 (0:00:01.192) 1:06:56.824 ******* 2026-03-25 06:15:11.968990 | orchestrator | skipping: [testbed-node-4] 2026-03-25 06:15:11.969001 | orchestrator | 2026-03-25 06:15:11.969012 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release 
nautilus] ****************** 2026-03-25 06:15:11.969022 | orchestrator | Wednesday 25 March 2026 06:14:40 +0000 (0:00:01.157) 1:06:57.981 ******* 2026-03-25 06:15:11.969033 | orchestrator | skipping: [testbed-node-4] 2026-03-25 06:15:11.969043 | orchestrator | 2026-03-25 06:15:11.969054 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-03-25 06:15:11.969064 | orchestrator | Wednesday 25 March 2026 06:14:42 +0000 (0:00:01.215) 1:06:59.196 ******* 2026-03-25 06:15:11.969075 | orchestrator | skipping: [testbed-node-4] 2026-03-25 06:15:11.969086 | orchestrator | 2026-03-25 06:15:11.969096 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-03-25 06:15:11.969107 | orchestrator | Wednesday 25 March 2026 06:14:43 +0000 (0:00:01.190) 1:07:00.386 ******* 2026-03-25 06:15:11.969118 | orchestrator | skipping: [testbed-node-4] 2026-03-25 06:15:11.969128 | orchestrator | 2026-03-25 06:15:11.969139 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-03-25 06:15:11.969150 | orchestrator | Wednesday 25 March 2026 06:14:44 +0000 (0:00:01.160) 1:07:01.547 ******* 2026-03-25 06:15:11.969160 | orchestrator | skipping: [testbed-node-4] 2026-03-25 06:15:11.969171 | orchestrator | 2026-03-25 06:15:11.969199 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-03-25 06:15:11.969211 | orchestrator | Wednesday 25 March 2026 06:14:45 +0000 (0:00:01.171) 1:07:02.719 ******* 2026-03-25 06:15:11.969264 | orchestrator | ok: [testbed-node-4] 2026-03-25 06:15:11.969275 | orchestrator | 2026-03-25 06:15:11.969286 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-03-25 06:15:11.969297 | orchestrator | Wednesday 25 March 2026 06:14:46 +0000 (0:00:00.818) 1:07:03.538 ******* 2026-03-25 06:15:11.969308 | orchestrator | included: 
/ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-4 2026-03-25 06:15:11.969320 | orchestrator | 2026-03-25 06:15:11.969330 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-03-25 06:15:11.969341 | orchestrator | Wednesday 25 March 2026 06:14:47 +0000 (0:00:01.127) 1:07:04.665 ******* 2026-03-25 06:15:11.969352 | orchestrator | ok: [testbed-node-4] => (item=/etc/ceph) 2026-03-25 06:15:11.969363 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/) 2026-03-25 06:15:11.969374 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/mon) 2026-03-25 06:15:11.969384 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd) 2026-03-25 06:15:11.969395 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/mds) 2026-03-25 06:15:11.969406 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2026-03-25 06:15:11.969416 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/crash) 2026-03-25 06:15:11.969427 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 2026-03-25 06:15:11.969447 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2026-03-25 06:15:11.969458 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2026-03-25 06:15:11.969468 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2026-03-25 06:15:11.969502 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2026-03-25 06:15:11.969513 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2026-03-25 06:15:11.969524 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-03-25 06:15:11.969540 | orchestrator | ok: [testbed-node-4] => (item=/var/run/ceph) 2026-03-25 06:15:11.969551 | orchestrator | ok: [testbed-node-4] => (item=/var/log/ceph) 2026-03-25 06:15:11.969562 | orchestrator | 2026-03-25 06:15:11.969573 | orchestrator | 
TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-03-25 06:15:11.969584 | orchestrator | Wednesday 25 March 2026 06:14:53 +0000 (0:00:06.154) 1:07:10.820 ******* 2026-03-25 06:15:11.969594 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-4 2026-03-25 06:15:11.969605 | orchestrator | 2026-03-25 06:15:11.969616 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] ***************** 2026-03-25 06:15:11.969626 | orchestrator | Wednesday 25 March 2026 06:14:54 +0000 (0:00:01.118) 1:07:11.938 ******* 2026-03-25 06:15:11.969637 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-03-25 06:15:11.969650 | orchestrator | 2026-03-25 06:15:11.969661 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2026-03-25 06:15:11.969671 | orchestrator | Wednesday 25 March 2026 06:14:56 +0000 (0:00:01.630) 1:07:13.569 ******* 2026-03-25 06:15:11.969682 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-03-25 06:15:11.969693 | orchestrator | 2026-03-25 06:15:11.969703 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-03-25 06:15:11.969714 | orchestrator | Wednesday 25 March 2026 06:14:58 +0000 (0:00:01.612) 1:07:15.181 ******* 2026-03-25 06:15:11.969725 | orchestrator | skipping: [testbed-node-4] 2026-03-25 06:15:11.969735 | orchestrator | 2026-03-25 06:15:11.969746 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-03-25 06:15:11.969757 | orchestrator | Wednesday 25 March 2026 06:14:59 +0000 (0:00:00.934) 1:07:16.116 ******* 2026-03-25 06:15:11.969767 | orchestrator | skipping: [testbed-node-4] 2026-03-25 06:15:11.969778 | 
orchestrator | 2026-03-25 06:15:11.969788 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-03-25 06:15:11.969799 | orchestrator | Wednesday 25 March 2026 06:14:59 +0000 (0:00:00.816) 1:07:16.933 ******* 2026-03-25 06:15:11.969810 | orchestrator | skipping: [testbed-node-4] 2026-03-25 06:15:11.969821 | orchestrator | 2026-03-25 06:15:11.969831 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-03-25 06:15:11.969842 | orchestrator | Wednesday 25 March 2026 06:15:00 +0000 (0:00:00.799) 1:07:17.733 ******* 2026-03-25 06:15:11.969853 | orchestrator | skipping: [testbed-node-4] 2026-03-25 06:15:11.969863 | orchestrator | 2026-03-25 06:15:11.969874 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-03-25 06:15:11.969885 | orchestrator | Wednesday 25 March 2026 06:15:01 +0000 (0:00:00.820) 1:07:18.554 ******* 2026-03-25 06:15:11.969895 | orchestrator | skipping: [testbed-node-4] 2026-03-25 06:15:11.969906 | orchestrator | 2026-03-25 06:15:11.969916 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-03-25 06:15:11.969927 | orchestrator | Wednesday 25 March 2026 06:15:02 +0000 (0:00:00.801) 1:07:19.355 ******* 2026-03-25 06:15:11.969938 | orchestrator | skipping: [testbed-node-4] 2026-03-25 06:15:11.969949 | orchestrator | 2026-03-25 06:15:11.969959 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-03-25 06:15:11.969970 | orchestrator | Wednesday 25 March 2026 06:15:03 +0000 (0:00:00.781) 1:07:20.136 ******* 2026-03-25 06:15:11.969986 | orchestrator | skipping: [testbed-node-4] 2026-03-25 06:15:11.969997 | orchestrator | 2026-03-25 06:15:11.970007 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] 
*** 2026-03-25 06:15:11.970111 | orchestrator | Wednesday 25 March 2026 06:15:03 +0000 (0:00:00.803) 1:07:20.940 ******* 2026-03-25 06:15:11.970123 | orchestrator | skipping: [testbed-node-4] 2026-03-25 06:15:11.970133 | orchestrator | 2026-03-25 06:15:11.970144 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-03-25 06:15:11.970155 | orchestrator | Wednesday 25 March 2026 06:15:04 +0000 (0:00:00.778) 1:07:21.718 ******* 2026-03-25 06:15:11.970165 | orchestrator | skipping: [testbed-node-4] 2026-03-25 06:15:11.970176 | orchestrator | 2026-03-25 06:15:11.970187 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-03-25 06:15:11.970198 | orchestrator | Wednesday 25 March 2026 06:15:05 +0000 (0:00:00.861) 1:07:22.580 ******* 2026-03-25 06:15:11.970208 | orchestrator | skipping: [testbed-node-4] 2026-03-25 06:15:11.970219 | orchestrator | 2026-03-25 06:15:11.970264 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-03-25 06:15:11.970288 | orchestrator | Wednesday 25 March 2026 06:15:06 +0000 (0:00:00.758) 1:07:23.339 ******* 2026-03-25 06:15:11.970314 | orchestrator | skipping: [testbed-node-4] 2026-03-25 06:15:11.970332 | orchestrator | 2026-03-25 06:15:11.970349 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-03-25 06:15:11.970367 | orchestrator | Wednesday 25 March 2026 06:15:07 +0000 (0:00:00.773) 1:07:24.113 ******* 2026-03-25 06:15:11.970386 | orchestrator | changed: [testbed-node-4 -> testbed-node-2(192.168.16.12)] 2026-03-25 06:15:11.970404 | orchestrator | 2026-03-25 06:15:11.970423 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-03-25 06:15:11.970441 | orchestrator | Wednesday 25 March 2026 06:15:11 +0000 (0:00:04.033) 1:07:28.147 ******* 2026-03-25 06:15:11.970459 | orchestrator | 
ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-03-25 06:15:11.970477 | orchestrator | 2026-03-25 06:15:11.970506 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-03-25 06:15:53.577723 | orchestrator | Wednesday 25 March 2026 06:15:11 +0000 (0:00:00.826) 1:07:28.974 ******* 2026-03-25 06:15:53.577843 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}]) 2026-03-25 06:15:53.577856 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}]) 2026-03-25 06:15:53.577863 | orchestrator | 2026-03-25 06:15:53.577870 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-03-25 06:15:53.577876 | orchestrator | Wednesday 25 March 2026 06:15:17 +0000 (0:00:05.085) 1:07:34.059 ******* 2026-03-25 06:15:53.577881 | orchestrator | skipping: [testbed-node-4] 2026-03-25 06:15:53.577888 | orchestrator | 2026-03-25 06:15:53.577894 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-03-25 06:15:53.577899 | orchestrator | Wednesday 25 March 2026 06:15:18 +0000 (0:00:00.959) 1:07:35.018 ******* 2026-03-25 06:15:53.577905 | orchestrator | skipping: [testbed-node-4] 2026-03-25 06:15:53.577910 | orchestrator | 2026-03-25 06:15:53.577916 | orchestrator | TASK [ceph-facts : Set current 
radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-25 06:15:53.577923 | orchestrator | Wednesday 25 March 2026 06:15:18 +0000 (0:00:00.847) 1:07:35.866 ******* 2026-03-25 06:15:53.577948 | orchestrator | skipping: [testbed-node-4] 2026-03-25 06:15:53.577958 | orchestrator | 2026-03-25 06:15:53.577968 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-03-25 06:15:53.577977 | orchestrator | Wednesday 25 March 2026 06:15:19 +0000 (0:00:00.812) 1:07:36.678 ******* 2026-03-25 06:15:53.577987 | orchestrator | skipping: [testbed-node-4] 2026-03-25 06:15:53.577996 | orchestrator | 2026-03-25 06:15:53.578006 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-03-25 06:15:53.578064 | orchestrator | Wednesday 25 March 2026 06:15:20 +0000 (0:00:00.827) 1:07:37.506 ******* 2026-03-25 06:15:53.578076 | orchestrator | skipping: [testbed-node-4] 2026-03-25 06:15:53.578086 | orchestrator | 2026-03-25 06:15:53.578096 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-25 06:15:53.578103 | orchestrator | Wednesday 25 March 2026 06:15:21 +0000 (0:00:00.775) 1:07:38.281 ******* 2026-03-25 06:15:53.578108 | orchestrator | ok: [testbed-node-4] 2026-03-25 06:15:53.578115 | orchestrator | 2026-03-25 06:15:53.578120 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-25 06:15:53.578126 | orchestrator | Wednesday 25 March 2026 06:15:22 +0000 (0:00:00.904) 1:07:39.186 ******* 2026-03-25 06:15:53.578131 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-03-25 06:15:53.578137 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-03-25 06:15:53.578142 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-03-25 06:15:53.578147 | orchestrator | skipping: 
[testbed-node-4] 2026-03-25 06:15:53.578176 | orchestrator | 2026-03-25 06:15:53.578181 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-25 06:15:53.578186 | orchestrator | Wednesday 25 March 2026 06:15:23 +0000 (0:00:01.089) 1:07:40.276 ******* 2026-03-25 06:15:53.578192 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-03-25 06:15:53.578197 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-03-25 06:15:53.578203 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-03-25 06:15:53.578208 | orchestrator | skipping: [testbed-node-4] 2026-03-25 06:15:53.578213 | orchestrator | 2026-03-25 06:15:53.578219 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-25 06:15:53.578224 | orchestrator | Wednesday 25 March 2026 06:15:24 +0000 (0:00:01.093) 1:07:41.369 ******* 2026-03-25 06:15:53.578230 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-03-25 06:15:53.578235 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-03-25 06:15:53.578240 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-03-25 06:15:53.578246 | orchestrator | skipping: [testbed-node-4] 2026-03-25 06:15:53.578251 | orchestrator | 2026-03-25 06:15:53.578256 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-25 06:15:53.578262 | orchestrator | Wednesday 25 March 2026 06:15:25 +0000 (0:00:01.143) 1:07:42.513 ******* 2026-03-25 06:15:53.578268 | orchestrator | ok: [testbed-node-4] 2026-03-25 06:15:53.578273 | orchestrator | 2026-03-25 06:15:53.578278 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-25 06:15:53.578284 | orchestrator | Wednesday 25 March 2026 06:15:26 +0000 (0:00:00.835) 1:07:43.349 ******* 2026-03-25 06:15:53.578289 | orchestrator | ok: 
[testbed-node-4] => (item=0) 2026-03-25 06:15:53.578295 | orchestrator | 2026-03-25 06:15:53.578300 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-03-25 06:15:53.578309 | orchestrator | Wednesday 25 March 2026 06:15:27 +0000 (0:00:01.018) 1:07:44.367 ******* 2026-03-25 06:15:53.578319 | orchestrator | ok: [testbed-node-4] 2026-03-25 06:15:53.578328 | orchestrator | 2026-03-25 06:15:53.578338 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2026-03-25 06:15:53.578347 | orchestrator | Wednesday 25 March 2026 06:15:28 +0000 (0:00:01.597) 1:07:45.965 ******* 2026-03-25 06:15:53.578358 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-4 2026-03-25 06:15:53.578375 | orchestrator | 2026-03-25 06:15:53.578401 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-03-25 06:15:53.578412 | orchestrator | Wednesday 25 March 2026 06:15:30 +0000 (0:00:01.209) 1:07:47.174 ******* 2026-03-25 06:15:53.578421 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-25 06:15:53.578438 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-03-25 06:15:53.578447 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-25 06:15:53.578457 | orchestrator | 2026-03-25 06:15:53.578466 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-03-25 06:15:53.578477 | orchestrator | Wednesday 25 March 2026 06:15:33 +0000 (0:00:03.277) 1:07:50.452 ******* 2026-03-25 06:15:53.578486 | orchestrator | ok: [testbed-node-4] => (item=None) 2026-03-25 06:15:53.578496 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-03-25 06:15:53.578501 | orchestrator | ok: [testbed-node-4] 2026-03-25 06:15:53.578507 | orchestrator | 2026-03-25 06:15:53.578512 | orchestrator | TASK [ceph-rgw : Copy 
SSL certificate & key data to certificate path] ********** 2026-03-25 06:15:53.578517 | orchestrator | Wednesday 25 March 2026 06:15:35 +0000 (0:00:01.990) 1:07:52.443 ******* 2026-03-25 06:15:53.578523 | orchestrator | skipping: [testbed-node-4] 2026-03-25 06:15:53.578528 | orchestrator | 2026-03-25 06:15:53.578535 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2026-03-25 06:15:53.578544 | orchestrator | Wednesday 25 March 2026 06:15:36 +0000 (0:00:00.766) 1:07:53.209 ******* 2026-03-25 06:15:53.578553 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-4 2026-03-25 06:15:53.578563 | orchestrator | 2026-03-25 06:15:53.578573 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2026-03-25 06:15:53.578582 | orchestrator | Wednesday 25 March 2026 06:15:37 +0000 (0:00:01.149) 1:07:54.358 ******* 2026-03-25 06:15:53.578591 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-03-25 06:15:53.578602 | orchestrator | 2026-03-25 06:15:53.578611 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2026-03-25 06:15:53.578621 | orchestrator | Wednesday 25 March 2026 06:15:38 +0000 (0:00:01.623) 1:07:55.982 ******* 2026-03-25 06:15:53.578630 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-25 06:15:53.578639 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-03-25 06:15:53.578648 | orchestrator | 2026-03-25 06:15:53.578657 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-03-25 06:15:53.578665 | orchestrator | Wednesday 25 March 2026 06:15:44 +0000 (0:00:05.163) 1:08:01.145 ******* 
2026-03-25 06:15:53.578674 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-25 06:15:53.578682 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-25 06:15:53.578690 | orchestrator | 2026-03-25 06:15:53.578698 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-03-25 06:15:53.578708 | orchestrator | Wednesday 25 March 2026 06:15:47 +0000 (0:00:03.208) 1:08:04.354 ******* 2026-03-25 06:15:53.578717 | orchestrator | ok: [testbed-node-4] => (item=None) 2026-03-25 06:15:53.578726 | orchestrator | ok: [testbed-node-4] 2026-03-25 06:15:53.578735 | orchestrator | 2026-03-25 06:15:53.578745 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2026-03-25 06:15:53.578752 | orchestrator | Wednesday 25 March 2026 06:15:48 +0000 (0:00:01.621) 1:08:05.975 ******* 2026-03-25 06:15:53.578757 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-4 2026-03-25 06:15:53.578763 | orchestrator | 2026-03-25 06:15:53.578768 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2026-03-25 06:15:53.578782 | orchestrator | Wednesday 25 March 2026 06:15:50 +0000 (0:00:01.327) 1:08:07.303 ******* 2026-03-25 06:15:53.578792 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-25 06:15:53.578802 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-25 06:15:53.578808 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-25 06:15:53.578815 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 
'size': 3, 'type': 'replicated'}})  2026-03-25 06:15:53.578824 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-25 06:15:53.578834 | orchestrator | skipping: [testbed-node-4] 2026-03-25 06:15:53.578839 | orchestrator | 2026-03-25 06:15:53.578845 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2026-03-25 06:15:53.578850 | orchestrator | Wednesday 25 March 2026 06:15:51 +0000 (0:00:01.610) 1:08:08.913 ******* 2026-03-25 06:15:53.578855 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-25 06:15:53.578861 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-25 06:15:53.578866 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-25 06:15:53.578877 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-25 06:17:00.050981 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-25 06:17:00.051101 | orchestrator | skipping: [testbed-node-4] 2026-03-25 06:17:00.051111 | orchestrator | 2026-03-25 06:17:00.051118 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2026-03-25 06:17:00.051124 | orchestrator | Wednesday 25 March 2026 06:15:53 +0000 (0:00:01.661) 1:08:10.575 ******* 2026-03-25 06:17:00.051130 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-25 06:17:00.051136 
| orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-25 06:17:00.051142 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-25 06:17:00.051147 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-25 06:17:00.051153 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-25 06:17:00.051158 | orchestrator | 2026-03-25 06:17:00.051164 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2026-03-25 06:17:00.051169 | orchestrator | Wednesday 25 March 2026 06:16:25 +0000 (0:00:31.695) 1:08:42.271 ******* 2026-03-25 06:17:00.051174 | orchestrator | skipping: [testbed-node-4] 2026-03-25 06:17:00.051179 | orchestrator | 2026-03-25 06:17:00.051184 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2026-03-25 06:17:00.051189 | orchestrator | Wednesday 25 March 2026 06:16:26 +0000 (0:00:00.785) 1:08:43.057 ******* 2026-03-25 06:17:00.051193 | orchestrator | skipping: [testbed-node-4] 2026-03-25 06:17:00.051198 | orchestrator | 2026-03-25 06:17:00.051203 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2026-03-25 06:17:00.051223 | orchestrator | Wednesday 25 March 2026 06:16:26 +0000 (0:00:00.786) 1:08:43.843 ******* 2026-03-25 06:17:00.051228 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-4 2026-03-25 06:17:00.051233 | orchestrator | 2026-03-25 06:17:00.051238 | orchestrator | TASK [ceph-rgw : Include_task 
systemd.yml] ************************************* 2026-03-25 06:17:00.051243 | orchestrator | Wednesday 25 March 2026 06:16:27 +0000 (0:00:01.123) 1:08:44.967 ******* 2026-03-25 06:17:00.051247 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-4 2026-03-25 06:17:00.051252 | orchestrator | 2026-03-25 06:17:00.051257 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2026-03-25 06:17:00.051262 | orchestrator | Wednesday 25 March 2026 06:16:29 +0000 (0:00:01.132) 1:08:46.099 ******* 2026-03-25 06:17:00.051266 | orchestrator | ok: [testbed-node-4] 2026-03-25 06:17:00.051272 | orchestrator | 2026-03-25 06:17:00.051276 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2026-03-25 06:17:00.051281 | orchestrator | Wednesday 25 March 2026 06:16:31 +0000 (0:00:02.072) 1:08:48.172 ******* 2026-03-25 06:17:00.051286 | orchestrator | ok: [testbed-node-4] 2026-03-25 06:17:00.051291 | orchestrator | 2026-03-25 06:17:00.051295 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2026-03-25 06:17:00.051300 | orchestrator | Wednesday 25 March 2026 06:16:33 +0000 (0:00:01.912) 1:08:50.085 ******* 2026-03-25 06:17:00.051305 | orchestrator | ok: [testbed-node-4] 2026-03-25 06:17:00.051310 | orchestrator | 2026-03-25 06:17:00.051314 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2026-03-25 06:17:00.051319 | orchestrator | Wednesday 25 March 2026 06:16:35 +0000 (0:00:02.189) 1:08:52.274 ******* 2026-03-25 06:17:00.051324 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-03-25 06:17:00.051329 | orchestrator | 2026-03-25 06:17:00.051333 | orchestrator | PLAY [Upgrade ceph rgws cluster] *********************************************** 2026-03-25 06:17:00.051338 | 
orchestrator | 2026-03-25 06:17:00.051343 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-03-25 06:17:00.051348 | orchestrator | Wednesday 25 March 2026 06:16:38 +0000 (0:00:03.143) 1:08:55.418 ******* 2026-03-25 06:17:00.051352 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-5 2026-03-25 06:17:00.051357 | orchestrator | 2026-03-25 06:17:00.051362 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-03-25 06:17:00.051367 | orchestrator | Wednesday 25 March 2026 06:16:39 +0000 (0:00:01.128) 1:08:56.546 ******* 2026-03-25 06:17:00.051371 | orchestrator | ok: [testbed-node-5] 2026-03-25 06:17:00.051376 | orchestrator | 2026-03-25 06:17:00.051381 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-03-25 06:17:00.051386 | orchestrator | Wednesday 25 March 2026 06:16:40 +0000 (0:00:01.467) 1:08:58.014 ******* 2026-03-25 06:17:00.051390 | orchestrator | ok: [testbed-node-5] 2026-03-25 06:17:00.051395 | orchestrator | 2026-03-25 06:17:00.051400 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-03-25 06:17:00.051405 | orchestrator | Wednesday 25 March 2026 06:16:42 +0000 (0:00:01.150) 1:08:59.165 ******* 2026-03-25 06:17:00.051409 | orchestrator | ok: [testbed-node-5] 2026-03-25 06:17:00.051414 | orchestrator | 2026-03-25 06:17:00.051419 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-03-25 06:17:00.051424 | orchestrator | Wednesday 25 March 2026 06:16:43 +0000 (0:00:01.460) 1:09:00.625 ******* 2026-03-25 06:17:00.051429 | orchestrator | ok: [testbed-node-5] 2026-03-25 06:17:00.051434 | orchestrator | 2026-03-25 06:17:00.051450 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-03-25 06:17:00.051460 | orchestrator | Wednesday 
25 March 2026 06:16:44 +0000 (0:00:01.185) 1:09:01.811 ******* 2026-03-25 06:17:00.051465 | orchestrator | ok: [testbed-node-5] 2026-03-25 06:17:00.051469 | orchestrator | 2026-03-25 06:17:00.051478 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-03-25 06:17:00.051483 | orchestrator | Wednesday 25 March 2026 06:16:45 +0000 (0:00:01.142) 1:09:02.954 ******* 2026-03-25 06:17:00.051488 | orchestrator | ok: [testbed-node-5] 2026-03-25 06:17:00.051493 | orchestrator | 2026-03-25 06:17:00.051497 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-03-25 06:17:00.051502 | orchestrator | Wednesday 25 March 2026 06:16:47 +0000 (0:00:01.205) 1:09:04.160 ******* 2026-03-25 06:17:00.051507 | orchestrator | skipping: [testbed-node-5] 2026-03-25 06:17:00.051512 | orchestrator | 2026-03-25 06:17:00.051517 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-03-25 06:17:00.051521 | orchestrator | Wednesday 25 March 2026 06:16:48 +0000 (0:00:01.167) 1:09:05.327 ******* 2026-03-25 06:17:00.051526 | orchestrator | ok: [testbed-node-5] 2026-03-25 06:17:00.051532 | orchestrator | 2026-03-25 06:17:00.051538 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-03-25 06:17:00.051543 | orchestrator | Wednesday 25 March 2026 06:16:49 +0000 (0:00:01.138) 1:09:06.466 ******* 2026-03-25 06:17:00.051549 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-25 06:17:00.051555 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-25 06:17:00.051560 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-25 06:17:00.051566 | orchestrator | 2026-03-25 06:17:00.051571 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] 
******************************** 2026-03-25 06:17:00.051577 | orchestrator | Wednesday 25 March 2026 06:16:51 +0000 (0:00:02.134) 1:09:08.601 ******* 2026-03-25 06:17:00.051583 | orchestrator | ok: [testbed-node-5] 2026-03-25 06:17:00.051588 | orchestrator | 2026-03-25 06:17:00.051594 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-03-25 06:17:00.051600 | orchestrator | Wednesday 25 March 2026 06:16:52 +0000 (0:00:01.321) 1:09:09.922 ******* 2026-03-25 06:17:00.051605 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-25 06:17:00.051611 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-25 06:17:00.051616 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-25 06:17:00.051622 | orchestrator | 2026-03-25 06:17:00.051627 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-03-25 06:17:00.051633 | orchestrator | Wednesday 25 March 2026 06:16:55 +0000 (0:00:02.907) 1:09:12.830 ******* 2026-03-25 06:17:00.051639 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-03-25 06:17:00.051645 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-03-25 06:17:00.051650 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-03-25 06:17:00.051656 | orchestrator | skipping: [testbed-node-5] 2026-03-25 06:17:00.051662 | orchestrator | 2026-03-25 06:17:00.051667 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-03-25 06:17:00.051673 | orchestrator | Wednesday 25 March 2026 06:16:57 +0000 (0:00:01.435) 1:09:14.266 ******* 2026-03-25 06:17:00.051680 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not 
containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-03-25 06:17:00.051688 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-03-25 06:17:00.051694 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-03-25 06:17:00.051703 | orchestrator | skipping: [testbed-node-5] 2026-03-25 06:17:00.051709 | orchestrator | 2026-03-25 06:17:00.051715 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-03-25 06:17:00.051720 | orchestrator | Wednesday 25 March 2026 06:16:58 +0000 (0:00:01.631) 1:09:15.897 ******* 2026-03-25 06:17:00.051728 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-25 06:17:00.051739 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-25 06:17:19.437364 | orchestrator | skipping: [testbed-node-5] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-25 06:17:19.437464 | orchestrator | skipping: [testbed-node-5] 2026-03-25 06:17:19.437476 | orchestrator | 2026-03-25 06:17:19.437486 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-03-25 06:17:19.437495 | orchestrator | Wednesday 25 March 2026 06:17:00 +0000 (0:00:01.158) 1:09:17.056 ******* 2026-03-25 06:17:19.437505 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': 'f2f4f0f2e000', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-25 06:16:53.471579', 'end': '2026-03-25 06:16:53.517696', 'delta': '0:00:00.046117', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['f2f4f0f2e000'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-03-25 06:17:19.437517 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': '04618a84c691', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-25 06:16:54.050824', 'end': '2026-03-25 06:16:54.110519', 'delta': '0:00:00.059695', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 
'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['04618a84c691'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-03-25 06:17:19.437525 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': 'da72f46e99c2', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-25 06:16:54.629855', 'end': '2026-03-25 06:16:54.679719', 'delta': '0:00:00.049864', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['da72f46e99c2'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-03-25 06:17:19.437551 | orchestrator | 2026-03-25 06:17:19.437559 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-03-25 06:17:19.437567 | orchestrator | Wednesday 25 March 2026 06:17:01 +0000 (0:00:01.234) 1:09:18.290 ******* 2026-03-25 06:17:19.437575 | orchestrator | ok: [testbed-node-5] 2026-03-25 06:17:19.437584 | orchestrator | 2026-03-25 06:17:19.437592 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-03-25 06:17:19.437600 | orchestrator | Wednesday 25 March 2026 06:17:02 +0000 (0:00:01.274) 1:09:19.565 ******* 2026-03-25 06:17:19.437608 | orchestrator | skipping: [testbed-node-5] 2026-03-25 06:17:19.437616 | orchestrator | 2026-03-25 06:17:19.437623 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] 
********************************* 2026-03-25 06:17:19.437631 | orchestrator | Wednesday 25 March 2026 06:17:03 +0000 (0:00:01.292) 1:09:20.857 ******* 2026-03-25 06:17:19.437639 | orchestrator | ok: [testbed-node-5] 2026-03-25 06:17:19.437647 | orchestrator | 2026-03-25 06:17:19.437655 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-03-25 06:17:19.437662 | orchestrator | Wednesday 25 March 2026 06:17:04 +0000 (0:00:01.140) 1:09:21.998 ******* 2026-03-25 06:17:19.437670 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-03-25 06:17:19.437678 | orchestrator | 2026-03-25 06:17:19.437686 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-25 06:17:19.437694 | orchestrator | Wednesday 25 March 2026 06:17:06 +0000 (0:00:01.963) 1:09:23.962 ******* 2026-03-25 06:17:19.437702 | orchestrator | ok: [testbed-node-5] 2026-03-25 06:17:19.437710 | orchestrator | 2026-03-25 06:17:19.437717 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-03-25 06:17:19.437725 | orchestrator | Wednesday 25 March 2026 06:17:08 +0000 (0:00:01.180) 1:09:25.142 ******* 2026-03-25 06:17:19.437745 | orchestrator | skipping: [testbed-node-5] 2026-03-25 06:17:19.437754 | orchestrator | 2026-03-25 06:17:19.437766 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-03-25 06:17:19.437774 | orchestrator | Wednesday 25 March 2026 06:17:09 +0000 (0:00:01.119) 1:09:26.262 ******* 2026-03-25 06:17:19.437782 | orchestrator | skipping: [testbed-node-5] 2026-03-25 06:17:19.437790 | orchestrator | 2026-03-25 06:17:19.437798 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-25 06:17:19.437806 | orchestrator | Wednesday 25 March 2026 06:17:10 +0000 (0:00:01.703) 1:09:27.965 ******* 2026-03-25 06:17:19.437813 | orchestrator | 
skipping: [testbed-node-5] 2026-03-25 06:17:19.437821 | orchestrator | 2026-03-25 06:17:19.437829 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-03-25 06:17:19.437837 | orchestrator | Wednesday 25 March 2026 06:17:12 +0000 (0:00:01.141) 1:09:29.107 ******* 2026-03-25 06:17:19.437844 | orchestrator | skipping: [testbed-node-5] 2026-03-25 06:17:19.437852 | orchestrator | 2026-03-25 06:17:19.437860 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-03-25 06:17:19.437868 | orchestrator | Wednesday 25 March 2026 06:17:13 +0000 (0:00:01.142) 1:09:30.250 ******* 2026-03-25 06:17:19.437876 | orchestrator | ok: [testbed-node-5] 2026-03-25 06:17:19.437883 | orchestrator | 2026-03-25 06:17:19.437891 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-03-25 06:17:19.437900 | orchestrator | Wednesday 25 March 2026 06:17:14 +0000 (0:00:01.171) 1:09:31.422 ******* 2026-03-25 06:17:19.437909 | orchestrator | skipping: [testbed-node-5] 2026-03-25 06:17:19.437918 | orchestrator | 2026-03-25 06:17:19.437927 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-03-25 06:17:19.437936 | orchestrator | Wednesday 25 March 2026 06:17:15 +0000 (0:00:01.171) 1:09:32.594 ******* 2026-03-25 06:17:19.437945 | orchestrator | ok: [testbed-node-5] 2026-03-25 06:17:19.437955 | orchestrator | 2026-03-25 06:17:19.437970 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-03-25 06:17:19.437979 | orchestrator | Wednesday 25 March 2026 06:17:16 +0000 (0:00:01.250) 1:09:33.845 ******* 2026-03-25 06:17:19.437988 | orchestrator | skipping: [testbed-node-5] 2026-03-25 06:17:19.437997 | orchestrator | 2026-03-25 06:17:19.438088 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-03-25 06:17:19.438100 
| orchestrator | Wednesday 25 March 2026 06:17:17 +0000 (0:00:01.151) 1:09:34.997 ******* 2026-03-25 06:17:19.438109 | orchestrator | ok: [testbed-node-5] 2026-03-25 06:17:19.438119 | orchestrator | 2026-03-25 06:17:19.438128 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-03-25 06:17:19.438137 | orchestrator | Wednesday 25 March 2026 06:17:19 +0000 (0:00:01.207) 1:09:36.204 ******* 2026-03-25 06:17:19.438147 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-25 06:17:19.438158 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--8ec576d5--4336--523a--896e--5358117b2269-osd--block--8ec576d5--4336--523a--896e--5358117b2269', 'dm-uuid-LVM-AjTepPC9YBwKeu38Jf1R7NGMBGxHD64b1bYlOV1jbrUHbIYS3hAMWkKb5QrnOpnI'], 'uuids': ['e67f6cc7-d6f8-4138-9e65-f811c858cad0'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'fd5367dc', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['1bYlOV-1jbr-UHbI-YS3h-AMWk-Kb5Q-rnOpnI']}})  2026-03-25 06:17:19.438169 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_82545a3e-e213-461e-98f1-90cf18f03519', 'scsi-SQEMU_QEMU_HARDDISK_82545a3e-e213-461e-98f1-90cf18f03519'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU 
HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '82545a3e', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-03-25 06:17:19.438191 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-to62r3-CyRH-TR4y-N8rR-DKBC-8SUV-NrvEkE', 'scsi-0QEMU_QEMU_HARDDISK_04cbe055-706b-4644-9107-d77d79be5a29', 'scsi-SQEMU_QEMU_HARDDISK_04cbe055-706b-4644-9107-d77d79be5a29'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '04cbe055', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--f303e98e--56ea--50bc--9e1c--3ccda4672060-osd--block--f303e98e--56ea--50bc--9e1c--3ccda4672060']}})  2026-03-25 06:17:20.614716 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-25 06:17:20.614829 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-25 06:17:20.614871 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-25-01-43-03-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-03-25 06:17:20.614892 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 
'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-25 06:17:20.614909 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-UiFeyH-JNag-Huqx-rmYC-APg3-v2oc-gFP63X', 'dm-uuid-CRYPT-LUKS2-306c9f3fcb174ac6ad8e271da2bf30e2-UiFeyH-JNag-Huqx-rmYC-APg3-v2oc-gFP63X'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-03-25 06:17:20.614926 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-25 06:17:20.614943 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--f303e98e--56ea--50bc--9e1c--3ccda4672060-osd--block--f303e98e--56ea--50bc--9e1c--3ccda4672060', 'dm-uuid-LVM-UU9fet4LjPs1QLROYR3DS61lWfbcudTJUiFeyHJNagHuqxrmYCAPg3v2ocgFP63X'], 'uuids': ['306c9f3f-cb17-4ac6-ad8e-271da2bf30e2'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '04cbe055', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['UiFeyH-JNag-Huqx-rmYC-APg3-v2oc-gFP63X']}})  2026-03-25 06:17:20.615109 | orchestrator | skipping: 
[testbed-node-5] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-FUT1Bq-riIG-e3wV-m2Zc-DHH8-HB53-ximoP3', 'scsi-0QEMU_QEMU_HARDDISK_fd5367dc-993e-4d7d-b2a6-757e2a17e9b7', 'scsi-SQEMU_QEMU_HARDDISK_fd5367dc-993e-4d7d-b2a6-757e2a17e9b7'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'fd5367dc', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--8ec576d5--4336--523a--896e--5358117b2269-osd--block--8ec576d5--4336--523a--896e--5358117b2269']}})  2026-03-25 06:17:20.615135 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-25 06:17:20.615159 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0ceb4511-88da-4cb0-8dd1-61d4a7cc2ad2', 'scsi-SQEMU_QEMU_HARDDISK_0ceb4511-88da-4cb0-8dd1-61d4a7cc2ad2'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '0ceb4511', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0ceb4511-88da-4cb0-8dd1-61d4a7cc2ad2-part16', 'scsi-SQEMU_QEMU_HARDDISK_0ceb4511-88da-4cb0-8dd1-61d4a7cc2ad2-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': 
'227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0ceb4511-88da-4cb0-8dd1-61d4a7cc2ad2-part14', 'scsi-SQEMU_QEMU_HARDDISK_0ceb4511-88da-4cb0-8dd1-61d4a7cc2ad2-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0ceb4511-88da-4cb0-8dd1-61d4a7cc2ad2-part15', 'scsi-SQEMU_QEMU_HARDDISK_0ceb4511-88da-4cb0-8dd1-61d4a7cc2ad2-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0ceb4511-88da-4cb0-8dd1-61d4a7cc2ad2-part1', 'scsi-SQEMU_QEMU_HARDDISK_0ceb4511-88da-4cb0-8dd1-61d4a7cc2ad2-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-03-25 06:17:20.615177 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-25 06:17:20.615195 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-25 06:17:20.615230 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-1bYlOV-1jbr-UHbI-YS3h-AMWk-Kb5Q-rnOpnI', 'dm-uuid-CRYPT-LUKS2-e67f6cc7d6f841389e65f811c858cad0-1bYlOV-1jbr-UHbI-YS3h-AMWk-Kb5Q-rnOpnI'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-03-25 06:17:20.838795 | orchestrator | skipping: [testbed-node-5] 2026-03-25 06:17:20.838895 | orchestrator | 2026-03-25 06:17:20.838912 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-03-25 06:17:20.838925 | orchestrator | Wednesday 25 March 2026 06:17:20 +0000 (0:00:01.424) 1:09:37.629 ******* 2026-03-25 06:17:20.838939 | orchestrator | skipping: [testbed-node-5] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 06:17:20.838954 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--8ec576d5--4336--523a--896e--5358117b2269-osd--block--8ec576d5--4336--523a--896e--5358117b2269', 'dm-uuid-LVM-AjTepPC9YBwKeu38Jf1R7NGMBGxHD64b1bYlOV1jbrUHbIYS3hAMWkKb5QrnOpnI'], 'uuids': ['e67f6cc7-d6f8-4138-9e65-f811c858cad0'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'fd5367dc', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['1bYlOV-1jbr-UHbI-YS3h-AMWk-Kb5Q-rnOpnI']}}, 'ansible_loop_var': 'item'})  2026-03-25 06:17:20.838968 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_82545a3e-e213-461e-98f1-90cf18f03519', 'scsi-SQEMU_QEMU_HARDDISK_82545a3e-e213-461e-98f1-90cf18f03519'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 
'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '82545a3e', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 06:17:20.838981 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-to62r3-CyRH-TR4y-N8rR-DKBC-8SUV-NrvEkE', 'scsi-0QEMU_QEMU_HARDDISK_04cbe055-706b-4644-9107-d77d79be5a29', 'scsi-SQEMU_QEMU_HARDDISK_04cbe055-706b-4644-9107-d77d79be5a29'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '04cbe055', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--f303e98e--56ea--50bc--9e1c--3ccda4672060-osd--block--f303e98e--56ea--50bc--9e1c--3ccda4672060']}}, 'ansible_loop_var': 'item'})  2026-03-25 06:17:20.839112 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 06:17:20.839155 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 06:17:20.839167 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-25-01-43-03-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 06:17:20.839179 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 06:17:20.839191 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-UiFeyH-JNag-Huqx-rmYC-APg3-v2oc-gFP63X', 'dm-uuid-CRYPT-LUKS2-306c9f3fcb174ac6ad8e271da2bf30e2-UiFeyH-JNag-Huqx-rmYC-APg3-v2oc-gFP63X'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 06:17:20.839202 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 06:17:20.839228 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--f303e98e--56ea--50bc--9e1c--3ccda4672060-osd--block--f303e98e--56ea--50bc--9e1c--3ccda4672060', 'dm-uuid-LVM-UU9fet4LjPs1QLROYR3DS61lWfbcudTJUiFeyHJNagHuqxrmYCAPg3v2ocgFP63X'], 'uuids': ['306c9f3f-cb17-4ac6-ad8e-271da2bf30e2'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '04cbe055', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['UiFeyH-JNag-Huqx-rmYC-APg3-v2oc-gFP63X']}}, 'ansible_loop_var': 'item'})  2026-03-25 06:17:33.914710 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-FUT1Bq-riIG-e3wV-m2Zc-DHH8-HB53-ximoP3', 'scsi-0QEMU_QEMU_HARDDISK_fd5367dc-993e-4d7d-b2a6-757e2a17e9b7', 'scsi-SQEMU_QEMU_HARDDISK_fd5367dc-993e-4d7d-b2a6-757e2a17e9b7'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'fd5367dc', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--8ec576d5--4336--523a--896e--5358117b2269-osd--block--8ec576d5--4336--523a--896e--5358117b2269']}}, 'ansible_loop_var': 'item'})  2026-03-25 06:17:33.914830 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 06:17:33.914866 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0ceb4511-88da-4cb0-8dd1-61d4a7cc2ad2', 'scsi-SQEMU_QEMU_HARDDISK_0ceb4511-88da-4cb0-8dd1-61d4a7cc2ad2'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '0ceb4511', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0ceb4511-88da-4cb0-8dd1-61d4a7cc2ad2-part16', 'scsi-SQEMU_QEMU_HARDDISK_0ceb4511-88da-4cb0-8dd1-61d4a7cc2ad2-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_0ceb4511-88da-4cb0-8dd1-61d4a7cc2ad2-part14', 'scsi-SQEMU_QEMU_HARDDISK_0ceb4511-88da-4cb0-8dd1-61d4a7cc2ad2-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0ceb4511-88da-4cb0-8dd1-61d4a7cc2ad2-part15', 'scsi-SQEMU_QEMU_HARDDISK_0ceb4511-88da-4cb0-8dd1-61d4a7cc2ad2-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0ceb4511-88da-4cb0-8dd1-61d4a7cc2ad2-part1', 'scsi-SQEMU_QEMU_HARDDISK_0ceb4511-88da-4cb0-8dd1-61d4a7cc2ad2-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 06:17:33.915062 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 06:17:33.915081 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 06:17:33.915094 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-1bYlOV-1jbr-UHbI-YS3h-AMWk-Kb5Q-rnOpnI', 'dm-uuid-CRYPT-LUKS2-e67f6cc7d6f841389e65f811c858cad0-1bYlOV-1jbr-UHbI-YS3h-AMWk-Kb5Q-rnOpnI'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-25 06:17:33.915106 | orchestrator | skipping: [testbed-node-5] 2026-03-25 06:17:33.915120 | orchestrator | 2026-03-25 06:17:33.915132 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-03-25 06:17:33.915144 | orchestrator | Wednesday 25 March 2026 06:17:22 +0000 (0:00:01.417) 1:09:39.047 ******* 2026-03-25 06:17:33.915155 | orchestrator | ok: [testbed-node-5] 2026-03-25 06:17:33.915167 | orchestrator | 2026-03-25 06:17:33.915178 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-03-25 06:17:33.915188 | orchestrator | Wednesday 25 March 2026 06:17:23 +0000 (0:00:01.461) 1:09:40.508 ******* 2026-03-25 06:17:33.915199 | orchestrator | ok: [testbed-node-5] 2026-03-25 06:17:33.915210 | orchestrator | 2026-03-25 06:17:33.915221 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-25 06:17:33.915232 | orchestrator | Wednesday 25 March 2026 06:17:24 +0000 (0:00:01.125) 1:09:41.633 ******* 2026-03-25 06:17:33.915245 | orchestrator | ok: [testbed-node-5] 2026-03-25 06:17:33.915257 | orchestrator | 2026-03-25 06:17:33.915269 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-25 06:17:33.915281 | orchestrator | Wednesday 25 March 2026 06:17:26 +0000 (0:00:01.480) 1:09:43.114 ******* 2026-03-25 06:17:33.915293 | orchestrator | skipping: [testbed-node-5] 2026-03-25 06:17:33.915304 | orchestrator | 2026-03-25 06:17:33.915316 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-25 06:17:33.915328 | orchestrator | Wednesday 25 March 2026 06:17:27 +0000 (0:00:01.216) 1:09:44.331 ******* 2026-03-25 06:17:33.915349 | orchestrator | skipping: [testbed-node-5] 2026-03-25 
06:17:33.915362 | orchestrator | 2026-03-25 06:17:33.915374 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-25 06:17:33.915387 | orchestrator | Wednesday 25 March 2026 06:17:28 +0000 (0:00:01.373) 1:09:45.705 ******* 2026-03-25 06:17:33.915399 | orchestrator | skipping: [testbed-node-5] 2026-03-25 06:17:33.915411 | orchestrator | 2026-03-25 06:17:33.915423 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-03-25 06:17:33.915435 | orchestrator | Wednesday 25 March 2026 06:17:29 +0000 (0:00:01.236) 1:09:46.942 ******* 2026-03-25 06:17:33.915472 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-03-25 06:17:33.915485 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-03-25 06:17:33.915497 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-03-25 06:17:33.915509 | orchestrator | 2026-03-25 06:17:33.915521 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-03-25 06:17:33.915539 | orchestrator | Wednesday 25 March 2026 06:17:31 +0000 (0:00:01.676) 1:09:48.618 ******* 2026-03-25 06:17:33.915551 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-03-25 06:17:33.915563 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-03-25 06:17:33.915576 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-03-25 06:17:33.915588 | orchestrator | skipping: [testbed-node-5] 2026-03-25 06:17:33.915599 | orchestrator | 2026-03-25 06:17:33.915610 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-03-25 06:17:33.915621 | orchestrator | Wednesday 25 March 2026 06:17:32 +0000 (0:00:01.169) 1:09:49.788 ******* 2026-03-25 06:17:33.915632 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-5 2026-03-25 06:17:33.915643 | 
orchestrator | 2026-03-25 06:17:33.915663 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-25 06:18:16.504128 | orchestrator | Wednesday 25 March 2026 06:17:33 +0000 (0:00:01.132) 1:09:50.920 ******* 2026-03-25 06:18:16.504274 | orchestrator | skipping: [testbed-node-5] 2026-03-25 06:18:16.504300 | orchestrator | 2026-03-25 06:18:16.504313 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-03-25 06:18:16.504325 | orchestrator | Wednesday 25 March 2026 06:17:35 +0000 (0:00:01.163) 1:09:52.083 ******* 2026-03-25 06:18:16.504335 | orchestrator | skipping: [testbed-node-5] 2026-03-25 06:18:16.504346 | orchestrator | 2026-03-25 06:18:16.504357 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-03-25 06:18:16.504368 | orchestrator | Wednesday 25 March 2026 06:17:36 +0000 (0:00:01.182) 1:09:53.266 ******* 2026-03-25 06:18:16.504379 | orchestrator | skipping: [testbed-node-5] 2026-03-25 06:18:16.504390 | orchestrator | 2026-03-25 06:18:16.504400 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-25 06:18:16.504411 | orchestrator | Wednesday 25 March 2026 06:17:37 +0000 (0:00:01.169) 1:09:54.435 ******* 2026-03-25 06:18:16.504422 | orchestrator | ok: [testbed-node-5] 2026-03-25 06:18:16.504433 | orchestrator | 2026-03-25 06:18:16.504444 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-25 06:18:16.504455 | orchestrator | Wednesday 25 March 2026 06:17:38 +0000 (0:00:01.222) 1:09:55.658 ******* 2026-03-25 06:18:16.504466 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-03-25 06:18:16.504476 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-03-25 06:18:16.504487 | orchestrator | skipping: [testbed-node-5] 
=> (item=testbed-node-5)  2026-03-25 06:18:16.504497 | orchestrator | skipping: [testbed-node-5] 2026-03-25 06:18:16.504508 | orchestrator | 2026-03-25 06:18:16.504519 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-25 06:18:16.504529 | orchestrator | Wednesday 25 March 2026 06:17:40 +0000 (0:00:01.404) 1:09:57.063 ******* 2026-03-25 06:18:16.504565 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-03-25 06:18:16.504577 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-03-25 06:18:16.504587 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-03-25 06:18:16.504598 | orchestrator | skipping: [testbed-node-5] 2026-03-25 06:18:16.504608 | orchestrator | 2026-03-25 06:18:16.504619 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-25 06:18:16.504630 | orchestrator | Wednesday 25 March 2026 06:17:41 +0000 (0:00:01.796) 1:09:58.859 ******* 2026-03-25 06:18:16.504641 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-03-25 06:18:16.504652 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-03-25 06:18:16.504664 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-03-25 06:18:16.504676 | orchestrator | skipping: [testbed-node-5] 2026-03-25 06:18:16.504688 | orchestrator | 2026-03-25 06:18:16.504700 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-25 06:18:16.504713 | orchestrator | Wednesday 25 March 2026 06:17:43 +0000 (0:00:01.835) 1:10:00.694 ******* 2026-03-25 06:18:16.504725 | orchestrator | ok: [testbed-node-5] 2026-03-25 06:18:16.504737 | orchestrator | 2026-03-25 06:18:16.504749 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-25 06:18:16.504761 | orchestrator | Wednesday 25 March 2026 06:17:44 +0000 
(0:00:01.289) 1:10:01.984 ******* 2026-03-25 06:18:16.504774 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-03-25 06:18:16.504786 | orchestrator | 2026-03-25 06:18:16.504798 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-03-25 06:18:16.504810 | orchestrator | Wednesday 25 March 2026 06:17:46 +0000 (0:00:01.363) 1:10:03.348 ******* 2026-03-25 06:18:16.504823 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-25 06:18:16.504835 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-25 06:18:16.504848 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-25 06:18:16.504860 | orchestrator | ok: [testbed-node-5 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-03-25 06:18:16.504873 | orchestrator | ok: [testbed-node-5 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-25 06:18:16.504886 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-5) 2026-03-25 06:18:16.504898 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-25 06:18:16.504910 | orchestrator | 2026-03-25 06:18:16.504965 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-03-25 06:18:16.504980 | orchestrator | Wednesday 25 March 2026 06:17:48 +0000 (0:00:01.923) 1:10:05.272 ******* 2026-03-25 06:18:16.504993 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-25 06:18:16.505005 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-25 06:18:16.505032 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-25 06:18:16.505043 | orchestrator | ok: [testbed-node-5 -> testbed-node-3(192.168.16.13)] => 
(item=testbed-node-3) 2026-03-25 06:18:16.505053 | orchestrator | ok: [testbed-node-5 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-25 06:18:16.505064 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-5) 2026-03-25 06:18:16.505075 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-25 06:18:16.505085 | orchestrator | 2026-03-25 06:18:16.505096 | orchestrator | TASK [Stop ceph rgw when upgrading from stable-3.2] **************************** 2026-03-25 06:18:16.505106 | orchestrator | Wednesday 25 March 2026 06:17:50 +0000 (0:00:02.403) 1:10:07.675 ******* 2026-03-25 06:18:16.505117 | orchestrator | changed: [testbed-node-5] 2026-03-25 06:18:16.505127 | orchestrator | 2026-03-25 06:18:16.505156 | orchestrator | TASK [Stop ceph rgw (pt. 1)] *************************************************** 2026-03-25 06:18:16.505177 | orchestrator | Wednesday 25 March 2026 06:17:52 +0000 (0:00:01.878) 1:10:09.554 ******* 2026-03-25 06:18:16.505189 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-03-25 06:18:16.505201 | orchestrator | 2026-03-25 06:18:16.505212 | orchestrator | TASK [Stop ceph rgw (pt. 
2)] *************************************************** 2026-03-25 06:18:16.505222 | orchestrator | Wednesday 25 March 2026 06:17:54 +0000 (0:00:02.367) 1:10:11.921 ******* 2026-03-25 06:18:16.505233 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-03-25 06:18:16.505244 | orchestrator | 2026-03-25 06:18:16.505255 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-25 06:18:16.505265 | orchestrator | Wednesday 25 March 2026 06:17:56 +0000 (0:00:01.939) 1:10:13.861 ******* 2026-03-25 06:18:16.505276 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-5 2026-03-25 06:18:16.505286 | orchestrator | 2026-03-25 06:18:16.505297 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-25 06:18:16.505307 | orchestrator | Wednesday 25 March 2026 06:17:58 +0000 (0:00:01.158) 1:10:15.020 ******* 2026-03-25 06:18:16.505318 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-5 2026-03-25 06:18:16.505329 | orchestrator | 2026-03-25 06:18:16.505340 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-25 06:18:16.505350 | orchestrator | Wednesday 25 March 2026 06:17:59 +0000 (0:00:01.136) 1:10:16.156 ******* 2026-03-25 06:18:16.505361 | orchestrator | skipping: [testbed-node-5] 2026-03-25 06:18:16.505371 | orchestrator | 2026-03-25 06:18:16.505382 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-25 06:18:16.505393 | orchestrator | Wednesday 25 March 2026 06:18:00 +0000 (0:00:01.254) 1:10:17.411 ******* 2026-03-25 06:18:16.505403 | orchestrator | ok: [testbed-node-5] 2026-03-25 06:18:16.505414 | orchestrator | 2026-03-25 06:18:16.505425 | orchestrator | TASK 
[ceph-handler : Check for a mds container] ******************************** 2026-03-25 06:18:16.505435 | orchestrator | Wednesday 25 March 2026 06:18:01 +0000 (0:00:01.537) 1:10:18.948 ******* 2026-03-25 06:18:16.505446 | orchestrator | ok: [testbed-node-5] 2026-03-25 06:18:16.505456 | orchestrator | 2026-03-25 06:18:16.505467 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-25 06:18:16.505477 | orchestrator | Wednesday 25 March 2026 06:18:03 +0000 (0:00:01.535) 1:10:20.484 ******* 2026-03-25 06:18:16.505488 | orchestrator | ok: [testbed-node-5] 2026-03-25 06:18:16.505499 | orchestrator | 2026-03-25 06:18:16.505509 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-25 06:18:16.505520 | orchestrator | Wednesday 25 March 2026 06:18:05 +0000 (0:00:01.554) 1:10:22.038 ******* 2026-03-25 06:18:16.505530 | orchestrator | skipping: [testbed-node-5] 2026-03-25 06:18:16.505541 | orchestrator | 2026-03-25 06:18:16.505552 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-25 06:18:16.505562 | orchestrator | Wednesday 25 March 2026 06:18:06 +0000 (0:00:01.162) 1:10:23.200 ******* 2026-03-25 06:18:16.505572 | orchestrator | skipping: [testbed-node-5] 2026-03-25 06:18:16.505583 | orchestrator | 2026-03-25 06:18:16.505598 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-25 06:18:16.505616 | orchestrator | Wednesday 25 March 2026 06:18:07 +0000 (0:00:01.177) 1:10:24.378 ******* 2026-03-25 06:18:16.505633 | orchestrator | skipping: [testbed-node-5] 2026-03-25 06:18:16.505650 | orchestrator | 2026-03-25 06:18:16.505668 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-25 06:18:16.505685 | orchestrator | Wednesday 25 March 2026 06:18:08 +0000 (0:00:01.142) 1:10:25.521 ******* 2026-03-25 06:18:16.505704 | 
orchestrator | ok: [testbed-node-5]
2026-03-25 06:18:16.505722 | orchestrator |
2026-03-25 06:18:16.505733 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-03-25 06:18:16.505754 | orchestrator | Wednesday 25 March 2026 06:18:10 +0000 (0:00:01.568) 1:10:27.090 *******
2026-03-25 06:18:16.505772 | orchestrator | ok: [testbed-node-5]
2026-03-25 06:18:16.505789 | orchestrator |
2026-03-25 06:18:16.505806 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-03-25 06:18:16.505824 | orchestrator | Wednesday 25 March 2026 06:18:11 +0000 (0:00:01.551) 1:10:28.642 *******
2026-03-25 06:18:16.505842 | orchestrator | skipping: [testbed-node-5]
2026-03-25 06:18:16.505860 | orchestrator |
2026-03-25 06:18:16.505879 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-03-25 06:18:16.505898 | orchestrator | Wednesday 25 March 2026 06:18:12 +0000 (0:00:00.771) 1:10:29.413 *******
2026-03-25 06:18:16.505919 | orchestrator | skipping: [testbed-node-5]
2026-03-25 06:18:16.505969 | orchestrator |
2026-03-25 06:18:16.505990 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-03-25 06:18:16.506011 | orchestrator | Wednesday 25 March 2026 06:18:13 +0000 (0:00:00.824) 1:10:30.238 *******
2026-03-25 06:18:16.506120 | orchestrator | ok: [testbed-node-5]
2026-03-25 06:18:16.506132 | orchestrator |
2026-03-25 06:18:16.506152 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-03-25 06:18:16.506163 | orchestrator | Wednesday 25 March 2026 06:18:14 +0000 (0:00:00.796) 1:10:31.035 *******
2026-03-25 06:18:16.506174 | orchestrator | ok: [testbed-node-5]
2026-03-25 06:18:16.506185 | orchestrator |
2026-03-25 06:18:16.506196 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-03-25 06:18:16.506207 | orchestrator | Wednesday 25 March 2026 06:18:14 +0000 (0:00:00.800) 1:10:31.835 *******
2026-03-25 06:18:16.506217 | orchestrator | ok: [testbed-node-5]
2026-03-25 06:18:16.506228 | orchestrator |
2026-03-25 06:18:16.506239 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-03-25 06:18:16.506249 | orchestrator | Wednesday 25 March 2026 06:18:15 +0000 (0:00:00.815) 1:10:32.651 *******
2026-03-25 06:18:16.506260 | orchestrator | skipping: [testbed-node-5]
2026-03-25 06:18:16.506271 | orchestrator |
2026-03-25 06:18:16.506294 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-03-25 06:18:57.086328 | orchestrator | Wednesday 25 March 2026 06:18:16 +0000 (0:00:00.857) 1:10:33.509 *******
2026-03-25 06:18:57.086440 | orchestrator | skipping: [testbed-node-5]
2026-03-25 06:18:57.086455 | orchestrator |
2026-03-25 06:18:57.086467 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-03-25 06:18:57.086477 | orchestrator | Wednesday 25 March 2026 06:18:17 +0000 (0:00:00.821) 1:10:34.330 *******
2026-03-25 06:18:57.086487 | orchestrator | skipping: [testbed-node-5]
2026-03-25 06:18:57.086497 | orchestrator |
2026-03-25 06:18:57.086506 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-03-25 06:18:57.086516 | orchestrator | Wednesday 25 March 2026 06:18:18 +0000 (0:00:00.793) 1:10:35.124 *******
2026-03-25 06:18:57.086526 | orchestrator | ok: [testbed-node-5]
2026-03-25 06:18:57.086536 | orchestrator |
2026-03-25 06:18:57.086546 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-03-25 06:18:57.086556 | orchestrator | Wednesday 25 March 2026 06:18:18 +0000 (0:00:00.813) 1:10:35.937 *******
2026-03-25 06:18:57.086565 | orchestrator | ok: [testbed-node-5]
2026-03-25 06:18:57.086575 | orchestrator |
2026-03-25 06:18:57.086584 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-03-25 06:18:57.086594 | orchestrator | Wednesday 25 March 2026 06:18:19 +0000 (0:00:00.853) 1:10:36.790 *******
2026-03-25 06:18:57.086603 | orchestrator | skipping: [testbed-node-5]
2026-03-25 06:18:57.086613 | orchestrator |
2026-03-25 06:18:57.086623 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-03-25 06:18:57.086632 | orchestrator | Wednesday 25 March 2026 06:18:20 +0000 (0:00:00.828) 1:10:37.619 *******
2026-03-25 06:18:57.086641 | orchestrator | skipping: [testbed-node-5]
2026-03-25 06:18:57.086652 | orchestrator |
2026-03-25 06:18:57.086662 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-03-25 06:18:57.086691 | orchestrator | Wednesday 25 March 2026 06:18:21 +0000 (0:00:00.767) 1:10:38.387 *******
2026-03-25 06:18:57.086701 | orchestrator | skipping: [testbed-node-5]
2026-03-25 06:18:57.086711 | orchestrator |
2026-03-25 06:18:57.086720 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-03-25 06:18:57.086730 | orchestrator | Wednesday 25 March 2026 06:18:22 +0000 (0:00:00.769) 1:10:39.156 *******
2026-03-25 06:18:57.086739 | orchestrator | skipping: [testbed-node-5]
2026-03-25 06:18:57.086749 | orchestrator |
2026-03-25 06:18:57.086758 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-03-25 06:18:57.086768 | orchestrator | Wednesday 25 March 2026 06:18:22 +0000 (0:00:00.793) 1:10:39.950 *******
2026-03-25 06:18:57.086777 | orchestrator | skipping: [testbed-node-5]
2026-03-25 06:18:57.086787 | orchestrator |
2026-03-25 06:18:57.086797 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-03-25 06:18:57.086806 | orchestrator | Wednesday 25 March 2026 06:18:23 +0000 (0:00:00.754) 1:10:40.704 *******
2026-03-25 06:18:57.086816 | orchestrator | skipping: [testbed-node-5]
2026-03-25 06:18:57.086825 | orchestrator |
2026-03-25 06:18:57.086835 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-03-25 06:18:57.086845 | orchestrator | Wednesday 25 March 2026 06:18:24 +0000 (0:00:00.772) 1:10:41.477 *******
2026-03-25 06:18:57.086854 | orchestrator | skipping: [testbed-node-5]
2026-03-25 06:18:57.086864 | orchestrator |
2026-03-25 06:18:57.086909 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-03-25 06:18:57.086921 | orchestrator | Wednesday 25 March 2026 06:18:25 +0000 (0:00:00.848) 1:10:42.326 *******
2026-03-25 06:18:57.086933 | orchestrator | skipping: [testbed-node-5]
2026-03-25 06:18:57.086944 | orchestrator |
2026-03-25 06:18:57.086955 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-03-25 06:18:57.086966 | orchestrator | Wednesday 25 March 2026 06:18:26 +0000 (0:00:00.866) 1:10:43.193 *******
2026-03-25 06:18:57.086978 | orchestrator | skipping: [testbed-node-5]
2026-03-25 06:18:57.086989 | orchestrator |
2026-03-25 06:18:57.087000 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-03-25 06:18:57.087012 | orchestrator | Wednesday 25 March 2026 06:18:26 +0000 (0:00:00.789) 1:10:43.982 *******
2026-03-25 06:18:57.087023 | orchestrator | skipping: [testbed-node-5]
2026-03-25 06:18:57.087033 | orchestrator |
2026-03-25 06:18:57.087045 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-03-25 06:18:57.087057 | orchestrator | Wednesday 25 March 2026 06:18:27 +0000 (0:00:00.769) 1:10:44.752 *******
2026-03-25 06:18:57.087068 | orchestrator | skipping: [testbed-node-5]
2026-03-25 06:18:57.087079 | orchestrator |
2026-03-25 06:18:57.087090 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-03-25 06:18:57.087102 | orchestrator | Wednesday 25 March 2026 06:18:28 +0000 (0:00:00.763) 1:10:45.516 *******
2026-03-25 06:18:57.087113 | orchestrator | skipping: [testbed-node-5]
2026-03-25 06:18:57.087124 | orchestrator |
2026-03-25 06:18:57.087135 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-03-25 06:18:57.087146 | orchestrator | Wednesday 25 March 2026 06:18:29 +0000 (0:00:00.775) 1:10:46.291 *******
2026-03-25 06:18:57.087157 | orchestrator | ok: [testbed-node-5]
2026-03-25 06:18:57.087168 | orchestrator |
2026-03-25 06:18:57.087179 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-03-25 06:18:57.087205 | orchestrator | Wednesday 25 March 2026 06:18:30 +0000 (0:00:01.601) 1:10:47.893 *******
2026-03-25 06:18:57.087217 | orchestrator | ok: [testbed-node-5]
2026-03-25 06:18:57.087229 | orchestrator |
2026-03-25 06:18:57.087239 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-03-25 06:18:57.087249 | orchestrator | Wednesday 25 March 2026 06:18:32 +0000 (0:00:01.848) 1:10:49.742 *******
2026-03-25 06:18:57.087262 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-5
2026-03-25 06:18:57.087281 | orchestrator |
2026-03-25 06:18:57.087291 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-03-25 06:18:57.087300 | orchestrator | Wednesday 25 March 2026 06:18:33 +0000 (0:00:01.122) 1:10:50.864 *******
2026-03-25 06:18:57.087310 | orchestrator | skipping: [testbed-node-5]
2026-03-25 06:18:57.087320 | orchestrator |
2026-03-25 06:18:57.087330 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-03-25 06:18:57.087354 | orchestrator | Wednesday 25 March 2026 06:18:35 +0000 (0:00:01.179) 1:10:52.044 *******
2026-03-25 06:18:57.087364 | orchestrator | skipping: [testbed-node-5]
2026-03-25 06:18:57.087374 | orchestrator |
2026-03-25 06:18:57.087384 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-03-25 06:18:57.087394 | orchestrator | Wednesday 25 March 2026 06:18:36 +0000 (0:00:01.134) 1:10:53.179 *******
2026-03-25 06:18:57.087403 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-25 06:18:57.087413 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-25 06:18:57.087423 | orchestrator |
2026-03-25 06:18:57.087432 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-03-25 06:18:57.087442 | orchestrator | Wednesday 25 March 2026 06:18:37 +0000 (0:00:01.805) 1:10:54.984 *******
2026-03-25 06:18:57.087452 | orchestrator | ok: [testbed-node-5]
2026-03-25 06:18:57.087461 | orchestrator |
2026-03-25 06:18:57.087471 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-03-25 06:18:57.087481 | orchestrator | Wednesday 25 March 2026 06:18:39 +0000 (0:00:01.587) 1:10:56.572 *******
2026-03-25 06:18:57.087490 | orchestrator | skipping: [testbed-node-5]
2026-03-25 06:18:57.087500 | orchestrator |
2026-03-25 06:18:57.087510 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-03-25 06:18:57.087519 | orchestrator | Wednesday 25 March 2026 06:18:40 +0000 (0:00:01.169) 1:10:57.742 *******
2026-03-25 06:18:57.087529 | orchestrator | skipping: [testbed-node-5]
2026-03-25 06:18:57.087538 | orchestrator |
2026-03-25 06:18:57.087548 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-03-25 06:18:57.087557 | orchestrator | Wednesday 25 March 2026 06:18:41 +0000 (0:00:00.793) 1:10:58.536 *******
2026-03-25 06:18:57.087567 | orchestrator | skipping: [testbed-node-5]
2026-03-25 06:18:57.087577 | orchestrator |
2026-03-25 06:18:57.087586 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-03-25 06:18:57.087596 | orchestrator | Wednesday 25 March 2026 06:18:42 +0000 (0:00:00.774) 1:10:59.310 *******
2026-03-25 06:18:57.087606 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-5
2026-03-25 06:18:57.087616 | orchestrator |
2026-03-25 06:18:57.087625 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-03-25 06:18:57.087635 | orchestrator | Wednesday 25 March 2026 06:18:43 +0000 (0:00:01.130) 1:11:00.441 *******
2026-03-25 06:18:57.087645 | orchestrator | ok: [testbed-node-5]
2026-03-25 06:18:57.087654 | orchestrator |
2026-03-25 06:18:57.087664 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-03-25 06:18:57.087674 | orchestrator | Wednesday 25 March 2026 06:18:45 +0000 (0:00:01.700) 1:11:02.141 *******
2026-03-25 06:18:57.087683 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-03-25 06:18:57.087693 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)
2026-03-25 06:18:57.087703 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)
2026-03-25 06:18:57.087712 | orchestrator | skipping: [testbed-node-5]
2026-03-25 06:18:57.087722 | orchestrator |
2026-03-25 06:18:57.087732 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-03-25 06:18:57.087741 | orchestrator | Wednesday 25 March 2026 06:18:46 +0000 (0:00:01.160) 1:11:03.302 *******
2026-03-25 06:18:57.087751 | orchestrator | skipping: [testbed-node-5]
2026-03-25 06:18:57.087760 | orchestrator |
2026-03-25 06:18:57.087780 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-03-25 06:18:57.087790 | orchestrator | Wednesday 25 March 2026 06:18:47 +0000 (0:00:01.174) 1:11:04.476 *******
2026-03-25 06:18:57.087799 | orchestrator | skipping: [testbed-node-5]
2026-03-25 06:18:57.087809 | orchestrator |
2026-03-25 06:18:57.087819 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-03-25 06:18:57.087828 | orchestrator | Wednesday 25 March 2026 06:18:48 +0000 (0:00:01.227) 1:11:05.703 *******
2026-03-25 06:18:57.087838 | orchestrator | skipping: [testbed-node-5]
2026-03-25 06:18:57.087847 | orchestrator |
2026-03-25 06:18:57.087857 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-03-25 06:18:57.087890 | orchestrator | Wednesday 25 March 2026 06:18:49 +0000 (0:00:01.162) 1:11:06.866 *******
2026-03-25 06:18:57.087907 | orchestrator | skipping: [testbed-node-5]
2026-03-25 06:18:57.087921 | orchestrator |
2026-03-25 06:18:57.087937 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-03-25 06:18:57.087954 | orchestrator | Wednesday 25 March 2026 06:18:51 +0000 (0:00:01.187) 1:11:08.053 *******
2026-03-25 06:18:57.087971 | orchestrator | skipping: [testbed-node-5]
2026-03-25 06:18:57.087986 | orchestrator |
2026-03-25 06:18:57.087999 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-03-25 06:18:57.088009 | orchestrator | Wednesday 25 March 2026 06:18:51 +0000 (0:00:00.807) 1:11:08.861 *******
2026-03-25 06:18:57.088018 | orchestrator | ok: [testbed-node-5]
2026-03-25 06:18:57.088028 | orchestrator |
2026-03-25 06:18:57.088038 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-03-25 06:18:57.088053 | orchestrator | Wednesday 25 March 2026 06:18:53 +0000 (0:00:02.112) 1:11:10.974 *******
2026-03-25 06:18:57.088063 | orchestrator | ok: [testbed-node-5]
2026-03-25 06:18:57.088072 | orchestrator |
2026-03-25 06:18:57.088082 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-03-25 06:18:57.088091 | orchestrator | Wednesday 25 March 2026 06:18:54 +0000 (0:00:00.861) 1:11:11.835 *******
2026-03-25 06:18:57.088101 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-5
2026-03-25 06:18:57.088111 | orchestrator |
2026-03-25 06:18:57.088120 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-03-25 06:18:57.088130 | orchestrator | Wednesday 25 March 2026 06:18:55 +0000 (0:00:01.122) 1:11:12.957 *******
2026-03-25 06:18:57.088139 | orchestrator | skipping: [testbed-node-5]
2026-03-25 06:18:57.088149 | orchestrator |
2026-03-25 06:18:57.088159 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-03-25 06:18:57.088175 | orchestrator | Wednesday 25 March 2026 06:18:57 +0000 (0:00:01.136) 1:11:14.094 *******
2026-03-25 06:19:38.600802 | orchestrator | skipping: [testbed-node-5]
2026-03-25 06:19:38.600955 | orchestrator |
2026-03-25 06:19:38.600971 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-03-25 06:19:38.600984 | orchestrator | Wednesday 25 March 2026 06:18:58 +0000 (0:00:01.207) 1:11:15.301 *******
2026-03-25 06:19:38.600994 | orchestrator | skipping: [testbed-node-5]
2026-03-25 06:19:38.601003 | orchestrator |
2026-03-25 06:19:38.601013 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-03-25 06:19:38.601022 | orchestrator | Wednesday 25 March 2026 06:18:59 +0000 (0:00:01.184) 1:11:16.485 *******
2026-03-25 06:19:38.601032 | orchestrator | skipping: [testbed-node-5]
2026-03-25 06:19:38.601041 | orchestrator |
2026-03-25 06:19:38.601051 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-03-25 06:19:38.601060 | orchestrator | Wednesday 25 March 2026 06:19:00 +0000 (0:00:01.219) 1:11:17.705 *******
2026-03-25 06:19:38.601070 | orchestrator | skipping: [testbed-node-5]
2026-03-25 06:19:38.601079 | orchestrator |
2026-03-25 06:19:38.601088 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-03-25 06:19:38.601098 | orchestrator | Wednesday 25 March 2026 06:19:01 +0000 (0:00:01.153) 1:11:18.858 *******
2026-03-25 06:19:38.601107 | orchestrator | skipping: [testbed-node-5]
2026-03-25 06:19:38.601139 | orchestrator |
2026-03-25 06:19:38.601149 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-03-25 06:19:38.601159 | orchestrator | Wednesday 25 March 2026 06:19:02 +0000 (0:00:01.149) 1:11:20.008 *******
2026-03-25 06:19:38.601168 | orchestrator | skipping: [testbed-node-5]
2026-03-25 06:19:38.601178 | orchestrator |
2026-03-25 06:19:38.601187 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-03-25 06:19:38.601196 | orchestrator | Wednesday 25 March 2026 06:19:04 +0000 (0:00:01.138) 1:11:21.147 *******
2026-03-25 06:19:38.601205 | orchestrator | skipping: [testbed-node-5]
2026-03-25 06:19:38.601215 | orchestrator |
2026-03-25 06:19:38.601224 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-03-25 06:19:38.601233 | orchestrator | Wednesday 25 March 2026 06:19:05 +0000 (0:00:01.124) 1:11:22.271 *******
2026-03-25 06:19:38.601243 | orchestrator | ok: [testbed-node-5]
2026-03-25 06:19:38.601253 | orchestrator |
2026-03-25 06:19:38.601262 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-03-25 06:19:38.601272 | orchestrator | Wednesday 25 March 2026 06:19:06 +0000 (0:00:00.843) 1:11:23.115 *******
2026-03-25 06:19:38.601281 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-5
2026-03-25 06:19:38.601291 | orchestrator |
2026-03-25 06:19:38.601301 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-03-25 06:19:38.601310 | orchestrator | Wednesday 25 March 2026 06:19:07 +0000 (0:00:01.269) 1:11:24.385 *******
2026-03-25 06:19:38.601320 | orchestrator | ok: [testbed-node-5] => (item=/etc/ceph)
2026-03-25 06:19:38.601329 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/)
2026-03-25 06:19:38.601339 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mon)
2026-03-25 06:19:38.601348 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd)
2026-03-25 06:19:38.601357 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mds)
2026-03-25 06:19:38.601366 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/tmp)
2026-03-25 06:19:38.601375 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/crash)
2026-03-25 06:19:38.601385 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/radosgw)
2026-03-25 06:19:38.601394 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-25 06:19:38.601403 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-25 06:19:38.601413 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-25 06:19:38.601422 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-25 06:19:38.601431 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-25 06:19:38.601441 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-25 06:19:38.601450 | orchestrator | ok: [testbed-node-5] => (item=/var/run/ceph)
2026-03-25 06:19:38.601459 | orchestrator | ok: [testbed-node-5] => (item=/var/log/ceph)
2026-03-25 06:19:38.601468 | orchestrator |
2026-03-25 06:19:38.601478 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-03-25 06:19:38.601487 | orchestrator | Wednesday 25 March 2026 06:19:13 +0000 (0:00:06.214) 1:11:30.599 *******
2026-03-25 06:19:38.601496 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-5
2026-03-25 06:19:38.601506 | orchestrator |
2026-03-25 06:19:38.601515 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2026-03-25 06:19:38.601524 | orchestrator | Wednesday 25 March 2026 06:19:14 +0000 (0:00:01.167) 1:11:31.767 *******
2026-03-25 06:19:38.601547 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-03-25 06:19:38.601557 | orchestrator |
2026-03-25 06:19:38.601567 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2026-03-25 06:19:38.601576 | orchestrator | Wednesday 25 March 2026 06:19:16 +0000 (0:00:01.535) 1:11:33.302 *******
2026-03-25 06:19:38.601593 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-03-25 06:19:38.601602 | orchestrator |
2026-03-25 06:19:38.601612 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-03-25 06:19:38.601621 | orchestrator | Wednesday 25 March 2026 06:19:17 +0000 (0:00:01.657) 1:11:34.960 *******
2026-03-25 06:19:38.601630 | orchestrator | skipping: [testbed-node-5]
2026-03-25 06:19:38.601640 | orchestrator |
2026-03-25 06:19:38.601649 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-03-25 06:19:38.601674 | orchestrator | Wednesday 25 March 2026 06:19:18 +0000 (0:00:00.772) 1:11:35.732 *******
2026-03-25 06:19:38.601684 | orchestrator | skipping: [testbed-node-5]
2026-03-25 06:19:38.601694 | orchestrator |
2026-03-25 06:19:38.601703 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-03-25 06:19:38.601713 | orchestrator | Wednesday 25 March 2026 06:19:19 +0000 (0:00:00.791) 1:11:36.524 *******
2026-03-25 06:19:38.601722 | orchestrator | skipping: [testbed-node-5]
2026-03-25 06:19:38.601732 | orchestrator |
2026-03-25 06:19:38.601741 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-03-25 06:19:38.601751 | orchestrator | Wednesday 25 March 2026 06:19:20 +0000 (0:00:00.826) 1:11:37.351 *******
2026-03-25 06:19:38.601760 | orchestrator | skipping: [testbed-node-5]
2026-03-25 06:19:38.601770 | orchestrator |
2026-03-25 06:19:38.601779 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-03-25 06:19:38.601788 | orchestrator | Wednesday 25 March 2026 06:19:21 +0000 (0:00:00.791) 1:11:38.143 *******
2026-03-25 06:19:38.601798 | orchestrator | skipping: [testbed-node-5]
2026-03-25 06:19:38.601807 | orchestrator |
2026-03-25 06:19:38.601859 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-03-25 06:19:38.601890 | orchestrator | Wednesday 25 March 2026 06:19:21 +0000 (0:00:00.781) 1:11:38.924 *******
2026-03-25 06:19:38.601909 | orchestrator | skipping: [testbed-node-5]
2026-03-25 06:19:38.601923 | orchestrator |
2026-03-25 06:19:38.601933 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-03-25 06:19:38.601942 | orchestrator | Wednesday 25 March 2026 06:19:22 +0000 (0:00:00.780) 1:11:39.705 *******
2026-03-25 06:19:38.601952 | orchestrator | skipping: [testbed-node-5]
2026-03-25 06:19:38.601961 | orchestrator |
2026-03-25 06:19:38.601970 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-03-25 06:19:38.601980 | orchestrator | Wednesday 25 March 2026 06:19:23 +0000 (0:00:00.864) 1:11:40.570 *******
2026-03-25 06:19:38.601989 | orchestrator | skipping: [testbed-node-5]
2026-03-25 06:19:38.601998 | orchestrator |
2026-03-25 06:19:38.602008 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-03-25 06:19:38.602079 | orchestrator | Wednesday 25 March 2026 06:19:24 +0000 (0:00:00.782) 1:11:41.415 *******
2026-03-25 06:19:38.602092 | orchestrator | skipping: [testbed-node-5]
2026-03-25 06:19:38.602101 | orchestrator |
2026-03-25 06:19:38.602111 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-03-25 06:19:38.602120 | orchestrator | Wednesday 25 March 2026 06:19:25 +0000 (0:00:00.782) 1:11:42.197 *******
2026-03-25 06:19:38.602130 | orchestrator | skipping: [testbed-node-5]
2026-03-25 06:19:38.602139 | orchestrator |
2026-03-25 06:19:38.602180 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-03-25 06:19:38.602191 | orchestrator | Wednesday 25 March 2026 06:19:25 +0000 (0:00:00.784) 1:11:42.982 *******
2026-03-25 06:19:38.602200 | orchestrator | skipping: [testbed-node-5]
2026-03-25 06:19:38.602210 | orchestrator |
2026-03-25 06:19:38.602219 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-03-25 06:19:38.602229 | orchestrator | Wednesday 25 March 2026 06:19:26 +0000 (0:00:00.862) 1:11:43.845 *******
2026-03-25 06:19:38.602238 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)]
2026-03-25 06:19:38.602257 | orchestrator |
2026-03-25 06:19:38.602267 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-03-25 06:19:38.602276 | orchestrator | Wednesday 25 March 2026 06:19:30 +0000 (0:00:03.948) 1:11:47.794 *******
2026-03-25 06:19:38.602286 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-03-25 06:19:38.602295 | orchestrator |
2026-03-25 06:19:38.602305 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-03-25 06:19:38.602315 | orchestrator | Wednesday 25 March 2026 06:19:31 +0000 (0:00:00.823) 1:11:48.617 *******
2026-03-25 06:19:38.602327 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])
2026-03-25 06:19:38.602341 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])
2026-03-25 06:19:38.602353 | orchestrator |
2026-03-25 06:19:38.602371 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-03-25 06:19:38.602382 | orchestrator | Wednesday 25 March 2026 06:19:36 +0000 (0:00:04.587) 1:11:53.205 *******
2026-03-25 06:19:38.602393 | orchestrator | skipping: [testbed-node-5]
2026-03-25 06:19:38.602404 | orchestrator |
2026-03-25 06:19:38.602414 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-03-25 06:19:38.602425 | orchestrator | Wednesday 25 March 2026 06:19:36 +0000 (0:00:00.782) 1:11:53.988 *******
2026-03-25 06:19:38.602436 | orchestrator | skipping: [testbed-node-5]
2026-03-25 06:19:38.602446 | orchestrator |
2026-03-25 06:19:38.602457 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-03-25 06:19:38.602468 | orchestrator | Wednesday 25 March 2026 06:19:37 +0000 (0:00:00.782) 1:11:54.770 *******
2026-03-25 06:19:38.602479 | orchestrator | skipping: [testbed-node-5]
2026-03-25 06:19:38.602489 | orchestrator |
2026-03-25 06:19:38.602500 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-03-25 06:19:38.602522 | orchestrator | Wednesday 25 March 2026 06:19:38 +0000 (0:00:00.836) 1:11:55.606 *******
2026-03-25 06:20:45.446243 | orchestrator | skipping: [testbed-node-5]
2026-03-25 06:20:45.446362 | orchestrator |
2026-03-25 06:20:45.446378 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-03-25 06:20:45.446391 | orchestrator | Wednesday 25 March 2026 06:19:39 +0000 (0:00:00.815) 1:11:56.422 *******
2026-03-25 06:20:45.446402 | orchestrator | skipping: [testbed-node-5]
2026-03-25 06:20:45.446413 | orchestrator |
2026-03-25 06:20:45.446424 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-03-25 06:20:45.446435 | orchestrator | Wednesday 25 March 2026 06:19:40 +0000 (0:00:00.802) 1:11:57.224 *******
2026-03-25 06:20:45.446446 | orchestrator | ok: [testbed-node-5]
2026-03-25 06:20:45.446458 | orchestrator |
2026-03-25 06:20:45.446469 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-03-25 06:20:45.446480 | orchestrator | Wednesday 25 March 2026 06:19:41 +0000 (0:00:00.963) 1:11:58.188 *******
2026-03-25 06:20:45.446490 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-03-25 06:20:45.446501 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-03-25 06:20:45.446512 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-03-25 06:20:45.446523 | orchestrator | skipping: [testbed-node-5]
2026-03-25 06:20:45.446534 | orchestrator |
2026-03-25 06:20:45.446544 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-03-25 06:20:45.446578 | orchestrator | Wednesday 25 March 2026 06:19:42 +0000 (0:00:01.090) 1:11:59.279 *******
2026-03-25 06:20:45.446589 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-03-25 06:20:45.446600 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-03-25 06:20:45.446610 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-03-25 06:20:45.446621 | orchestrator | skipping: [testbed-node-5]
2026-03-25 06:20:45.446632 | orchestrator |
2026-03-25 06:20:45.446642 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-03-25 06:20:45.446653 | orchestrator | Wednesday 25 March 2026 06:19:43 +0000 (0:00:01.064) 1:12:00.343 *******
2026-03-25 06:20:45.446664 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-03-25 06:20:45.446674 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-03-25 06:20:45.446685 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-03-25 06:20:45.446695 | orchestrator | skipping: [testbed-node-5]
2026-03-25 06:20:45.446706 | orchestrator |
2026-03-25 06:20:45.446717 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-03-25 06:20:45.446727 | orchestrator | Wednesday 25 March 2026 06:19:44 +0000 (0:00:00.846) 1:12:01.458 *******
2026-03-25 06:20:45.446762 | orchestrator | ok: [testbed-node-5]
2026-03-25 06:20:45.446774 | orchestrator |
2026-03-25 06:20:45.446785 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-03-25 06:20:45.446796 | orchestrator | Wednesday 25 March 2026 06:19:45 +0000 (0:00:00.846) 1:12:02.304 *******
2026-03-25 06:20:45.446807 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-03-25 06:20:45.446818 | orchestrator |
2026-03-25 06:20:45.446829 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-03-25 06:20:45.446840 | orchestrator | Wednesday 25 March 2026 06:19:46 +0000 (0:00:01.049) 1:12:03.354 *******
2026-03-25 06:20:45.446850 | orchestrator | ok: [testbed-node-5]
2026-03-25 06:20:45.446861 | orchestrator |
2026-03-25 06:20:45.446872 | orchestrator | TASK [ceph-rgw : Include common.yml] *******************************************
2026-03-25 06:20:45.446882 | orchestrator | Wednesday 25 March 2026 06:19:47 +0000 (0:00:01.433) 1:12:04.788 *******
2026-03-25 06:20:45.446893 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-5
2026-03-25 06:20:45.446904 | orchestrator |
2026-03-25 06:20:45.446915 | orchestrator | TASK [ceph-rgw : Get keys from monitors] ***************************************
2026-03-25 06:20:45.446925 | orchestrator | Wednesday 25 March 2026 06:19:48 +0000 (0:00:01.198) 1:12:05.986 *******
2026-03-25 06:20:45.446936 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-25 06:20:45.446947 | orchestrator | skipping: [testbed-node-5] => (item=None)
2026-03-25 06:20:45.446958 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}]
2026-03-25 06:20:45.446968 | orchestrator |
2026-03-25 06:20:45.446979 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] ***********************************
2026-03-25 06:20:45.446990 | orchestrator | Wednesday 25 March 2026 06:19:52 +0000 (0:00:03.302) 1:12:09.289 *******
2026-03-25 06:20:45.447001 | orchestrator | ok: [testbed-node-5] => (item=None)
2026-03-25 06:20:45.447012 | orchestrator | skipping: [testbed-node-5] => (item=None)
2026-03-25 06:20:45.447022 | orchestrator | ok: [testbed-node-5]
2026-03-25 06:20:45.447033 | orchestrator |
2026-03-25 06:20:45.447044 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] **********
2026-03-25 06:20:45.447055 | orchestrator | Wednesday 25 March 2026 06:19:54 +0000 (0:00:01.990) 1:12:11.279 *******
2026-03-25 06:20:45.447080 | orchestrator | skipping: [testbed-node-5]
2026-03-25 06:20:45.447091 | orchestrator |
2026-03-25 06:20:45.447102 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ******************************
2026-03-25 06:20:45.447113 | orchestrator | Wednesday 25 March 2026 06:19:55 +0000 (0:00:00.815) 1:12:12.095 *******
2026-03-25 06:20:45.447124 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-5
2026-03-25 06:20:45.447144 | orchestrator |
2026-03-25 06:20:45.447155 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] *****************************
2026-03-25 06:20:45.447166 | orchestrator | Wednesday 25 March 2026 06:19:56 +0000 (0:00:01.337) 1:12:13.432 *******
2026-03-25 06:20:45.447178 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-03-25 06:20:45.447190 | orchestrator |
2026-03-25 06:20:45.447202 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ******************************************
2026-03-25 06:20:45.447213 | orchestrator | Wednesday 25 March 2026 06:19:58 +0000 (0:00:01.681) 1:12:15.114 *******
2026-03-25 06:20:45.447241 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-25 06:20:45.447253 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}]
2026-03-25 06:20:45.447264 | orchestrator |
2026-03-25 06:20:45.447274 | orchestrator | TASK [ceph-rgw : Get keys from monitors] ***************************************
2026-03-25 06:20:45.447285 | orchestrator | Wednesday 25 March 2026 06:20:03 +0000 (0:00:05.203) 1:12:20.318 *******
2026-03-25 06:20:45.447296 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-25 06:20:45.447307 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-25 06:20:45.447317 | orchestrator | 2026-03-25 06:20:45.447328 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-03-25 06:20:45.447338 | orchestrator | Wednesday 25 March 2026 06:20:06 +0000 (0:00:03.269) 1:12:23.587 ******* 2026-03-25 06:20:45.447349 | orchestrator | ok: [testbed-node-5] => (item=None) 2026-03-25 06:20:45.447360 | orchestrator | ok: [testbed-node-5] 2026-03-25 06:20:45.447371 | orchestrator | 2026-03-25 06:20:45.447381 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2026-03-25 06:20:45.447392 | orchestrator | Wednesday 25 March 2026 06:20:08 +0000 (0:00:01.626) 1:12:25.214 ******* 2026-03-25 06:20:45.447403 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-5 2026-03-25 06:20:45.447414 | orchestrator | 2026-03-25 06:20:45.447424 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2026-03-25 06:20:45.447435 | orchestrator | Wednesday 25 March 2026 06:20:09 +0000 (0:00:01.161) 1:12:26.375 ******* 2026-03-25 06:20:45.447446 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-25 06:20:45.447457 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-25 06:20:45.447468 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-25 06:20:45.447479 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 
'size': 3, 'type': 'replicated'}})  2026-03-25 06:20:45.447490 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-25 06:20:45.447501 | orchestrator | skipping: [testbed-node-5] 2026-03-25 06:20:45.447511 | orchestrator | 2026-03-25 06:20:45.447522 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2026-03-25 06:20:45.447533 | orchestrator | Wednesday 25 March 2026 06:20:10 +0000 (0:00:01.620) 1:12:27.996 ******* 2026-03-25 06:20:45.447544 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-25 06:20:45.447555 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-25 06:20:45.447565 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-25 06:20:45.447583 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-25 06:20:45.447594 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-25 06:20:45.447605 | orchestrator | skipping: [testbed-node-5] 2026-03-25 06:20:45.447616 | orchestrator | 2026-03-25 06:20:45.447626 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2026-03-25 06:20:45.447637 | orchestrator | Wednesday 25 March 2026 06:20:13 +0000 (0:00:02.047) 1:12:30.043 ******* 2026-03-25 06:20:45.447648 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-25 06:20:45.447659 
| orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-25 06:20:45.447675 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-25 06:20:45.447686 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-25 06:20:45.447697 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-25 06:20:45.447708 | orchestrator | 2026-03-25 06:20:45.447719 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2026-03-25 06:20:45.447730 | orchestrator | Wednesday 25 March 2026 06:20:44 +0000 (0:00:31.665) 1:13:01.709 ******* 2026-03-25 06:20:45.447772 | orchestrator | skipping: [testbed-node-5] 2026-03-25 06:20:45.447784 | orchestrator | 2026-03-25 06:20:45.447795 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2026-03-25 06:20:45.447812 | orchestrator | Wednesday 25 March 2026 06:20:45 +0000 (0:00:00.740) 1:13:02.450 ******* 2026-03-25 06:21:39.906263 | orchestrator | skipping: [testbed-node-5] 2026-03-25 06:21:39.906365 | orchestrator | 2026-03-25 06:21:39.906379 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2026-03-25 06:21:39.906389 | orchestrator | Wednesday 25 March 2026 06:20:46 +0000 (0:00:00.778) 1:13:03.229 ******* 2026-03-25 06:21:39.906398 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-5 2026-03-25 06:21:39.906407 | orchestrator | 2026-03-25 06:21:39.906415 | orchestrator | TASK [ceph-rgw : Include_task 
systemd.yml] ************************************* 2026-03-25 06:21:39.906423 | orchestrator | Wednesday 25 March 2026 06:20:47 +0000 (0:00:01.336) 1:13:04.566 ******* 2026-03-25 06:21:39.906431 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-5 2026-03-25 06:21:39.906439 | orchestrator | 2026-03-25 06:21:39.906447 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2026-03-25 06:21:39.906455 | orchestrator | Wednesday 25 March 2026 06:20:48 +0000 (0:00:01.147) 1:13:05.713 ******* 2026-03-25 06:21:39.906463 | orchestrator | ok: [testbed-node-5] 2026-03-25 06:21:39.906472 | orchestrator | 2026-03-25 06:21:39.906480 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2026-03-25 06:21:39.906488 | orchestrator | Wednesday 25 March 2026 06:20:50 +0000 (0:00:02.034) 1:13:07.748 ******* 2026-03-25 06:21:39.906496 | orchestrator | ok: [testbed-node-5] 2026-03-25 06:21:39.906503 | orchestrator | 2026-03-25 06:21:39.906511 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2026-03-25 06:21:39.906519 | orchestrator | Wednesday 25 March 2026 06:20:52 +0000 (0:00:01.921) 1:13:09.670 ******* 2026-03-25 06:21:39.906527 | orchestrator | ok: [testbed-node-5] 2026-03-25 06:21:39.906535 | orchestrator | 2026-03-25 06:21:39.906543 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2026-03-25 06:21:39.906551 | orchestrator | Wednesday 25 March 2026 06:20:54 +0000 (0:00:02.184) 1:13:11.855 ******* 2026-03-25 06:21:39.906579 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-03-25 06:21:39.906588 | orchestrator | 2026-03-25 06:21:39.906596 | orchestrator | PLAY [Upgrade ceph rbd mirror node] ******************************************** 2026-03-25 06:21:39.906605 | 
orchestrator | skipping: no hosts matched 2026-03-25 06:21:39.906613 | orchestrator | 2026-03-25 06:21:39.906621 | orchestrator | PLAY [Upgrade ceph nfs node] *************************************************** 2026-03-25 06:21:39.906629 | orchestrator | skipping: no hosts matched 2026-03-25 06:21:39.906637 | orchestrator | 2026-03-25 06:21:39.906645 | orchestrator | PLAY [Upgrade ceph client node] ************************************************ 2026-03-25 06:21:39.906653 | orchestrator | skipping: no hosts matched 2026-03-25 06:21:39.906660 | orchestrator | 2026-03-25 06:21:39.906668 | orchestrator | PLAY [Upgrade ceph-crash daemons] ********************************************** 2026-03-25 06:21:39.906676 | orchestrator | 2026-03-25 06:21:39.906736 | orchestrator | TASK [Stop the ceph-crash service] ********************************************* 2026-03-25 06:21:39.906744 | orchestrator | Wednesday 25 March 2026 06:20:58 +0000 (0:00:04.158) 1:13:16.013 ******* 2026-03-25 06:21:39.906752 | orchestrator | changed: [testbed-node-0] 2026-03-25 06:21:39.906760 | orchestrator | changed: [testbed-node-1] 2026-03-25 06:21:39.906768 | orchestrator | changed: [testbed-node-2] 2026-03-25 06:21:39.906775 | orchestrator | changed: [testbed-node-3] 2026-03-25 06:21:39.906783 | orchestrator | changed: [testbed-node-4] 2026-03-25 06:21:39.906791 | orchestrator | changed: [testbed-node-5] 2026-03-25 06:21:39.906799 | orchestrator | 2026-03-25 06:21:39.906807 | orchestrator | TASK [Mask and disable the ceph-crash service] ********************************* 2026-03-25 06:21:39.906816 | orchestrator | Wednesday 25 March 2026 06:21:01 +0000 (0:00:02.966) 1:13:18.979 ******* 2026-03-25 06:21:39.906825 | orchestrator | changed: [testbed-node-0] 2026-03-25 06:21:39.906834 | orchestrator | changed: [testbed-node-3] 2026-03-25 06:21:39.906842 | orchestrator | changed: [testbed-node-1] 2026-03-25 06:21:39.906851 | orchestrator | changed: [testbed-node-4] 2026-03-25 06:21:39.906860 | 
orchestrator | changed: [testbed-node-5] 2026-03-25 06:21:39.906869 | orchestrator | changed: [testbed-node-2] 2026-03-25 06:21:39.906877 | orchestrator | 2026-03-25 06:21:39.906886 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-03-25 06:21:39.906895 | orchestrator | Wednesday 25 March 2026 06:21:06 +0000 (0:00:04.273) 1:13:23.252 ******* 2026-03-25 06:21:39.906904 | orchestrator | ok: [testbed-node-0] 2026-03-25 06:21:39.906913 | orchestrator | ok: [testbed-node-1] 2026-03-25 06:21:39.906922 | orchestrator | ok: [testbed-node-2] 2026-03-25 06:21:39.906931 | orchestrator | ok: [testbed-node-3] 2026-03-25 06:21:39.906940 | orchestrator | ok: [testbed-node-4] 2026-03-25 06:21:39.906949 | orchestrator | ok: [testbed-node-5] 2026-03-25 06:21:39.906958 | orchestrator | 2026-03-25 06:21:39.906967 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-03-25 06:21:39.906976 | orchestrator | Wednesday 25 March 2026 06:21:08 +0000 (0:00:02.269) 1:13:25.522 ******* 2026-03-25 06:21:39.906985 | orchestrator | ok: [testbed-node-0] 2026-03-25 06:21:39.906994 | orchestrator | ok: [testbed-node-1] 2026-03-25 06:21:39.907003 | orchestrator | ok: [testbed-node-2] 2026-03-25 06:21:39.907011 | orchestrator | ok: [testbed-node-3] 2026-03-25 06:21:39.907033 | orchestrator | ok: [testbed-node-4] 2026-03-25 06:21:39.907042 | orchestrator | ok: [testbed-node-5] 2026-03-25 06:21:39.907051 | orchestrator | 2026-03-25 06:21:39.907061 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-25 06:21:39.907069 | orchestrator | Wednesday 25 March 2026 06:21:10 +0000 (0:00:02.383) 1:13:27.906 ******* 2026-03-25 06:21:39.907079 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-25 06:21:39.907090 | 
orchestrator | 2026-03-25 06:21:39.907099 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-25 06:21:39.907115 | orchestrator | Wednesday 25 March 2026 06:21:13 +0000 (0:00:02.302) 1:13:30.208 ******* 2026-03-25 06:21:39.907125 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-25 06:21:39.907134 | orchestrator | 2026-03-25 06:21:39.907157 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-25 06:21:39.907167 | orchestrator | Wednesday 25 March 2026 06:21:15 +0000 (0:00:02.326) 1:13:32.535 ******* 2026-03-25 06:21:39.907177 | orchestrator | skipping: [testbed-node-3] 2026-03-25 06:21:39.907186 | orchestrator | ok: [testbed-node-0] 2026-03-25 06:21:39.907195 | orchestrator | skipping: [testbed-node-4] 2026-03-25 06:21:39.907203 | orchestrator | ok: [testbed-node-1] 2026-03-25 06:21:39.907210 | orchestrator | skipping: [testbed-node-5] 2026-03-25 06:21:39.907218 | orchestrator | ok: [testbed-node-2] 2026-03-25 06:21:39.907226 | orchestrator | 2026-03-25 06:21:39.907234 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-25 06:21:39.907242 | orchestrator | Wednesday 25 March 2026 06:21:17 +0000 (0:00:02.057) 1:13:34.592 ******* 2026-03-25 06:21:39.907250 | orchestrator | skipping: [testbed-node-0] 2026-03-25 06:21:39.907257 | orchestrator | skipping: [testbed-node-1] 2026-03-25 06:21:39.907265 | orchestrator | skipping: [testbed-node-2] 2026-03-25 06:21:39.907273 | orchestrator | ok: [testbed-node-3] 2026-03-25 06:21:39.907281 | orchestrator | ok: [testbed-node-4] 2026-03-25 06:21:39.907289 | orchestrator | ok: [testbed-node-5] 2026-03-25 06:21:39.907296 | orchestrator | 2026-03-25 06:21:39.907304 | orchestrator | TASK [ceph-handler : Check for a mds container] 
******************************** 2026-03-25 06:21:39.907312 | orchestrator | Wednesday 25 March 2026 06:21:20 +0000 (0:00:02.751) 1:13:37.344 ******* 2026-03-25 06:21:39.907320 | orchestrator | skipping: [testbed-node-0] 2026-03-25 06:21:39.907328 | orchestrator | skipping: [testbed-node-1] 2026-03-25 06:21:39.907336 | orchestrator | skipping: [testbed-node-2] 2026-03-25 06:21:39.907343 | orchestrator | ok: [testbed-node-3] 2026-03-25 06:21:39.907351 | orchestrator | ok: [testbed-node-4] 2026-03-25 06:21:39.907359 | orchestrator | ok: [testbed-node-5] 2026-03-25 06:21:39.907367 | orchestrator | 2026-03-25 06:21:39.907375 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-25 06:21:39.907382 | orchestrator | Wednesday 25 March 2026 06:21:22 +0000 (0:00:02.205) 1:13:39.549 ******* 2026-03-25 06:21:39.907390 | orchestrator | skipping: [testbed-node-0] 2026-03-25 06:21:39.907398 | orchestrator | skipping: [testbed-node-1] 2026-03-25 06:21:39.907406 | orchestrator | skipping: [testbed-node-2] 2026-03-25 06:21:39.907414 | orchestrator | ok: [testbed-node-3] 2026-03-25 06:21:39.907422 | orchestrator | ok: [testbed-node-4] 2026-03-25 06:21:39.907430 | orchestrator | ok: [testbed-node-5] 2026-03-25 06:21:39.907438 | orchestrator | 2026-03-25 06:21:39.907446 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-25 06:21:39.907453 | orchestrator | Wednesday 25 March 2026 06:21:24 +0000 (0:00:02.250) 1:13:41.799 ******* 2026-03-25 06:21:39.907461 | orchestrator | skipping: [testbed-node-3] 2026-03-25 06:21:39.907469 | orchestrator | ok: [testbed-node-0] 2026-03-25 06:21:39.907477 | orchestrator | skipping: [testbed-node-4] 2026-03-25 06:21:39.907485 | orchestrator | ok: [testbed-node-1] 2026-03-25 06:21:39.907493 | orchestrator | skipping: [testbed-node-5] 2026-03-25 06:21:39.907501 | orchestrator | ok: [testbed-node-2] 2026-03-25 06:21:39.907518 | orchestrator | 
2026-03-25 06:21:39.907527 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-25 06:21:39.907534 | orchestrator | Wednesday 25 March 2026 06:21:26 +0000 (0:00:02.071) 1:13:43.870 ******* 2026-03-25 06:21:39.907542 | orchestrator | skipping: [testbed-node-0] 2026-03-25 06:21:39.907550 | orchestrator | skipping: [testbed-node-1] 2026-03-25 06:21:39.907558 | orchestrator | skipping: [testbed-node-2] 2026-03-25 06:21:39.907566 | orchestrator | skipping: [testbed-node-3] 2026-03-25 06:21:39.907573 | orchestrator | skipping: [testbed-node-4] 2026-03-25 06:21:39.907587 | orchestrator | skipping: [testbed-node-5] 2026-03-25 06:21:39.907595 | orchestrator | 2026-03-25 06:21:39.907603 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-25 06:21:39.907611 | orchestrator | Wednesday 25 March 2026 06:21:28 +0000 (0:00:01.781) 1:13:45.652 ******* 2026-03-25 06:21:39.907618 | orchestrator | skipping: [testbed-node-0] 2026-03-25 06:21:39.907626 | orchestrator | skipping: [testbed-node-1] 2026-03-25 06:21:39.907634 | orchestrator | skipping: [testbed-node-2] 2026-03-25 06:21:39.907642 | orchestrator | skipping: [testbed-node-3] 2026-03-25 06:21:39.907649 | orchestrator | skipping: [testbed-node-4] 2026-03-25 06:21:39.907657 | orchestrator | skipping: [testbed-node-5] 2026-03-25 06:21:39.907665 | orchestrator | 2026-03-25 06:21:39.907673 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-25 06:21:39.907696 | orchestrator | Wednesday 25 March 2026 06:21:31 +0000 (0:00:02.390) 1:13:48.042 ******* 2026-03-25 06:21:39.907704 | orchestrator | ok: [testbed-node-0] 2026-03-25 06:21:39.907712 | orchestrator | ok: [testbed-node-1] 2026-03-25 06:21:39.907720 | orchestrator | ok: [testbed-node-2] 2026-03-25 06:21:39.907727 | orchestrator | ok: [testbed-node-3] 2026-03-25 06:21:39.907735 | orchestrator | ok: [testbed-node-4] 
2026-03-25 06:21:39.907743 | orchestrator | ok: [testbed-node-5] 2026-03-25 06:21:39.907751 | orchestrator | 2026-03-25 06:21:39.907758 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-25 06:21:39.907766 | orchestrator | Wednesday 25 March 2026 06:21:33 +0000 (0:00:02.246) 1:13:50.289 ******* 2026-03-25 06:21:39.907774 | orchestrator | ok: [testbed-node-0] 2026-03-25 06:21:39.907782 | orchestrator | ok: [testbed-node-1] 2026-03-25 06:21:39.907790 | orchestrator | ok: [testbed-node-2] 2026-03-25 06:21:39.907797 | orchestrator | ok: [testbed-node-3] 2026-03-25 06:21:39.907805 | orchestrator | ok: [testbed-node-4] 2026-03-25 06:21:39.907817 | orchestrator | ok: [testbed-node-5] 2026-03-25 06:21:39.907825 | orchestrator | 2026-03-25 06:21:39.907833 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-25 06:21:39.907841 | orchestrator | Wednesday 25 March 2026 06:21:36 +0000 (0:00:02.777) 1:13:53.066 ******* 2026-03-25 06:21:39.907849 | orchestrator | skipping: [testbed-node-0] 2026-03-25 06:21:39.907857 | orchestrator | skipping: [testbed-node-1] 2026-03-25 06:21:39.907864 | orchestrator | skipping: [testbed-node-2] 2026-03-25 06:21:39.907872 | orchestrator | skipping: [testbed-node-3] 2026-03-25 06:21:39.907880 | orchestrator | skipping: [testbed-node-4] 2026-03-25 06:21:39.907888 | orchestrator | skipping: [testbed-node-5] 2026-03-25 06:21:39.907896 | orchestrator | 2026-03-25 06:21:39.907903 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-25 06:21:39.907911 | orchestrator | Wednesday 25 March 2026 06:21:37 +0000 (0:00:01.727) 1:13:54.793 ******* 2026-03-25 06:21:39.907919 | orchestrator | ok: [testbed-node-0] 2026-03-25 06:21:39.907927 | orchestrator | ok: [testbed-node-1] 2026-03-25 06:21:39.907935 | orchestrator | ok: [testbed-node-2] 2026-03-25 06:21:39.907942 | orchestrator | skipping: 
[testbed-node-3] 2026-03-25 06:21:39.907950 | orchestrator | skipping: [testbed-node-4] 2026-03-25 06:21:39.907958 | orchestrator | skipping: [testbed-node-5] 2026-03-25 06:21:39.907966 | orchestrator | 2026-03-25 06:21:39.907978 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-25 06:22:35.578831 | orchestrator | Wednesday 25 March 2026 06:21:39 +0000 (0:00:02.113) 1:13:56.907 ******* 2026-03-25 06:22:35.578950 | orchestrator | skipping: [testbed-node-0] 2026-03-25 06:22:35.578969 | orchestrator | skipping: [testbed-node-1] 2026-03-25 06:22:35.578980 | orchestrator | skipping: [testbed-node-2] 2026-03-25 06:22:35.578991 | orchestrator | ok: [testbed-node-3] 2026-03-25 06:22:35.579003 | orchestrator | ok: [testbed-node-4] 2026-03-25 06:22:35.579013 | orchestrator | ok: [testbed-node-5] 2026-03-25 06:22:35.579023 | orchestrator | 2026-03-25 06:22:35.579034 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-25 06:22:35.579045 | orchestrator | Wednesday 25 March 2026 06:21:41 +0000 (0:00:01.890) 1:13:58.797 ******* 2026-03-25 06:22:35.579079 | orchestrator | skipping: [testbed-node-0] 2026-03-25 06:22:35.579090 | orchestrator | skipping: [testbed-node-1] 2026-03-25 06:22:35.579102 | orchestrator | skipping: [testbed-node-2] 2026-03-25 06:22:35.579112 | orchestrator | ok: [testbed-node-3] 2026-03-25 06:22:35.579122 | orchestrator | ok: [testbed-node-4] 2026-03-25 06:22:35.579129 | orchestrator | ok: [testbed-node-5] 2026-03-25 06:22:35.579135 | orchestrator | 2026-03-25 06:22:35.579141 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-25 06:22:35.579148 | orchestrator | Wednesday 25 March 2026 06:21:43 +0000 (0:00:02.160) 1:14:00.957 ******* 2026-03-25 06:22:35.579154 | orchestrator | skipping: [testbed-node-0] 2026-03-25 06:22:35.579160 | orchestrator | skipping: [testbed-node-1] 2026-03-25 
06:22:35.579166 | orchestrator | skipping: [testbed-node-2] 2026-03-25 06:22:35.579173 | orchestrator | ok: [testbed-node-3] 2026-03-25 06:22:35.579179 | orchestrator | ok: [testbed-node-4] 2026-03-25 06:22:35.579185 | orchestrator | ok: [testbed-node-5] 2026-03-25 06:22:35.579191 | orchestrator | 2026-03-25 06:22:35.579197 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-25 06:22:35.579204 | orchestrator | Wednesday 25 March 2026 06:21:45 +0000 (0:00:02.027) 1:14:02.985 ******* 2026-03-25 06:22:35.579210 | orchestrator | skipping: [testbed-node-0] 2026-03-25 06:22:35.579216 | orchestrator | skipping: [testbed-node-1] 2026-03-25 06:22:35.579222 | orchestrator | skipping: [testbed-node-2] 2026-03-25 06:22:35.579228 | orchestrator | skipping: [testbed-node-3] 2026-03-25 06:22:35.579234 | orchestrator | skipping: [testbed-node-4] 2026-03-25 06:22:35.579240 | orchestrator | skipping: [testbed-node-5] 2026-03-25 06:22:35.579246 | orchestrator | 2026-03-25 06:22:35.579252 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-25 06:22:35.579259 | orchestrator | Wednesday 25 March 2026 06:21:47 +0000 (0:00:01.857) 1:14:04.842 ******* 2026-03-25 06:22:35.579265 | orchestrator | skipping: [testbed-node-0] 2026-03-25 06:22:35.579271 | orchestrator | skipping: [testbed-node-1] 2026-03-25 06:22:35.579277 | orchestrator | skipping: [testbed-node-2] 2026-03-25 06:22:35.579283 | orchestrator | skipping: [testbed-node-3] 2026-03-25 06:22:35.579289 | orchestrator | skipping: [testbed-node-4] 2026-03-25 06:22:35.579295 | orchestrator | skipping: [testbed-node-5] 2026-03-25 06:22:35.579301 | orchestrator | 2026-03-25 06:22:35.579307 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-25 06:22:35.579313 | orchestrator | Wednesday 25 March 2026 06:21:49 +0000 (0:00:01.890) 1:14:06.733 ******* 2026-03-25 06:22:35.579320 | 
orchestrator | ok: [testbed-node-0] 2026-03-25 06:22:35.579326 | orchestrator | ok: [testbed-node-1] 2026-03-25 06:22:35.579332 | orchestrator | ok: [testbed-node-2] 2026-03-25 06:22:35.579338 | orchestrator | skipping: [testbed-node-3] 2026-03-25 06:22:35.579344 | orchestrator | skipping: [testbed-node-4] 2026-03-25 06:22:35.579350 | orchestrator | skipping: [testbed-node-5] 2026-03-25 06:22:35.579356 | orchestrator | 2026-03-25 06:22:35.579363 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-25 06:22:35.579371 | orchestrator | Wednesday 25 March 2026 06:21:51 +0000 (0:00:01.790) 1:14:08.524 ******* 2026-03-25 06:22:35.579378 | orchestrator | ok: [testbed-node-0] 2026-03-25 06:22:35.579385 | orchestrator | ok: [testbed-node-1] 2026-03-25 06:22:35.579391 | orchestrator | ok: [testbed-node-2] 2026-03-25 06:22:35.579399 | orchestrator | ok: [testbed-node-3] 2026-03-25 06:22:35.579406 | orchestrator | ok: [testbed-node-4] 2026-03-25 06:22:35.579412 | orchestrator | ok: [testbed-node-5] 2026-03-25 06:22:35.579419 | orchestrator | 2026-03-25 06:22:35.579426 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-25 06:22:35.579433 | orchestrator | Wednesday 25 March 2026 06:21:53 +0000 (0:00:02.131) 1:14:10.655 ******* 2026-03-25 06:22:35.579440 | orchestrator | ok: [testbed-node-0] 2026-03-25 06:22:35.579446 | orchestrator | ok: [testbed-node-1] 2026-03-25 06:22:35.579453 | orchestrator | ok: [testbed-node-2] 2026-03-25 06:22:35.579460 | orchestrator | ok: [testbed-node-3] 2026-03-25 06:22:35.579473 | orchestrator | ok: [testbed-node-4] 2026-03-25 06:22:35.579480 | orchestrator | ok: [testbed-node-5] 2026-03-25 06:22:35.579486 | orchestrator | 2026-03-25 06:22:35.579494 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ******************************** 2026-03-25 06:22:35.579500 | orchestrator | Wednesday 25 March 2026 06:21:55 +0000 (0:00:02.236) 
1:14:12.892 ******* 2026-03-25 06:22:35.579507 | orchestrator | ok: [testbed-node-0] 2026-03-25 06:22:35.579514 | orchestrator | 2026-03-25 06:22:35.579521 | orchestrator | TASK [ceph-crash : Get keys from monitors] ************************************* 2026-03-25 06:22:35.579540 | orchestrator | Wednesday 25 March 2026 06:21:59 +0000 (0:00:03.175) 1:14:16.067 ******* 2026-03-25 06:22:35.579547 | orchestrator | ok: [testbed-node-0] 2026-03-25 06:22:35.579554 | orchestrator | 2026-03-25 06:22:35.579561 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] ********************************* 2026-03-25 06:22:35.579568 | orchestrator | Wednesday 25 March 2026 06:22:02 +0000 (0:00:03.004) 1:14:19.072 ******* 2026-03-25 06:22:35.579575 | orchestrator | ok: [testbed-node-0] 2026-03-25 06:22:35.579582 | orchestrator | ok: [testbed-node-1] 2026-03-25 06:22:35.579589 | orchestrator | ok: [testbed-node-2] 2026-03-25 06:22:35.579596 | orchestrator | ok: [testbed-node-3] 2026-03-25 06:22:35.579602 | orchestrator | ok: [testbed-node-4] 2026-03-25 06:22:35.579609 | orchestrator | ok: [testbed-node-5] 2026-03-25 06:22:35.579617 | orchestrator | 2026-03-25 06:22:35.579623 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] ************************** 2026-03-25 06:22:35.579664 | orchestrator | Wednesday 25 March 2026 06:22:04 +0000 (0:00:02.592) 1:14:21.665 ******* 2026-03-25 06:22:35.579676 | orchestrator | ok: [testbed-node-0] 2026-03-25 06:22:35.579686 | orchestrator | ok: [testbed-node-1] 2026-03-25 06:22:35.579697 | orchestrator | ok: [testbed-node-2] 2026-03-25 06:22:35.579704 | orchestrator | ok: [testbed-node-3] 2026-03-25 06:22:35.579711 | orchestrator | ok: [testbed-node-4] 2026-03-25 06:22:35.579718 | orchestrator | ok: [testbed-node-5] 2026-03-25 06:22:35.579726 | orchestrator | 2026-03-25 06:22:35.579732 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] ********************************** 2026-03-25 06:22:35.579754 | orchestrator 
| Wednesday 25 March 2026 06:22:07 +0000 (0:00:02.499) 1:14:24.164 ******* 2026-03-25 06:22:35.579762 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-25 06:22:35.579770 | orchestrator | 2026-03-25 06:22:35.579776 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ******** 2026-03-25 06:22:35.579782 | orchestrator | Wednesday 25 March 2026 06:22:09 +0000 (0:00:02.568) 1:14:26.733 ******* 2026-03-25 06:22:35.579789 | orchestrator | ok: [testbed-node-0] 2026-03-25 06:22:35.579795 | orchestrator | ok: [testbed-node-1] 2026-03-25 06:22:35.579800 | orchestrator | ok: [testbed-node-2] 2026-03-25 06:22:35.579807 | orchestrator | ok: [testbed-node-3] 2026-03-25 06:22:35.579813 | orchestrator | ok: [testbed-node-4] 2026-03-25 06:22:35.579819 | orchestrator | ok: [testbed-node-5] 2026-03-25 06:22:35.579825 | orchestrator | 2026-03-25 06:22:35.579831 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] ******************************* 2026-03-25 06:22:35.579837 | orchestrator | Wednesday 25 March 2026 06:22:12 +0000 (0:00:02.526) 1:14:29.259 ******* 2026-03-25 06:22:35.579843 | orchestrator | changed: [testbed-node-3] 2026-03-25 06:22:35.579849 | orchestrator | changed: [testbed-node-0] 2026-03-25 06:22:35.579855 | orchestrator | changed: [testbed-node-1] 2026-03-25 06:22:35.579861 | orchestrator | changed: [testbed-node-2] 2026-03-25 06:22:35.579867 | orchestrator | changed: [testbed-node-4] 2026-03-25 06:22:35.579873 | orchestrator | changed: [testbed-node-5] 2026-03-25 06:22:35.579879 | orchestrator | 2026-03-25 06:22:35.579885 | orchestrator | PLAY [Complete upgrade] ******************************************************** 2026-03-25 06:22:35.579891 | orchestrator | 2026-03-25 06:22:35.579898 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 
2026-03-25 06:22:35.579904 | orchestrator | Wednesday 25 March 2026 06:22:16 +0000 (0:00:04.484) 1:14:33.744 *******
2026-03-25 06:22:35.579910 | orchestrator | ok: [testbed-node-0]
2026-03-25 06:22:35.579921 | orchestrator | ok: [testbed-node-1]
2026-03-25 06:22:35.579927 | orchestrator | ok: [testbed-node-2]
2026-03-25 06:22:35.579933 | orchestrator |
2026-03-25 06:22:35.579940 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-03-25 06:22:35.579946 | orchestrator | Wednesday 25 March 2026 06:22:18 +0000 (0:00:02.076) 1:14:35.821 *******
2026-03-25 06:22:35.579952 | orchestrator | ok: [testbed-node-0]
2026-03-25 06:22:35.579958 | orchestrator | ok: [testbed-node-1]
2026-03-25 06:22:35.579964 | orchestrator | ok: [testbed-node-2]
2026-03-25 06:22:35.579970 | orchestrator |
2026-03-25 06:22:35.579976 | orchestrator | TASK [Container | disallow pre-reef OSDs and enable all new reef-only functionality] ***
2026-03-25 06:22:35.579983 | orchestrator | Wednesday 25 March 2026 06:22:20 +0000 (0:00:01.432) 1:14:37.254 *******
2026-03-25 06:22:35.579989 | orchestrator | ok: [testbed-node-0]
2026-03-25 06:22:35.579995 | orchestrator |
2026-03-25 06:22:35.580001 | orchestrator | TASK [Non container | disallow pre-reef OSDs and enable all new reef-only functionality] ***
2026-03-25 06:22:35.580007 | orchestrator | Wednesday 25 March 2026 06:22:22 +0000 (0:00:02.435) 1:14:39.689 *******
2026-03-25 06:22:35.580014 | orchestrator | skipping: [testbed-node-0]
2026-03-25 06:22:35.580020 | orchestrator |
2026-03-25 06:22:35.580026 | orchestrator | PLAY [Upgrade node-exporter] ***************************************************
2026-03-25 06:22:35.580032 | orchestrator |
2026-03-25 06:22:35.580038 | orchestrator | TASK [Stop node-exporter] ******************************************************
2026-03-25 06:22:35.580044 | orchestrator | Wednesday 25 March 2026 06:22:25 +0000 (0:00:02.419) 1:14:42.109 *******
2026-03-25 06:22:35.580050 | orchestrator | skipping: [testbed-node-0]
2026-03-25 06:22:35.580056 | orchestrator | skipping: [testbed-node-1]
2026-03-25 06:22:35.580062 | orchestrator | skipping: [testbed-node-2]
2026-03-25 06:22:35.580068 | orchestrator | skipping: [testbed-node-3]
2026-03-25 06:22:35.580074 | orchestrator | skipping: [testbed-node-4]
2026-03-25 06:22:35.580080 | orchestrator | skipping: [testbed-node-5]
2026-03-25 06:22:35.580086 | orchestrator | skipping: [testbed-manager]
2026-03-25 06:22:35.580092 | orchestrator |
2026-03-25 06:22:35.580099 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-03-25 06:22:35.580105 | orchestrator | Wednesday 25 March 2026 06:22:27 +0000 (0:00:02.145) 1:14:44.255 *******
2026-03-25 06:22:35.580111 | orchestrator | skipping: [testbed-node-0]
2026-03-25 06:22:35.580117 | orchestrator | skipping: [testbed-node-1]
2026-03-25 06:22:35.580123 | orchestrator | skipping: [testbed-node-2]
2026-03-25 06:22:35.580129 | orchestrator | skipping: [testbed-node-3]
2026-03-25 06:22:35.580135 | orchestrator | skipping: [testbed-node-4]
2026-03-25 06:22:35.580141 | orchestrator | skipping: [testbed-node-5]
2026-03-25 06:22:35.580147 | orchestrator | skipping: [testbed-manager]
2026-03-25 06:22:35.580153 | orchestrator |
2026-03-25 06:22:35.580159 | orchestrator | TASK [ceph-container-engine : Include pre_requisites/prerequisites.yml] ********
2026-03-25 06:22:35.580165 | orchestrator | Wednesday 25 March 2026 06:22:29 +0000 (0:00:02.529) 1:14:46.785 *******
2026-03-25 06:22:35.580171 | orchestrator | skipping: [testbed-node-0]
2026-03-25 06:22:35.580177 | orchestrator | skipping: [testbed-node-1]
2026-03-25 06:22:35.580183 | orchestrator | skipping: [testbed-node-2]
2026-03-25 06:22:35.580193 | orchestrator | skipping: [testbed-node-3]
2026-03-25 06:22:35.580200 | orchestrator | skipping: [testbed-node-4]
2026-03-25 06:22:35.580205 | orchestrator | skipping: [testbed-node-5]
2026-03-25 06:22:35.580211 | orchestrator | skipping: [testbed-manager]
2026-03-25 06:22:35.580217 | orchestrator |
2026-03-25 06:22:35.580224 | orchestrator | TASK [ceph-container-common : Container registry authentication] ***************
2026-03-25 06:22:35.580230 | orchestrator | Wednesday 25 March 2026 06:22:32 +0000 (0:00:02.763) 1:14:49.548 *******
2026-03-25 06:22:35.580236 | orchestrator | skipping: [testbed-node-0]
2026-03-25 06:22:35.580242 | orchestrator | skipping: [testbed-node-1]
2026-03-25 06:22:35.580248 | orchestrator | skipping: [testbed-node-2]
2026-03-25 06:22:35.580254 | orchestrator | skipping: [testbed-node-3]
2026-03-25 06:22:35.580260 | orchestrator | skipping: [testbed-node-4]
2026-03-25 06:22:35.580270 | orchestrator | skipping: [testbed-node-5]
2026-03-25 06:22:35.580276 | orchestrator | skipping: [testbed-manager]
2026-03-25 06:22:35.580282 | orchestrator |
2026-03-25 06:22:35.580288 | orchestrator | TASK [ceph-node-exporter : Include setup_container.yml] ************************
2026-03-25 06:22:35.580294 | orchestrator | Wednesday 25 March 2026 06:22:34 +0000 (0:00:02.440) 1:14:51.989 *******
2026-03-25 06:22:35.580300 | orchestrator | skipping: [testbed-node-0]
2026-03-25 06:22:35.580306 | orchestrator | skipping: [testbed-node-1]
2026-03-25 06:22:35.580313 | orchestrator | skipping: [testbed-node-2]
2026-03-25 06:22:35.580322 | orchestrator | skipping: [testbed-node-3]
2026-03-25 06:23:24.529574 | orchestrator | skipping: [testbed-node-4]
2026-03-25 06:23:24.529727 | orchestrator | skipping: [testbed-node-5]
2026-03-25 06:23:24.529741 | orchestrator | skipping: [testbed-manager]
2026-03-25 06:23:24.529751 | orchestrator |
2026-03-25 06:23:24.529762 | orchestrator | PLAY [Upgrade monitoring node] *************************************************
2026-03-25 06:23:24.529773 | orchestrator |
2026-03-25 06:23:24.529783 | orchestrator | TASK [Stop monitoring services] ************************************************
2026-03-25 06:23:24.529793 | orchestrator | Wednesday 25 March 2026 06:22:37 +0000 (0:00:02.998) 1:14:54.987 *******
2026-03-25 06:23:24.529804 | orchestrator | skipping: [testbed-manager] => (item=alertmanager)
2026-03-25 06:23:24.529814 | orchestrator | skipping: [testbed-manager] => (item=prometheus)
2026-03-25 06:23:24.529823 | orchestrator | skipping: [testbed-manager] => (item=grafana-server)
2026-03-25 06:23:24.529833 | orchestrator | skipping: [testbed-manager]
2026-03-25 06:23:24.529842 | orchestrator |
2026-03-25 06:23:24.529852 | orchestrator | TASK [ceph-facts : Set grafana_server_addr fact - ipv4] ************************
2026-03-25 06:23:24.529862 | orchestrator | Wednesday 25 March 2026 06:22:39 +0000 (0:00:01.182) 1:14:56.170 *******
2026-03-25 06:23:24.529871 | orchestrator | skipping: [testbed-manager]
2026-03-25 06:23:24.529881 | orchestrator |
2026-03-25 06:23:24.529890 | orchestrator | TASK [ceph-facts : Set grafana_server_addr fact - ipv6] ************************
2026-03-25 06:23:24.529900 | orchestrator | Wednesday 25 March 2026 06:22:40 +0000 (0:00:01.171) 1:14:57.341 *******
2026-03-25 06:23:24.529910 | orchestrator | skipping: [testbed-manager]
2026-03-25 06:23:24.529919 | orchestrator |
2026-03-25 06:23:24.529929 | orchestrator | TASK [ceph-facts : Set grafana_server_addrs fact - ipv4] ***********************
2026-03-25 06:23:24.529938 | orchestrator | Wednesday 25 March 2026 06:22:41 +0000 (0:00:01.117) 1:14:58.459 *******
2026-03-25 06:23:24.529948 | orchestrator | skipping: [testbed-manager]
2026-03-25 06:23:24.529957 | orchestrator |
2026-03-25 06:23:24.529967 | orchestrator | TASK [ceph-facts : Set grafana_server_addrs fact - ipv6] ***********************
2026-03-25 06:23:24.529976 | orchestrator | Wednesday 25 March 2026 06:22:42 +0000 (0:00:01.136) 1:14:59.596 *******
2026-03-25 06:23:24.529986 | orchestrator | skipping: [testbed-manager]
2026-03-25 06:23:24.529995 | orchestrator |
2026-03-25 06:23:24.530005 | orchestrator | TASK [ceph-prometheus : Create prometheus directories] *************************
2026-03-25 06:23:24.530083 | orchestrator | Wednesday 25 March 2026 06:22:43 +0000 (0:00:01.352) 1:15:00.948 *******
2026-03-25 06:23:24.530097 | orchestrator | skipping: [testbed-manager] => (item=/etc/prometheus)
2026-03-25 06:23:24.530108 | orchestrator | skipping: [testbed-manager] => (item=/var/lib/prometheus)
2026-03-25 06:23:24.530119 | orchestrator | skipping: [testbed-manager]
2026-03-25 06:23:24.530130 | orchestrator |
2026-03-25 06:23:24.530141 | orchestrator | TASK [ceph-prometheus : Write prometheus config file] **************************
2026-03-25 06:23:24.530151 | orchestrator | Wednesday 25 March 2026 06:22:45 +0000 (0:00:01.181) 1:15:02.130 *******
2026-03-25 06:23:24.530162 | orchestrator | skipping: [testbed-manager]
2026-03-25 06:23:24.530172 | orchestrator |
2026-03-25 06:23:24.530183 | orchestrator | TASK [ceph-prometheus : Make sure the alerting rules directory exists] *********
2026-03-25 06:23:24.530194 | orchestrator | Wednesday 25 March 2026 06:22:46 +0000 (0:00:01.130) 1:15:03.260 *******
2026-03-25 06:23:24.530205 | orchestrator | skipping: [testbed-manager]
2026-03-25 06:23:24.530216 | orchestrator |
2026-03-25 06:23:24.530227 | orchestrator | TASK [ceph-prometheus : Copy alerting rules] ***********************************
2026-03-25 06:23:24.530259 | orchestrator | Wednesday 25 March 2026 06:22:47 +0000 (0:00:01.141) 1:15:04.402 *******
2026-03-25 06:23:24.530271 | orchestrator | skipping: [testbed-manager]
2026-03-25 06:23:24.530283 | orchestrator |
2026-03-25 06:23:24.530294 | orchestrator | TASK [ceph-prometheus : Create alertmanager directories] ***********************
2026-03-25 06:23:24.530305 | orchestrator | Wednesday 25 March 2026 06:22:48 +0000 (0:00:01.169) 1:15:05.571 *******
2026-03-25 06:23:24.530316 | orchestrator | skipping: [testbed-manager] => (item=/etc/alertmanager)
2026-03-25 06:23:24.530327 | orchestrator | skipping: [testbed-manager] => (item=/var/lib/alertmanager)
2026-03-25 06:23:24.530338 | orchestrator | skipping: [testbed-manager]
2026-03-25 06:23:24.530349 | orchestrator |
2026-03-25 06:23:24.530359 | orchestrator | TASK [ceph-prometheus : Write alertmanager config file] ************************
2026-03-25 06:23:24.530369 | orchestrator | Wednesday 25 March 2026 06:22:49 +0000 (0:00:01.198) 1:15:06.770 *******
2026-03-25 06:23:24.530380 | orchestrator | skipping: [testbed-manager]
2026-03-25 06:23:24.530391 | orchestrator |
2026-03-25 06:23:24.530402 | orchestrator | TASK [ceph-prometheus : Include setup_container.yml] ***************************
2026-03-25 06:23:24.530413 | orchestrator | Wednesday 25 March 2026 06:22:50 +0000 (0:00:01.146) 1:15:07.917 *******
2026-03-25 06:23:24.530423 | orchestrator | skipping: [testbed-manager]
2026-03-25 06:23:24.530432 | orchestrator |
2026-03-25 06:23:24.530441 | orchestrator | TASK [ceph-grafana : Include setup_container.yml] ******************************
2026-03-25 06:23:24.530463 | orchestrator | Wednesday 25 March 2026 06:22:52 +0000 (0:00:01.126) 1:15:09.044 *******
2026-03-25 06:23:24.530473 | orchestrator | skipping: [testbed-manager]
2026-03-25 06:23:24.530483 | orchestrator |
2026-03-25 06:23:24.530492 | orchestrator | TASK [ceph-grafana : Include configure_grafana.yml] ****************************
2026-03-25 06:23:24.530501 | orchestrator | Wednesday 25 March 2026 06:22:53 +0000 (0:00:01.156) 1:15:10.200 *******
2026-03-25 06:23:24.530511 | orchestrator | skipping: [testbed-manager]
2026-03-25 06:23:24.530520 | orchestrator |
2026-03-25 06:23:24.530530 | orchestrator | PLAY [Upgrade ceph dashboard] **************************************************
2026-03-25 06:23:24.530539 | orchestrator |
2026-03-25 06:23:24.530549 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-03-25 06:23:24.530558 | orchestrator | Wednesday 25 March 2026 06:22:55 +0000 (0:00:01.979) 1:15:12.180 *******
2026-03-25 06:23:24.530567 | orchestrator | skipping: [testbed-node-0]
2026-03-25 06:23:24.530577 | orchestrator | skipping: [testbed-node-1]
2026-03-25 06:23:24.530618 | orchestrator | skipping: [testbed-node-2]
2026-03-25 06:23:24.530628 | orchestrator |
2026-03-25 06:23:24.530638 | orchestrator | TASK [ceph-facts : Set grafana_server_addr fact - ipv4] ************************
2026-03-25 06:23:24.530647 | orchestrator | Wednesday 25 March 2026 06:22:56 +0000 (0:00:01.395) 1:15:13.576 *******
2026-03-25 06:23:24.530657 | orchestrator | skipping: [testbed-node-0]
2026-03-25 06:23:24.530667 | orchestrator | skipping: [testbed-node-1]
2026-03-25 06:23:24.530692 | orchestrator | skipping: [testbed-node-2]
2026-03-25 06:23:24.530702 | orchestrator |
2026-03-25 06:23:24.530712 | orchestrator | TASK [ceph-facts : Set grafana_server_addr fact - ipv6] ************************
2026-03-25 06:23:24.530721 | orchestrator | Wednesday 25 March 2026 06:22:58 +0000 (0:00:01.488) 1:15:15.065 *******
2026-03-25 06:23:24.530731 | orchestrator | skipping: [testbed-node-0]
2026-03-25 06:23:24.530740 | orchestrator | skipping: [testbed-node-1]
2026-03-25 06:23:24.530750 | orchestrator | skipping: [testbed-node-2]
2026-03-25 06:23:24.530759 | orchestrator |
2026-03-25 06:23:24.530769 | orchestrator | TASK [ceph-facts : Set grafana_server_addrs fact - ipv4] ***********************
2026-03-25 06:23:24.530779 | orchestrator | Wednesday 25 March 2026 06:22:59 +0000 (0:00:01.379) 1:15:16.444 *******
2026-03-25 06:23:24.530788 | orchestrator | skipping: [testbed-node-0]
2026-03-25 06:23:24.530798 | orchestrator | skipping: [testbed-node-1]
2026-03-25 06:23:24.530807 | orchestrator | skipping: [testbed-node-2]
2026-03-25 06:23:24.530816 | orchestrator |
2026-03-25 06:23:24.530826 | orchestrator | TASK [ceph-facts : Set grafana_server_addrs fact - ipv6] ***********************
2026-03-25 06:23:24.530843 | orchestrator | Wednesday 25 March 2026 06:23:00 +0000 (0:00:01.335) 1:15:17.780 *******
2026-03-25 06:23:24.530853 | orchestrator | skipping: [testbed-node-0]
2026-03-25 06:23:24.530863 | orchestrator | skipping: [testbed-node-1]
2026-03-25 06:23:24.530872 | orchestrator | skipping: [testbed-node-2]
2026-03-25 06:23:24.530882 | orchestrator |
2026-03-25 06:23:24.530891 | orchestrator | TASK [ceph-dashboard : Include configure_dashboard.yml] ************************
2026-03-25 06:23:24.530901 | orchestrator | Wednesday 25 March 2026 06:23:02 +0000 (0:00:01.400) 1:15:19.181 *******
2026-03-25 06:23:24.530910 | orchestrator | skipping: [testbed-node-0]
2026-03-25 06:23:24.530920 | orchestrator | skipping: [testbed-node-1]
2026-03-25 06:23:24.530929 | orchestrator | skipping: [testbed-node-2]
2026-03-25 06:23:24.530938 | orchestrator |
2026-03-25 06:23:24.530948 | orchestrator | TASK [ceph-dashboard : Print dashboard URL] ************************************
2026-03-25 06:23:24.530957 | orchestrator | Wednesday 25 March 2026 06:23:03 +0000 (0:00:01.711) 1:15:20.892 *******
2026-03-25 06:23:24.530966 | orchestrator | skipping: [testbed-node-0]
2026-03-25 06:23:24.530976 | orchestrator |
2026-03-25 06:23:24.530985 | orchestrator | PLAY [Switch any existing crush buckets to straw2] *****************************
2026-03-25 06:23:24.530995 | orchestrator |
2026-03-25 06:23:24.531004 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-03-25 06:23:24.531014 | orchestrator | Wednesday 25 March 2026 06:23:05 +0000 (0:00:01.549) 1:15:22.442 *******
2026-03-25 06:23:24.531023 | orchestrator | ok: [testbed-node-0]
2026-03-25 06:23:24.531033 | orchestrator |
2026-03-25 06:23:24.531043 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-03-25 06:23:24.531052 | orchestrator | Wednesday 25 March 2026 06:23:06 +0000 (0:00:01.428) 1:15:23.871 *******
2026-03-25 06:23:24.531061 | orchestrator | ok: [testbed-node-0]
2026-03-25 06:23:24.531071 | orchestrator |
2026-03-25 06:23:24.531080 | orchestrator | TASK [Set_fact ceph_cmd] *******************************************************
2026-03-25 06:23:24.531090 | orchestrator | Wednesday 25 March 2026 06:23:08 +0000 (0:00:01.150) 1:15:25.021 *******
2026-03-25 06:23:24.531099 | orchestrator | ok: [testbed-node-0]
2026-03-25 06:23:24.531109 | orchestrator |
2026-03-25 06:23:24.531118 | orchestrator | TASK [Backup the crushmap] *****************************************************
2026-03-25 06:23:24.531128 | orchestrator | Wednesday 25 March 2026 06:23:09 +0000 (0:00:01.203) 1:15:26.224 *******
2026-03-25 06:23:24.531137 | orchestrator | ok: [testbed-node-0]
2026-03-25 06:23:24.531147 | orchestrator |
2026-03-25 06:23:24.531156 | orchestrator | TASK [Switch crush buckets to straw2] ******************************************
2026-03-25 06:23:24.531166 | orchestrator | Wednesday 25 March 2026 06:23:12 +0000 (0:00:02.883) 1:15:29.108 *******
2026-03-25 06:23:24.531175 | orchestrator | ok: [testbed-node-0]
2026-03-25 06:23:24.531185 | orchestrator |
2026-03-25 06:23:24.531194 | orchestrator | TASK [Remove crushmap backup] **************************************************
2026-03-25 06:23:24.531204 | orchestrator | Wednesday 25 March 2026 06:23:15 +0000 (0:00:03.369) 1:15:32.478 *******
2026-03-25 06:23:24.531213 | orchestrator | changed: [testbed-node-0]
2026-03-25 06:23:24.531223 | orchestrator |
2026-03-25 06:23:24.531232 | orchestrator | PLAY [Show ceph status] ********************************************************
2026-03-25 06:23:24.531242 | orchestrator |
2026-03-25 06:23:24.531255 | orchestrator | TASK [Set_fact container_exec_cmd_status] **************************************
2026-03-25 06:23:24.531272 | orchestrator | Wednesday 25 March 2026 06:23:17 +0000 (0:00:01.848) 1:15:34.326 *******
2026-03-25 06:23:24.531287 | orchestrator | ok: [testbed-node-0]
2026-03-25 06:23:24.531303 | orchestrator | ok: [testbed-node-1]
2026-03-25 06:23:24.531318 | orchestrator | ok: [testbed-node-2]
2026-03-25 06:23:24.531333 | orchestrator |
2026-03-25 06:23:24.531350 | orchestrator | TASK [Show ceph status] ********************************************************
2026-03-25 06:23:24.531365 | orchestrator | Wednesday 25 March 2026 06:23:18 +0000 (0:00:01.533) 1:15:35.860 *******
2026-03-25 06:23:24.531381 | orchestrator | ok: [testbed-node-0]
2026-03-25 06:23:24.531397 | orchestrator |
2026-03-25 06:23:24.531413 | orchestrator | TASK [Show all daemons version] ************************************************
2026-03-25 06:23:24.531449 | orchestrator | Wednesday 25 March 2026 06:23:21 +0000 (0:00:02.245) 1:15:38.106 *******
2026-03-25 06:23:24.531466 | orchestrator | ok: [testbed-node-0]
2026-03-25 06:23:24.531482 | orchestrator |
2026-03-25 06:23:24.531498 | orchestrator | PLAY RECAP *********************************************************************
2026-03-25 06:23:24.531515 | orchestrator | localhost : ok=0 changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-25 06:23:24.531533 | orchestrator | testbed-manager : ok=25  changed=1  unreachable=0 failed=0 skipped=76  rescued=0 ignored=0
2026-03-25 06:23:24.531550 | orchestrator | testbed-node-0 : ok=248  changed=19  unreachable=0 failed=0 skipped=369  rescued=0 ignored=0
2026-03-25 06:23:24.531566 | orchestrator | testbed-node-1 : ok=191  changed=14  unreachable=0 failed=0 skipped=343  rescued=0 ignored=0
2026-03-25 06:23:24.531621 | orchestrator | testbed-node-2 : ok=196  changed=14  unreachable=0 failed=0 skipped=344  rescued=0 ignored=0
2026-03-25 06:23:25.379638 | orchestrator | testbed-node-3 : ok=316  changed=21  unreachable=0 failed=0 skipped=355  rescued=0 ignored=0
2026-03-25 06:23:25.379711 | orchestrator | testbed-node-4 : ok=308  changed=16  unreachable=0 failed=0 skipped=352  rescued=0 ignored=0
2026-03-25 06:23:25.379719 | orchestrator | testbed-node-5 : ok=303  changed=17  unreachable=0 failed=0 skipped=337  rescued=0 ignored=0
2026-03-25 06:23:25.379724 | orchestrator |
2026-03-25 06:23:25.379729 | orchestrator |
2026-03-25 06:23:25.379734 | orchestrator |
2026-03-25 06:23:25.379739 | orchestrator | TASKS RECAP ********************************************************************
2026-03-25 06:23:25.379745 | orchestrator | Wednesday 25 March 2026 06:23:24 +0000 (0:00:03.414) 1:15:41.521 *******
2026-03-25 06:23:25.379750 | orchestrator | ===============================================================================
2026-03-25 06:23:25.379754 | orchestrator | Re-enable pg autoscale on pools ---------------------------------------- 75.22s
2026-03-25 06:23:25.379759 | orchestrator | Disable pg autoscale on pools ------------------------------------------ 74.42s
2026-03-25 06:23:25.379763 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 31.70s
2026-03-25 06:23:25.379768 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 31.67s
2026-03-25 06:23:25.379772 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 31.13s
2026-03-25 06:23:25.379777 | orchestrator | Gather and delegate facts ---------------------------------------------- 30.99s
2026-03-25 06:23:25.379781 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 29.97s
2026-03-25 06:23:25.379786 | orchestrator | Waiting for clean pgs... ----------------------------------------------- 27.75s
2026-03-25 06:23:25.379790 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... ------------ 23.00s
2026-03-25 06:23:25.379795 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... ------------ 22.98s
2026-03-25 06:23:25.379799 | orchestrator | ceph-config : Set config to cluster ------------------------------------ 22.46s
2026-03-25 06:23:25.379804 | orchestrator | Stop ceph mgr ---------------------------------------------------------- 17.87s
2026-03-25 06:23:25.379808 | orchestrator | Create potentially missing keys (rbd and rbd-mirror) ------------------- 16.62s
2026-03-25 06:23:25.379813 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 14.97s
2026-03-25 06:23:25.379817 | orchestrator | ceph-config : Set config to cluster ------------------------------------ 14.46s
2026-03-25 06:23:25.379822 | orchestrator | Stop ceph osd ---------------------------------------------------------- 12.78s
2026-03-25 06:23:25.379826 | orchestrator | ceph-config : Set osd_memory_target to cluster host config ------------- 12.54s
2026-03-25 06:23:25.379850 | orchestrator | ceph-config : Set osd_memory_target to cluster host config ------------- 12.39s
2026-03-25 06:23:25.379855 | orchestrator | ceph-infra : Update cache for Debian based OSs ------------------------- 11.09s
2026-03-25 06:23:25.379859 | orchestrator | Stop standby ceph mds -------------------------------------------------- 10.70s
2026-03-25 06:23:25.695047 | orchestrator | + osism apply cephclient
2026-03-25 06:23:27.829154 | orchestrator | 2026-03-25 06:23:27 | INFO  | Task 8683e9c0-a5a7-4434-b54d-eea0afd3af5a (cephclient) was prepared for execution.
2026-03-25 06:23:27.829259 | orchestrator | 2026-03-25 06:23:27 | INFO  | It takes a moment until task 8683e9c0-a5a7-4434-b54d-eea0afd3af5a (cephclient) has been started and output is visible here.
2026-03-25 06:23:46.895994 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin
2026-03-25 06:23:46.896108 | orchestrator | (): Expecting value: line 2 column 1 (char 1)
2026-03-25 06:23:46.896136 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin
2026-03-25 06:23:46.896147 | orchestrator | (): 'NoneType' object is not subscriptable
2026-03-25 06:23:46.896187 | orchestrator |
2026-03-25 06:23:46.896198 | orchestrator | PLAY [Apply role cephclient] ***************************************************
2026-03-25 06:23:46.896210 | orchestrator |
2026-03-25 06:23:46.896220 | orchestrator | TASK [osism.services.cephclient : Include container tasks] *********************
2026-03-25 06:23:46.896231 | orchestrator | Wednesday 25 March 2026 06:23:34 +0000 (0:00:02.009) 0:00:02.009 *******
2026-03-25 06:23:46.896243 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager
2026-03-25 06:23:46.896255 | orchestrator |
2026-03-25 06:23:46.896266 | orchestrator | TASK [osism.services.cephclient : Create required directories] *****************
2026-03-25 06:23:46.896277 | orchestrator | Wednesday 25 March 2026 06:23:35 +0000 (0:00:00.761) 0:00:02.771 *******
2026-03-25 06:23:46.896287 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient/configuration)
2026-03-25 06:23:46.896298 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient/data)
2026-03-25 06:23:46.896310 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient)
2026-03-25 06:23:46.896321 | orchestrator |
2026-03-25 06:23:46.896331 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ********************
2026-03-25 06:23:46.896342 | orchestrator | Wednesday 25 March 2026 06:23:36 +0000 (0:00:01.679) 0:00:04.450 *******
2026-03-25 06:23:46.896353 | orchestrator | ok: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'})
2026-03-25 06:23:46.896364 | orchestrator |
2026-03-25 06:23:46.896375 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] ***************************
2026-03-25 06:23:46.896385 | orchestrator | Wednesday 25 March 2026 06:23:37 +0000 (0:00:01.044) 0:00:05.495 *******
2026-03-25 06:23:46.896396 | orchestrator | ok: [testbed-manager]
2026-03-25 06:23:46.896407 | orchestrator |
2026-03-25 06:23:46.896417 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] ****************
2026-03-25 06:23:46.896428 | orchestrator | Wednesday 25 March 2026 06:23:38 +0000 (0:00:00.931) 0:00:06.427 *******
2026-03-25 06:23:46.896439 | orchestrator | ok: [testbed-manager]
2026-03-25 06:23:46.896449 | orchestrator |
2026-03-25 06:23:46.896460 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] *******************
2026-03-25 06:23:46.896471 | orchestrator | Wednesday 25 March 2026 06:23:39 +0000 (0:00:00.968) 0:00:07.395 *******
2026-03-25 06:23:46.896482 | orchestrator | ok: [testbed-manager]
2026-03-25 06:23:46.896492 | orchestrator |
2026-03-25 06:23:46.896503 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************
2026-03-25 06:23:46.896513 | orchestrator | Wednesday 25 March 2026 06:23:40 +0000 (0:00:01.110) 0:00:08.506 *******
2026-03-25 06:23:46.896546 | orchestrator | ok: [testbed-manager] => (item=ceph)
2026-03-25 06:23:46.896559 | orchestrator | ok: [testbed-manager] => (item=ceph-authtool)
2026-03-25 06:23:46.896600 | orchestrator | ok: [testbed-manager] => (item=rados)
2026-03-25 06:23:46.896612 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin)
2026-03-25 06:23:46.896624 | orchestrator | ok: [testbed-manager] => (item=rbd)
2026-03-25 06:23:46.896636 | orchestrator |
2026-03-25 06:23:46.896648 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ******************
2026-03-25 06:23:46.896660 | orchestrator | Wednesday 25 March 2026 06:23:44 +0000 (0:00:03.942) 0:00:12.448 *******
2026-03-25 06:23:46.896672 | orchestrator | ok: [testbed-manager] => (item=crushtool)
2026-03-25 06:23:46.896684 | orchestrator |
2026-03-25 06:23:46.896696 | orchestrator | TASK [osism.services.cephclient : Include package tasks] ***********************
2026-03-25 06:23:46.896708 | orchestrator | Wednesday 25 March 2026 06:23:45 +0000 (0:00:00.490) 0:00:12.939 *******
2026-03-25 06:23:46.896720 | orchestrator | skipping: [testbed-manager]
2026-03-25 06:23:46.896732 | orchestrator |
2026-03-25 06:23:46.896744 | orchestrator | TASK [osism.services.cephclient : Include rook task] ***************************
2026-03-25 06:23:46.896756 | orchestrator | Wednesday 25 March 2026 06:23:45 +0000 (0:00:00.154) 0:00:13.093 *******
2026-03-25 06:23:46.896768 | orchestrator | skipping: [testbed-manager]
2026-03-25 06:23:46.896780 | orchestrator |
2026-03-25 06:23:46.896792 | orchestrator | PLAY RECAP *********************************************************************
2026-03-25 06:23:46.896804 | orchestrator | testbed-manager : ok=8  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-25 06:23:46.896817 | orchestrator |
2026-03-25 06:23:46.896828 | orchestrator |
2026-03-25 06:23:46.896840 | orchestrator | TASKS RECAP ********************************************************************
2026-03-25 06:23:46.896852 | orchestrator | Wednesday 25 March 2026 06:23:46 +0000 (0:00:01.076) 0:00:14.169 *******
2026-03-25 06:23:46.896864 | orchestrator | ===============================================================================
2026-03-25 06:23:46.896877 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 3.94s
2026-03-25 06:23:46.896889 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.68s
2026-03-25 06:23:46.896901 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------- 1.11s
2026-03-25 06:23:46.896911 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 1.08s
2026-03-25 06:23:46.896922 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.04s
2026-03-25 06:23:46.896933 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.97s
2026-03-25 06:23:46.896960 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 0.93s
2026-03-25 06:23:46.896972 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.76s
2026-03-25 06:23:46.896982 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.49s
2026-03-25 06:23:46.896993 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.15s
2026-03-25 06:23:47.189919 | orchestrator | + [[ false == \f\a\l\s\e ]]
2026-03-25 06:23:47.189990 | orchestrator | + sh -c /opt/configuration/scripts/upgrade/300-openstack.sh
2026-03-25 06:23:47.195610 | orchestrator | + set -e
2026-03-25 06:23:47.195644 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-03-25 06:23:47.195651 | orchestrator | ++ export INTERACTIVE=false
2026-03-25 06:23:47.195655 | orchestrator | ++ INTERACTIVE=false
2026-03-25 06:23:47.195660 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-03-25 06:23:47.195665 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-03-25 06:23:47.195669 | orchestrator | + source /opt/manager-vars.sh
2026-03-25 06:23:47.195674 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-03-25 06:23:47.195679 | orchestrator | ++ NUMBER_OF_NODES=6
2026-03-25 06:23:47.195683 | orchestrator | ++ export CEPH_VERSION=reef
2026-03-25 06:23:47.195688 | orchestrator | ++ CEPH_VERSION=reef
2026-03-25 06:23:47.195693 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-03-25 06:23:47.195697 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-03-25 06:23:47.195716 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-03-25 06:23:47.195721 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-03-25 06:23:47.195726 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-03-25 06:23:47.195730 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-03-25 06:23:47.195735 | orchestrator | ++ export ARA=false
2026-03-25 06:23:47.195739 | orchestrator | ++ ARA=false
2026-03-25 06:23:47.195744 | orchestrator | ++ export DEPLOY_MODE=manager
2026-03-25 06:23:47.195748 | orchestrator | ++ DEPLOY_MODE=manager
2026-03-25 06:23:47.195753 | orchestrator | ++ export TEMPEST=false
2026-03-25 06:23:47.195757 | orchestrator | ++ TEMPEST=false
2026-03-25 06:23:47.195762 | orchestrator | ++ export IS_ZUUL=true
2026-03-25 06:23:47.195766 | orchestrator | ++ IS_ZUUL=true
2026-03-25 06:23:47.195771 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.44
2026-03-25 06:23:47.195775 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.44
2026-03-25 06:23:47.195780 | orchestrator | ++ export EXTERNAL_API=false
2026-03-25 06:23:47.195784 | orchestrator | ++ EXTERNAL_API=false
2026-03-25 06:23:47.195788 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-03-25 06:23:47.195793 | orchestrator | ++ IMAGE_USER=ubuntu
2026-03-25 06:23:47.195797 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-03-25 06:23:47.195802 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-03-25 06:23:47.195807 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-03-25 06:23:47.195811 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-03-25 06:23:47.195816 | orchestrator | ++ export RABBITMQ3TO4=true
2026-03-25 06:23:47.195820 | orchestrator | ++ RABBITMQ3TO4=true
2026-03-25 06:23:47.195824 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2026-03-25 06:23:47.196189 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2026-03-25 06:23:47.200264 | orchestrator | ++ export MANAGER_VERSION=10.0.0-rc.1
2026-03-25 06:23:47.200333 | orchestrator | ++ MANAGER_VERSION=10.0.0-rc.1
2026-03-25 06:23:47.200346 | orchestrator | + [[ true == \t\r\u\e ]]
2026-03-25 06:23:47.200355 | orchestrator | + osism migrate rabbitmq3to4 prepare
2026-03-25 06:24:09.303232 | orchestrator | 2026-03-25 06:24:09 | ERROR  | Unable to get ansible vault password
2026-03-25 06:24:09.303339 | orchestrator | 2026-03-25 06:24:09 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-03-25 06:24:09.303353 | orchestrator | 2026-03-25 06:24:09 | ERROR  | Dropping encrypted entries
2026-03-25 06:24:09.339429 | orchestrator | 2026-03-25 06:24:09 | INFO  | Connecting to RabbitMQ Management API at 192.168.16.10:15672 (node: testbed-node-0) as openstack...
2026-03-25 06:24:09.340649 | orchestrator | 2026-03-25 06:24:09 | INFO  | Kolla configuration check passed
2026-03-25 06:24:09.572956 | orchestrator | 2026-03-25 06:24:09 | INFO  | Created vhost 'openstack' with default_queue_type=quorum
2026-03-25 06:24:09.593045 | orchestrator | 2026-03-25 06:24:09 | INFO  | Set permissions for user 'openstack' on vhost 'openstack'
2026-03-25 06:24:09.908174 | orchestrator | + osism migrate rabbitmq3to4 list
2026-03-25 06:24:31.458603 | orchestrator | 2026-03-25 06:24:31 | ERROR  | Unable to get ansible vault password
2026-03-25 06:24:31.458747 | orchestrator | 2026-03-25 06:24:31 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-03-25 06:24:31.458776 | orchestrator | 2026-03-25 06:24:31 | ERROR  | Dropping encrypted entries
2026-03-25 06:24:31.491352 | orchestrator | 2026-03-25 06:24:31 | INFO  | Connecting to RabbitMQ Management API at 192.168.16.10:15672 (node: testbed-node-0) as openstack...
2026-03-25 06:24:31.622938 | orchestrator | 2026-03-25 06:24:31 | INFO  | Found 205 classic queue(s) in vhost '/':
2026-03-25 06:24:31.623035 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - alarm.all.sample (vhost: /, messages: 0)
2026-03-25 06:24:31.623049 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - alarming.sample (vhost: /, messages: 0)
2026-03-25 06:24:31.623060 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - barbican.workers (vhost: /, messages: 0)
2026-03-25 06:24:31.623072 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - barbican.workers.barbican.queue (vhost: /, messages: 0)
2026-03-25 06:24:31.623116 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - barbican.workers_fanout_249fbc27bbe64bc3bcbec46d1c8f0ab8 (vhost: /, messages: 0)
2026-03-25 06:24:31.623130 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - barbican.workers_fanout_28d698cac310422a801f0ef93feddb73 (vhost: /, messages: 0)
2026-03-25 06:24:31.623141 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - barbican.workers_fanout_d2b04717ac6643a7a6be361b12765c28 (vhost: /, messages: 0)
2026-03-25 06:24:31.623152 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - barbican_notifications.info (vhost: /, messages: 0)
2026-03-25 06:24:31.623696 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - central (vhost: /, messages: 0)
2026-03-25 06:24:31.623990 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - central.testbed-node-0 (vhost: /, messages: 0)
2026-03-25 06:24:31.624021 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - central.testbed-node-1 (vhost: /, messages: 0)
2026-03-25 06:24:31.624130 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - central.testbed-node-2 (vhost: /, messages: 0)
2026-03-25 06:24:31.625311 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - central_fanout_013d8448763f46618985ca22915dd04e (vhost: /, messages: 0)
2026-03-25 06:24:31.625340 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - central_fanout_15572042695946b1935da6bc038ccd8e (vhost: /, messages: 0)
2026-03-25 06:24:31.625352 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - central_fanout_61a1550b5cdc445886a854c92ad430d0 (vhost: /, messages: 0)
2026-03-25 06:24:31.625362 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - central_fanout_798f412bd0264302a63da3094eaf91b1 (vhost: /, messages: 0)
2026-03-25 06:24:31.625374 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - central_fanout_d1976e3db5744f869bcf0801d33e34ea (vhost: /, messages: 0)
2026-03-25 06:24:31.625750 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - central_fanout_d69e1276c12b46d0ba2c324eee10315e (vhost: /, messages: 0)
2026-03-25 06:24:31.625773 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - cinder-backup (vhost: /, messages: 0)
2026-03-25 06:24:31.625785 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - cinder-backup.testbed-node-0 (vhost: /, messages: 0)
2026-03-25 06:24:31.625996 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - cinder-backup.testbed-node-1 (vhost: /, messages: 0)
2026-03-25 06:24:31.626316 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - cinder-backup.testbed-node-2 (vhost: /, messages: 0)
2026-03-25 06:24:31.626618 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - cinder-backup_fanout_5384bb2d1cf34bc9a44e043789359b9e (vhost: /, messages: 0)
2026-03-25 06:24:31.629700 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - cinder-backup_fanout_81be2039042b43ad95cc1203467d1883 (vhost: /, messages: 0)
2026-03-25 06:24:31.629739 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - cinder-scheduler (vhost: /, messages: 0)
2026-03-25 06:24:31.629750 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - cinder-scheduler.testbed-node-0 (vhost: /, messages: 0)
2026-03-25 06:24:31.629762 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - cinder-scheduler.testbed-node-1 (vhost: /, messages: 0)
2026-03-25 06:24:31.629773 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - cinder-scheduler.testbed-node-2 (vhost: /, messages: 0)
2026-03-25 06:24:31.629784 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - cinder-scheduler_fanout_7336026ae6364db39e3c9beaeee6c704 (vhost: /, messages: 0)
2026-03-25 06:24:31.629796 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - cinder-scheduler_fanout_b2768d0ef7a04d97b625f2a8b162a8f1 (vhost: /, messages: 0)
2026-03-25 06:24:31.629829 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - cinder-volume (vhost: /, messages: 0)
2026-03-25 06:24:31.629841 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - cinder-volume.testbed-node-0@rbd-volumes (vhost: /, messages: 0)
2026-03-25 06:24:31.629852 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - cinder-volume.testbed-node-0@rbd-volumes.testbed-node-0 (vhost: /, messages: 0)
2026-03-25 06:24:31.629863 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - cinder-volume.testbed-node-0@rbd-volumes_fanout_67a51463e6874eb797ac3a8655c7a3f7 (vhost: /, messages: 0)
2026-03-25 06:24:31.629875 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - cinder-volume.testbed-node-1@rbd-volumes (vhost: /, messages: 0)
2026-03-25 06:24:31.629886 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - cinder-volume.testbed-node-1@rbd-volumes.testbed-node-1 (vhost: /, messages: 0)
2026-03-25 06:24:31.629897 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - cinder-volume.testbed-node-1@rbd-volumes_fanout_f40a09d8e45848518fd2949dd1d8dd82 (vhost: /, messages: 0)
2026-03-25 06:24:31.629908 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - cinder-volume.testbed-node-2@rbd-volumes (vhost: /, messages: 0)
2026-03-25 06:24:31.629919 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - cinder-volume.testbed-node-2@rbd-volumes.testbed-node-2 (vhost: /, messages: 0)
2026-03-25 06:24:31.629929 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - cinder-volume.testbed-node-2@rbd-volumes_fanout_633355ea47a647899956100315635690 (vhost: /, messages: 0)
2026-03-25 06:24:31.629941 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - cinder-volume_fanout_85024af69bd948e996b62f70ffc21acc (vhost: /, messages: 0)
2026-03-25 06:24:31.629951 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - cinder-volume_fanout_8aba888bdec9428f81fce1c3facfc5e9 (vhost: /, messages: 0)
2026-03-25 06:24:31.629972 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - cinder-volume_fanout_fd54d9241d254807a076ff1d414a3c38 (vhost: /, messages: 0)
2026-03-25 06:24:31.629983 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - compute (vhost: /, messages: 0)
2026-03-25 06:24:31.630272 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - compute.testbed-node-3 (vhost: /, messages: 0)
2026-03-25 06:24:31.630295 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - compute.testbed-node-4 (vhost: /, messages: 0)
2026-03-25 06:24:31.630483 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - compute.testbed-node-5 (vhost: /, messages: 0)
2026-03-25 06:24:31.630504 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - compute_fanout_6aea04a7e951457fa7dcd786146041c5 (vhost: /, messages: 0)
2026-03-25 06:24:31.630680 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - compute_fanout_9de04f4d229e44638ad18ded19d92387 (vhost: /, messages: 0)
2026-03-25 06:24:31.630700 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - compute_fanout_b33d1590fa494af28afce33e3481581f (vhost: /, messages: 0)
2026-03-25 06:24:31.631049 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - conductor (vhost: /, messages: 0)
2026-03-25 06:24:31.631070 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - conductor.testbed-node-0 (vhost: /, messages: 0)
2026-03-25 06:24:31.631167 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - conductor.testbed-node-1 (vhost: /, messages: 0)
2026-03-25 06:24:31.631185 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - conductor.testbed-node-2 (vhost: /, messages: 0)
2026-03-25 06:24:31.631579 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - conductor_fanout_620411599b3b434e963b3b49cafafc97 (vhost: /, messages: 0)
2026-03-25 06:24:31.631942 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - conductor_fanout_6b20d6f61e3947a093c8a2118a56fd12 (vhost: /, messages: 0)
2026-03-25 06:24:31.631978 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - conductor_fanout_cc537640b0ad4dd8b8376b2f7ab56f9a (vhost: /, messages: 0)
2026-03-25 06:24:31.631989 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - conductor_fanout_cf5f2063378c43cc8383f00c8c37d340 (vhost: /, messages: 0)
2026-03-25 06:24:31.632118 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - conductor_fanout_f170be6f19bb403086e8d7fec445061f (vhost: /, messages: 0)
2026-03-25 06:24:31.634055 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - conductor_fanout_f48e5928f9bf47c79a11feb8d902591f (vhost: /, messages: 0)
2026-03-25 06:24:31.634088 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - event.sample (vhost: /, messages: 10)
2026-03-25 06:24:31.634099 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - magnum-conductor (vhost: /, messages: 0)
2026-03-25 06:24:31.634110 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - magnum-conductor.5l2itwnwpjlf (vhost: /, messages: 0)
2026-03-25 06:24:31.634121 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - magnum-conductor.7ynutq6b55cv (vhost: /, messages: 0)
2026-03-25 06:24:31.634131 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - magnum-conductor.p7bse6l4osfg (vhost: /, messages: 0)
2026-03-25 06:24:31.634218 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - magnum-conductor_fanout_052a268efdba43429d0135c63c225c83 (vhost: /, messages: 0)
2026-03-25 06:24:31.634234 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - magnum-conductor_fanout_3789bb1bc86042c391c25ccfbd4c42d6 (vhost: /, messages: 0)
2026-03-25 06:24:31.634245 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - magnum-conductor_fanout_3e187ed1a3be4fb0839c215153e864c3 (vhost: /, messages: 0)
2026-03-25 06:24:31.634256 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - magnum-conductor_fanout_49ffe11408574ddaa05e8a7c1062bb17 (vhost: /, messages: 0)
2026-03-25 06:24:31.634266 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - magnum-conductor_fanout_4c14eeaf67d0434ba3bc1f09e36fc3cd (vhost: /, messages: 0)
2026-03-25 06:24:31.634277 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - magnum-conductor_fanout_bf3e0d410ec042409cf2dc1846378bf5 (vhost: /, messages: 0)
2026-03-25 06:24:31.634288 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - magnum-conductor_fanout_d092fbae24e64f6f8ff5b4be541fb020 (vhost: /, messages: 0)
2026-03-25 06:24:31.634298 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - magnum-conductor_fanout_dfdc765354b44253853f93a0ee20176e (vhost: /, messages: 0)
2026-03-25 06:24:31.634309 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - magnum-conductor_fanout_fc6e5bba935d4874847963c93431d223 (vhost: /, messages: 0)
2026-03-25 06:24:31.634326 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - manila-data (vhost: /, messages: 0)
2026-03-25 06:24:31.634346 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - manila-data.testbed-node-0 (vhost: /, messages: 0)
2026-03-25 06:24:31.634655 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - manila-data.testbed-node-1 (vhost: /, messages: 0)
2026-03-25 06:24:31.634676 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - manila-data.testbed-node-2 (vhost: /, messages: 0)
2026-03-25 06:24:31.634909 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - manila-data_fanout_683a6de683fc4d0bb70d31009c2ed769 (vhost: /, messages: 0)
2026-03-25 06:24:31.635057 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - manila-data_fanout_c5a99ba23d954875979d7f62871aa843 (vhost: /, messages: 0)
2026-03-25 06:24:31.635075 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - manila-data_fanout_fc4efea4812c4774bec4c4df183e462f (vhost: /, messages: 0)
2026-03-25 06:24:31.635302 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - manila-scheduler (vhost: /, messages: 0)
2026-03-25 06:24:31.635335 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - manila-scheduler.testbed-node-0 (vhost: /, messages: 0)
2026-03-25 06:24:31.635671 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - manila-scheduler.testbed-node-1 (vhost: /, messages: 0)
2026-03-25 06:24:31.635699 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - manila-scheduler.testbed-node-2 (vhost: /, messages: 0)
2026-03-25 06:24:31.635963 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - manila-scheduler_fanout_43d7459b033e4a47878f48ffa3bbc71c (vhost: /, messages: 0)
2026-03-25 06:24:31.636101 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - manila-scheduler_fanout_90d90f1fcc854dc09ed84a0307f73a5c (vhost: /, messages: 0)
2026-03-25 06:24:31.636468 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - manila-scheduler_fanout_9bcfe7972dbd4c8186a5759697dfee8b (vhost: /, messages: 0)
2026-03-25 06:24:31.636708 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - manila-share (vhost: /, messages: 0)
2026-03-25 06:24:31.636729 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - manila-share.testbed-node-0@cephfsnative1 (vhost: /, messages: 0)
2026-03-25 06:24:31.636855 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - manila-share.testbed-node-1@cephfsnative1 (vhost: /, messages: 0)
2026-03-25 06:24:31.637032 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - manila-share.testbed-node-2@cephfsnative1 (vhost: /, messages: 0)
2026-03-25 06:24:31.637195 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - manila-share_fanout_6a92ca1b1e81446aa2a798802ad39815 (vhost: /, messages: 0)
2026-03-25 06:24:31.637215 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - manila-share_fanout_8b79e7d94e0f44bb99ea3ed43cc49517 (vhost: /, messages: 0)
2026-03-25 06:24:31.637603 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - manila-share_fanout_aad0fa895bb54651a321f2f0de49b484 (vhost: /, messages: 0)
2026-03-25 06:24:31.637634 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - notifications.audit (vhost: /, messages: 0)
2026-03-25 06:24:31.637861 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - notifications.critical (vhost: /, messages: 0)
2026-03-25 06:24:31.638358 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - notifications.debug (vhost: /, messages: 0)
2026-03-25 06:24:31.638378 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - notifications.error (vhost: /, messages: 0)
2026-03-25 06:24:31.638393 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - notifications.info (vhost: /, messages: 0)
2026-03-25 06:24:31.638575 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - notifications.sample (vhost: /, messages: 0)
2026-03-25 06:24:31.638696 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - notifications.warn (vhost: /, messages: 0)
2026-03-25 06:24:31.638945 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - octavia_provisioning_v2 (vhost: /, messages: 0)
2026-03-25 06:24:31.639161 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - octavia_provisioning_v2.testbed-node-0 (vhost: /, messages: 0)
2026-03-25 06:24:31.639178 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - octavia_provisioning_v2.testbed-node-1 (vhost: /, messages: 0)
2026-03-25 06:24:31.640342 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - octavia_provisioning_v2.testbed-node-2 (vhost: /, messages: 0)
2026-03-25 06:24:31.640440 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - octavia_provisioning_v2_fanout_66ce552f1eed453bb00fb5334e6b8dc6 (vhost: /, messages: 0)
2026-03-25 06:24:31.640464 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - octavia_provisioning_v2_fanout_674594d377ec4b12ab72989e495a8008 (vhost: /, messages: 0)
2026-03-25 06:24:31.640503 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - octavia_provisioning_v2_fanout_9f2c51105eed426fbf53f163cdfb37ab (vhost: /, messages: 0)
2026-03-25 06:24:31.640566 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - producer (vhost: /, messages: 0)
2026-03-25 06:24:31.640580 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - producer.testbed-node-0 (vhost: /, messages: 0)
2026-03-25 06:24:31.640593 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - producer.testbed-node-1 (vhost: /, messages: 0)
2026-03-25 06:24:31.640890 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - producer.testbed-node-2 (vhost: /, messages: 0)
2026-03-25 06:24:31.641978 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - producer_fanout_2da7c37816ca470ba77238c293d06e6d (vhost: /, messages: 0)
2026-03-25 06:24:31.642148 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - producer_fanout_4bc2a645f5b344478d40f9accff9e376 (vhost: /, messages: 0)
2026-03-25 06:24:31.642163 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - producer_fanout_7db3ab2161b542d9b8140dc0adbb0e03 (vhost: /, messages: 0)
2026-03-25 06:24:31.642174 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - producer_fanout_8c55a01735504078bcb6bec0a4b516c7 (vhost: /, messages: 0)
2026-03-25 06:24:31.642186 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - producer_fanout_92637c7ef39b4a2c8348000cc8740ef8 (vhost: /, messages: 0)
2026-03-25 06:24:31.642197 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - producer_fanout_b7c3fb59c66649cebb616b9a7ab93cf4 (vhost: /, messages: 0)
2026-03-25 06:24:31.642264 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - q-plugin (vhost: /, messages: 0)
2026-03-25 06:24:31.642278 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - q-plugin.testbed-node-0 (vhost: /, messages: 0)
2026-03-25 06:24:31.642289 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - q-plugin.testbed-node-1 (vhost: /, messages: 0)
2026-03-25 06:24:31.642300 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - q-plugin.testbed-node-2 (vhost: /, messages: 0)
2026-03-25 06:24:31.642311 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - q-plugin_fanout_17364a893d2e487e8bdad4410896dd03 (vhost: /, messages: 0)
2026-03-25 06:24:31.642322 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - q-plugin_fanout_2cb58fa3cf4e481ab0bcb219025b36fd (vhost: /, messages: 0)
2026-03-25 06:24:31.642332 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - q-plugin_fanout_3f69f7ed23734646a50c832140923a11 (vhost: /, messages: 0)
2026-03-25 06:24:31.642343 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - q-plugin_fanout_499fd62243584cbcbe28aa132f451793 (vhost: /, messages: 0)
2026-03-25 06:24:31.642354 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - q-plugin_fanout_4bc85ebf80b343d780f81580ba4ba224 (vhost: /, messages: 0)
2026-03-25 06:24:31.642364 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - q-plugin_fanout_7847419d0a004880a97bfa7dc79b556a (vhost: /, messages: 0)
2026-03-25 06:24:31.642375 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - q-plugin_fanout_80a8d97c31f74dbcb95bef8abc691e09 (vhost: /, messages: 0)
2026-03-25 06:24:31.642386 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - q-plugin_fanout_88bb3acf16894fd69dd71f6d1b7cc2ec (vhost: /, messages: 0)
2026-03-25 06:24:31.642397 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - q-plugin_fanout_e123c02c187a416ca68ff29c7a95b0b0 (vhost: /, messages: 0)
2026-03-25 06:24:31.642407 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - q-reports-plugin (vhost: /, messages: 0)
2026-03-25 06:24:31.642426 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - q-reports-plugin.testbed-node-0 (vhost: /, messages: 0)
2026-03-25 06:24:31.642437 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - q-reports-plugin.testbed-node-1 (vhost: /, messages: 0)
2026-03-25 06:24:31.642448 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - q-reports-plugin.testbed-node-2 (vhost: /, messages: 0)
2026-03-25 06:24:31.642579 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - q-reports-plugin_fanout_0a1c6a1530a04d5aa4e160949712f6af (vhost: /, messages: 0)
2026-03-25 06:24:31.642702 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - q-reports-plugin_fanout_10ecd76d790041989c30c74667d69fb3 (vhost: /, messages: 0)
2026-03-25 06:24:31.642720 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - q-reports-plugin_fanout_1bf32610e30f48fba442050242a5b0f3 (vhost: /, messages: 0)
2026-03-25 06:24:31.642904 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - q-reports-plugin_fanout_32d3061865d343afb5c20e37c44679e8 (vhost: /, messages: 0)
2026-03-25 06:24:31.643196 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - q-reports-plugin_fanout_39d435c61a254a6596e8a4e8e7a16c37 (vhost: /, messages: 0)
2026-03-25 06:24:31.643221 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - q-reports-plugin_fanout_3a0d346ed5234a92b5a0cbbdac0c60ed (vhost: /, messages: 0)
2026-03-25 06:24:31.643576 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - q-reports-plugin_fanout_40e43374a6474ba288272eebceceef6b (vhost: /, messages: 0)
2026-03-25 06:24:31.643598 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - q-reports-plugin_fanout_4f60bdc6b07b441e923dfaf5f3eacb04 (vhost: /, messages: 0)
2026-03-25 06:24:31.643610 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - q-reports-plugin_fanout_71465ebc70e645deb886ff7519935791 (vhost: /, messages: 0)
2026-03-25 06:24:31.643758 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - q-reports-plugin_fanout_7a749a3028594da58fff169ce097fdfb (vhost: /, messages: 0)
2026-03-25 06:24:31.644126 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - q-reports-plugin_fanout_840edfbfcdba460486fd1ae70fa46132 (vhost: /, messages: 0)
2026-03-25 06:24:31.644145 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - q-reports-plugin_fanout_9bab4dfacb794157bf75dd1f5cadec04 (vhost: /, messages: 0)
2026-03-25 06:24:31.644209 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - q-reports-plugin_fanout_a1c6764adfbc49fe884f6d30638e3650 (vhost: /, messages: 0)
2026-03-25 06:24:31.644226 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - q-reports-plugin_fanout_d88817b8e3ef4f55ae7bd7db04b457d2 (vhost: /, messages: 0)
2026-03-25 06:24:31.644432 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - q-reports-plugin_fanout_e0e0bba969c0429a9a38e85e7ba6cc1f (vhost: /, messages: 0)
2026-03-25 06:24:31.644449 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - q-reports-plugin_fanout_e5c4aa5f90a84f449fb55f8b26e08e38 (vhost: /, messages: 0)
2026-03-25 06:24:31.644523 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - q-reports-plugin_fanout_f3cc1975ccdb4057a3dfbf2455e4c87d (vhost: /, messages: 0)
2026-03-25 06:24:31.644768 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - q-reports-plugin_fanout_fc3277bddf404e3f9da926f50a7fcc3d (vhost: /, messages: 0)
2026-03-25 06:24:31.644786 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - q-server-resource-versions (vhost: /, messages: 0)
2026-03-25 06:24:31.644970 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - q-server-resource-versions.testbed-node-0 (vhost: /, messages: 0)
2026-03-25 06:24:31.645260 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - q-server-resource-versions.testbed-node-1 (vhost: /, messages: 0)
2026-03-25 06:24:31.645280 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - q-server-resource-versions.testbed-node-2 (vhost: /, messages: 0)
2026-03-25 06:24:31.645384 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - q-server-resource-versions_fanout_10878ad6ec3b4ecc9b9c652c0be64c2e (vhost: /, messages: 0)
2026-03-25 06:24:31.645604 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - q-server-resource-versions_fanout_18c8741cacb8410baa3aebe44889113b (vhost: /, messages: 0)
2026-03-25 06:24:31.645695 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - q-server-resource-versions_fanout_3b3d662dce2c4d4388f524d5c8f81da8 (vhost: /, messages: 0)
2026-03-25 06:24:31.645716 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - q-server-resource-versions_fanout_43f4301411e54731a5fa516aeeb7236d (vhost: /, messages: 0)
2026-03-25 06:24:31.645894 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - q-server-resource-versions_fanout_5601841e0a0b4c03a276873d2cde6151 (vhost: /, messages: 0)
2026-03-25 06:24:31.645909 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - q-server-resource-versions_fanout_637558dc67e04720a62e6ae00276b271 (vhost: /, messages: 0)
2026-03-25 06:24:31.646277 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - q-server-resource-versions_fanout_9206f2fd2ce5463c8f69b6c085f19617 (vhost: /, messages: 0)
2026-03-25 06:24:31.646296 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - q-server-resource-versions_fanout_fcfe8ad2dc7b465a8265341746fdd386 (vhost: /, messages: 0)
2026-03-25 06:24:31.646359 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - reply_057dc0571274422a92e4e141ccf2a496 (vhost: /, messages: 0)
2026-03-25 06:24:31.646613 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - reply_08bc111a042b46459fb339cfd5127d1c (vhost: /, messages: 0)
2026-03-25 06:24:31.646632 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - reply_103f0799dd744c8a9558b8702883b3e4 (vhost: /, messages: 0)
2026-03-25 06:24:31.646794 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - reply_164c55cb430442df9ceb5bbad939f001 (vhost: /, messages: 0)
2026-03-25 06:24:31.650821 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - reply_42080c58ce9b4610a2d6844f15ba2e35 (vhost: /, messages: 0)
2026-03-25 06:24:31.650856 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - reply_46de691176fa46e582edb9d19485aa59 (vhost: /, messages: 0)
2026-03-25 06:24:31.650867 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - reply_7f7545b0e67b4a7f8a8f47a1d3c8d892 (vhost: /, messages: 0)
2026-03-25 06:24:31.650876 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - reply_887d7eeb06294e8abb7efbf8b3ce750d (vhost: /, messages: 0)
2026-03-25 06:24:31.650886 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - reply_a50cba5d7b9b426b988e1330e5adf853 (vhost: /, messages: 0)
2026-03-25 06:24:31.650896 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - reply_a6d97dbe3eb64ae99eadf5527bb8553e (vhost: /, messages: 0)
2026-03-25 06:24:31.650907 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - reply_a7adc41b3aed4e86a7af6d60e4f82230 (vhost: /, messages: 0)
2026-03-25 06:24:31.650916 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - reply_b083621655374967bf7f9444947a9722 (vhost: /, messages: 0)
2026-03-25 06:24:31.650925 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - reply_c085f7f0668048b0ab519e79fdae9278 (vhost: /, messages: 0)
2026-03-25 06:24:31.650935 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - reply_cd93fef7106848108509a2f47f1d1e81 (vhost: /, messages: 0)
2026-03-25 06:24:31.650944 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - reply_d1933e23a6664c53924599b78a9fbe17 (vhost: /, messages: 0)
2026-03-25 06:24:31.650953 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - reply_d2da5b7252f9489487533794296cec1b (vhost: /, messages: 0)
2026-03-25 06:24:31.650963 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - reply_e88c9c8094b6420ba1e141f28cdbd543 (vhost: /, messages: 0)
2026-03-25 06:24:31.650973 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - reply_e9192f1abf7b432bbb44b897e2029b4f (vhost: /, messages: 0)
2026-03-25 06:24:31.650982 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - reply_f35cdb747b0d4fe8bbfebe5208a6769e (vhost: /, messages: 0)
2026-03-25 06:24:31.651004 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - scheduler (vhost: /, messages: 0)
2026-03-25 06:24:31.651015 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - scheduler.testbed-node-0 (vhost: /, messages: 0)
2026-03-25 06:24:31.651025 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - scheduler.testbed-node-1 (vhost: /, messages: 0)
2026-03-25 06:24:31.651034 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - scheduler.testbed-node-2 (vhost: /, messages: 0)
2026-03-25 06:24:31.651044 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - scheduler_fanout_22a8ee9fcfdf4181a69111c7b12b75d7 (vhost: /, messages: 0)
2026-03-25 06:24:31.651054 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - scheduler_fanout_2bfeaa3289d54fc286cb4d3bf7cc0fa4 (vhost: /, messages: 0)
2026-03-25 06:24:31.651063 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - scheduler_fanout_c7b10efd24d044278e76c62d25a7a3a8 (vhost: /, messages: 0)
2026-03-25 06:24:31.651074 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - scheduler_fanout_d9aaabf7846d4f98b4dfb6d703f276d7 (vhost: /, messages: 0)
2026-03-25 06:24:31.651083 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - scheduler_fanout_efd915d3d06848aab9d9a4b81c9ed485 (vhost: /,
messages: 0)
2026-03-25 06:24:31.651093 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - scheduler_fanout_f20fe736da4542998bcef127a37b0059 (vhost: /, messages: 0)
2026-03-25 06:24:31.651102 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - worker (vhost: /, messages: 0)
2026-03-25 06:24:31.651112 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - worker.testbed-node-0 (vhost: /, messages: 0)
2026-03-25 06:24:31.651122 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - worker.testbed-node-1 (vhost: /, messages: 0)
2026-03-25 06:24:31.651131 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - worker.testbed-node-2 (vhost: /, messages: 0)
2026-03-25 06:24:31.651141 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - worker_fanout_24fbb3e9a7df4fb1b810bc6d41a838b9 (vhost: /, messages: 0)
2026-03-25 06:24:31.651150 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - worker_fanout_7e797de84003429da8e282d09c05c2fb (vhost: /, messages: 0)
2026-03-25 06:24:31.651160 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - worker_fanout_8b92e793ee414c758f378d02ddc4ac15 (vhost: /, messages: 0)
2026-03-25 06:24:31.651170 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - worker_fanout_9dd328106a364653bd8ac5d70d6360d5 (vhost: /, messages: 0)
2026-03-25 06:24:31.651204 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - worker_fanout_b99adf2bcc6a44d7b720d59c628957ae (vhost: /, messages: 0)
2026-03-25 06:24:31.651215 | orchestrator | 2026-03-25 06:24:31 | INFO  |  - worker_fanout_eece2435daf34831b9372001bc593a53 (vhost: /, messages: 0)
2026-03-25 06:24:31.966374 | orchestrator | + osism migrate rabbitmq3to4 list-exchanges
2026-03-25 06:24:34.013923 | orchestrator | usage: osism migrate rabbitmq3to4 [-h] [--server SERVER] [--dry-run]
2026-03-25 06:24:34.014129 | orchestrator | [--no-close-connections] [--quorum]
2026-03-25 06:24:34.014158 | orchestrator | [--vhost VHOST]
2026-03-25 06:24:34.014171 | orchestrator | [{list,delete,prepare,check}]
2026-03-25 06:24:34.014183 | orchestrator | [{aodh,barbican,ceilometer,cinder,designate,notifications,manager,magnum,manila,neutron,nova,octavia}]
2026-03-25 06:24:34.014196 | orchestrator | osism migrate rabbitmq3to4: error: argument command: invalid choice: 'list-exchanges' (choose from list, delete, prepare, check)
2026-03-25 06:24:34.759303 | orchestrator | ERROR
2026-03-25 06:24:34.759523 | orchestrator | {
2026-03-25 06:24:34.759560 | orchestrator | "delta": "2:07:12.054288",
2026-03-25 06:24:34.759584 | orchestrator | "end": "2026-03-25 06:24:34.344254",
2026-03-25 06:24:34.759605 | orchestrator | "msg": "non-zero return code",
2026-03-25 06:24:34.759625 | orchestrator | "rc": 2,
2026-03-25 06:24:34.759643 | orchestrator | "start": "2026-03-25 04:17:22.289966"
2026-03-25 06:24:34.759661 | orchestrator | } failure
2026-03-25 06:24:35.016499 |
2026-03-25 06:24:35.016626 | PLAY RECAP
2026-03-25 06:24:35.016934 | orchestrator | ok: 30 changed: 11 unreachable: 0 failed: 1 skipped: 6 rescued: 0 ignored: 0
2026-03-25 06:24:35.017011 |
2026-03-25 06:24:35.311431 | RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/upgrade-stable.yml@main]
2026-03-25 06:24:35.315563 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2026-03-25 06:24:36.052944 |
2026-03-25 06:24:36.053210 | PLAY [Post output play]
2026-03-25 06:24:36.070820 |
2026-03-25 06:24:36.070985 | LOOP [stage-output : Register sources]
2026-03-25 06:24:36.141538 |
2026-03-25 06:24:36.141862 | TASK [stage-output : Check sudo]
2026-03-25 06:24:37.016227 | orchestrator | sudo: a password is required
2026-03-25 06:24:37.179737 | orchestrator | ok: Runtime: 0:00:00.016484
2026-03-25 06:24:37.195296 |
2026-03-25 06:24:37.195458 | LOOP [stage-output : Set source and destination for files and folders]
2026-03-25 06:24:37.243449 |
2026-03-25 06:24:37.243692 | TASK [stage-output : Build a list of source, dest dictionaries]
2026-03-25 06:24:37.321545 | orchestrator | ok
2026-03-25 06:24:37.330364 |
2026-03-25 06:24:37.330494 | LOOP [stage-output : Ensure target folders exist]
2026-03-25 06:24:37.793169 | orchestrator | ok: "docs"
2026-03-25 06:24:37.793476 |
2026-03-25 06:24:38.043118 | orchestrator | ok: "artifacts"
2026-03-25 06:24:38.299997 | orchestrator | ok: "logs"
2026-03-25 06:24:38.322883 |
2026-03-25 06:24:38.323074 | LOOP [stage-output : Copy files and folders to staging folder]
2026-03-25 06:24:38.364129 |
2026-03-25 06:24:38.364379 | TASK [stage-output : Make all log files readable]
2026-03-25 06:24:38.654745 | orchestrator | ok
2026-03-25 06:24:38.663213 |
2026-03-25 06:24:38.663347 | TASK [stage-output : Rename log files that match extensions_to_txt]
2026-03-25 06:24:38.700022 | orchestrator | skipping: Conditional result was False
2026-03-25 06:24:38.717367 |
2026-03-25 06:24:38.717550 | TASK [stage-output : Discover log files for compression]
2026-03-25 06:24:38.753022 | orchestrator | skipping: Conditional result was False
2026-03-25 06:24:38.768224 |
2026-03-25 06:24:38.768384 | LOOP [stage-output : Archive everything from logs]
2026-03-25 06:24:38.815409 |
2026-03-25 06:24:38.815661 | PLAY [Post cleanup play]
2026-03-25 06:24:38.824697 |
2026-03-25 06:24:38.824823 | TASK [Set cloud fact (Zuul deployment)]
2026-03-25 06:24:38.886026 | orchestrator | ok
2026-03-25 06:24:38.904704 |
2026-03-25 06:24:38.904960 | TASK [Set cloud fact (local deployment)]
2026-03-25 06:24:38.932431 | orchestrator | skipping: Conditional result was False
2026-03-25 06:24:38.947678 |
2026-03-25 06:24:38.947888 | TASK [Clean the cloud environment]
2026-03-25 06:24:39.534310 | orchestrator | 2026-03-25 06:24:39 - clean up servers
2026-03-25 06:24:40.293237 | orchestrator | 2026-03-25 06:24:40 - testbed-manager
2026-03-25 06:24:40.382462 | orchestrator | 2026-03-25 06:24:40 - testbed-node-0
2026-03-25 06:24:40.476369 | orchestrator | 2026-03-25 06:24:40 - testbed-node-3
2026-03-25 06:24:40.563055 | orchestrator | 2026-03-25 06:24:40 - testbed-node-2
2026-03-25 06:24:40.659806 | orchestrator | 2026-03-25 06:24:40 - testbed-node-5
2026-03-25 06:24:40.766110 | orchestrator | 2026-03-25 06:24:40 - testbed-node-4
2026-03-25 06:24:40.862400 | orchestrator | 2026-03-25 06:24:40 - testbed-node-1
2026-03-25 06:24:40.950005 | orchestrator | 2026-03-25 06:24:40 - clean up keypairs
2026-03-25 06:24:40.972509 | orchestrator | 2026-03-25 06:24:40 - testbed
2026-03-25 06:24:41.001932 | orchestrator | 2026-03-25 06:24:41 - wait for servers to be gone
2026-03-25 06:24:49.802234 | orchestrator | 2026-03-25 06:24:49 - clean up ports
2026-03-25 06:24:49.988156 | orchestrator | 2026-03-25 06:24:49 - 1ab17c7c-2549-4da4-a146-779432fb1b7d
2026-03-25 06:24:50.224972 | orchestrator | 2026-03-25 06:24:50 - 7621e2cc-ae7c-4b0e-b9f5-bdd713117f41
2026-03-25 06:24:50.539800 | orchestrator | 2026-03-25 06:24:50 - a665f337-096c-465b-b6bc-a9dacc616441
2026-03-25 06:24:50.740626 | orchestrator | 2026-03-25 06:24:50 - a8c89ded-c61a-4969-b963-29d4a9d385e6
2026-03-25 06:24:51.467896 | orchestrator | 2026-03-25 06:24:51 - ca8681fe-7223-4195-a961-24b6ba567b89
2026-03-25 06:24:51.758549 | orchestrator | 2026-03-25 06:24:51 - e1501d83-a934-4f3d-a995-eb36a67e4ef7
2026-03-25 06:24:52.144653 | orchestrator | 2026-03-25 06:24:52 - e90ee1b6-155d-4851-b3a8-4e740cc06b0b
2026-03-25 06:24:52.349685 | orchestrator | 2026-03-25 06:24:52 - clean up volumes
2026-03-25 06:24:52.467505 | orchestrator | 2026-03-25 06:24:52 - testbed-volume-4-node-base
2026-03-25 06:24:52.508966 | orchestrator | 2026-03-25 06:24:52 - testbed-volume-3-node-base
2026-03-25 06:24:52.547219 | orchestrator | 2026-03-25 06:24:52 - testbed-volume-2-node-base
2026-03-25 06:24:52.585869 | orchestrator | 2026-03-25 06:24:52 - testbed-volume-0-node-base
2026-03-25 06:24:52.626647 | orchestrator | 2026-03-25 06:24:52 - testbed-volume-1-node-base
2026-03-25 06:24:52.670108 | orchestrator | 2026-03-25 06:24:52 - testbed-volume-5-node-base
2026-03-25 06:24:52.711242 | orchestrator | 2026-03-25 06:24:52 - testbed-volume-manager-base
2026-03-25 06:24:52.752596 | orchestrator | 2026-03-25 06:24:52 - testbed-volume-5-node-5
2026-03-25 06:24:52.791768 | orchestrator | 2026-03-25 06:24:52 - testbed-volume-0-node-3
2026-03-25 06:24:52.833309 | orchestrator | 2026-03-25 06:24:52 - testbed-volume-3-node-3
2026-03-25 06:24:52.874746 | orchestrator | 2026-03-25 06:24:52 - testbed-volume-6-node-3
2026-03-25 06:24:52.915069 | orchestrator | 2026-03-25 06:24:52 - testbed-volume-8-node-5
2026-03-25 06:24:52.954290 | orchestrator | 2026-03-25 06:24:52 - testbed-volume-4-node-4
2026-03-25 06:24:52.994568 | orchestrator | 2026-03-25 06:24:52 - testbed-volume-7-node-4
2026-03-25 06:24:53.036363 | orchestrator | 2026-03-25 06:24:53 - testbed-volume-1-node-4
2026-03-25 06:24:53.077750 | orchestrator | 2026-03-25 06:24:53 - testbed-volume-2-node-5
2026-03-25 06:24:53.128957 | orchestrator | 2026-03-25 06:24:53 - disconnect routers
2026-03-25 06:24:53.238749 | orchestrator | 2026-03-25 06:24:53 - testbed
2026-03-25 06:24:54.204206 | orchestrator | 2026-03-25 06:24:54 - clean up subnets
2026-03-25 06:24:54.279819 | orchestrator | 2026-03-25 06:24:54 - subnet-testbed-management
2026-03-25 06:24:54.456768 | orchestrator | 2026-03-25 06:24:54 - clean up networks
2026-03-25 06:24:54.631324 | orchestrator | 2026-03-25 06:24:54 - net-testbed-management
2026-03-25 06:24:54.914484 | orchestrator | 2026-03-25 06:24:54 - clean up security groups
2026-03-25 06:24:54.958391 | orchestrator | 2026-03-25 06:24:54 - testbed-management
2026-03-25 06:24:55.086193 | orchestrator | 2026-03-25 06:24:55 - testbed-node
2026-03-25 06:24:55.196503 | orchestrator | 2026-03-25 06:24:55 - clean up floating ips
2026-03-25 06:24:55.228037 | orchestrator | 2026-03-25 06:24:55 - 81.163.192.44
2026-03-25 06:24:55.594492 | orchestrator | 2026-03-25 06:24:55 - clean up routers
2026-03-25 06:24:55.707717 | orchestrator | 2026-03-25 06:24:55 - testbed
2026-03-25 06:24:57.506466 | orchestrator | ok: Runtime: 0:00:17.802966
2026-03-25 06:24:57.510914 | 2026-03-25 06:24:57.511079 | PLAY RECAP 2026-03-25 06:24:57.511206 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0 2026-03-25 06:24:57.511272 | 2026-03-25 06:24:57.643271 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main] 2026-03-25 06:24:57.645738 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main] 2026-03-25 06:24:58.411562 | 2026-03-25 06:24:58.411733 | PLAY [Cleanup play] 2026-03-25 06:24:58.427697 | 2026-03-25 06:24:58.427868 | TASK [Set cloud fact (Zuul deployment)] 2026-03-25 06:24:58.482966 | orchestrator | ok 2026-03-25 06:24:58.492119 | 2026-03-25 06:24:58.492280 | TASK [Set cloud fact (local deployment)] 2026-03-25 06:24:58.527063 | orchestrator | skipping: Conditional result was False 2026-03-25 06:24:58.542731 | 2026-03-25 06:24:58.542972 | TASK [Clean the cloud environment] 2026-03-25 06:24:59.666284 | orchestrator | 2026-03-25 06:24:59 - clean up servers 2026-03-25 06:25:00.141036 | orchestrator | 2026-03-25 06:25:00 - clean up keypairs 2026-03-25 06:25:00.160832 | orchestrator | 2026-03-25 06:25:00 - wait for servers to be gone 2026-03-25 06:25:00.204886 | orchestrator | 2026-03-25 06:25:00 - clean up ports 2026-03-25 06:25:00.281073 | orchestrator | 2026-03-25 06:25:00 - clean up volumes 2026-03-25 06:25:00.344650 | orchestrator | 2026-03-25 06:25:00 - disconnect routers 2026-03-25 06:25:00.367780 | orchestrator | 2026-03-25 06:25:00 - clean up subnets 2026-03-25 06:25:00.390095 | orchestrator | 2026-03-25 06:25:00 - clean up networks 2026-03-25 06:25:00.544571 | orchestrator | 2026-03-25 06:25:00 - clean up security groups 2026-03-25 06:25:00.580279 | orchestrator | 2026-03-25 06:25:00 - clean up floating ips 2026-03-25 06:25:00.606583 | orchestrator | 2026-03-25 06:25:00 - clean up routers 2026-03-25 06:25:01.081123 | orchestrator | ok: Runtime: 0:00:01.354248 2026-03-25 06:25:01.086431 | 2026-03-25 
06:25:01.086598 | PLAY RECAP 2026-03-25 06:25:01.086719 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0 2026-03-25 06:25:01.086929 | 2026-03-25 06:25:01.240015 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main] 2026-03-25 06:25:01.242455 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main] 2026-03-25 06:25:02.035785 | 2026-03-25 06:25:02.035954 | PLAY [Base post-fetch] 2026-03-25 06:25:02.051365 | 2026-03-25 06:25:02.051504 | TASK [fetch-output : Set log path for multiple nodes] 2026-03-25 06:25:02.097537 | orchestrator | skipping: Conditional result was False 2026-03-25 06:25:02.112948 | 2026-03-25 06:25:02.113180 | TASK [fetch-output : Set log path for single node] 2026-03-25 06:25:02.151439 | orchestrator | ok 2026-03-25 06:25:02.160119 | 2026-03-25 06:25:02.160256 | LOOP [fetch-output : Ensure local output dirs] 2026-03-25 06:25:02.648375 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/ec9043456e244bf38728792be429bfda/work/logs" 2026-03-25 06:25:02.907875 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/ec9043456e244bf38728792be429bfda/work/artifacts" 2026-03-25 06:25:03.163620 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/ec9043456e244bf38728792be429bfda/work/docs" 2026-03-25 06:25:03.179136 | 2026-03-25 06:25:03.179261 | LOOP [fetch-output : Collect logs, artifacts and docs] 2026-03-25 06:25:04.110517 | orchestrator | changed: .d..t...... ./ 2026-03-25 06:25:04.110786 | orchestrator | changed: All items complete 2026-03-25 06:25:04.110881 | 2026-03-25 06:25:04.816571 | orchestrator | changed: .d..t...... ./ 2026-03-25 06:25:05.559904 | orchestrator | changed: .d..t...... 
./ 2026-03-25 06:25:05.579398 | 2026-03-25 06:25:05.579530 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir] 2026-03-25 06:25:05.612234 | orchestrator | skipping: Conditional result was False 2026-03-25 06:25:05.618297 | orchestrator | skipping: Conditional result was False 2026-03-25 06:25:05.640145 | 2026-03-25 06:25:05.640318 | PLAY RECAP 2026-03-25 06:25:05.640417 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0 2026-03-25 06:25:05.640461 | 2026-03-25 06:25:05.790688 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main] 2026-03-25 06:25:05.792588 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main] 2026-03-25 06:25:06.631493 | 2026-03-25 06:25:06.631652 | PLAY [Base post] 2026-03-25 06:25:06.646308 | 2026-03-25 06:25:06.646444 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes] 2026-03-25 06:25:07.599198 | orchestrator | changed 2026-03-25 06:25:07.610032 | 2026-03-25 06:25:07.610151 | PLAY RECAP 2026-03-25 06:25:07.610224 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0 2026-03-25 06:25:07.610291 | 2026-03-25 06:25:07.730108 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main] 2026-03-25 06:25:07.731290 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main] 2026-03-25 06:25:08.558028 | 2026-03-25 06:25:08.558248 | PLAY [Base post-logs] 2026-03-25 06:25:08.569140 | 2026-03-25 06:25:08.569285 | TASK [generate-zuul-manifest : Generate Zuul manifest] 2026-03-25 06:25:09.068642 | localhost | changed 2026-03-25 06:25:09.079113 | 2026-03-25 06:25:09.079260 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul] 2026-03-25 06:25:09.115124 | localhost | ok 2026-03-25 06:25:09.119325 | 2026-03-25 06:25:09.119446 | TASK [Set zuul-log-path fact] 2026-03-25 
06:25:09.136528 | localhost | ok 2026-03-25 06:25:09.147944 | 2026-03-25 06:25:09.148078 | TASK [set-zuul-log-path-fact : Set log path for a build] 2026-03-25 06:25:09.175302 | localhost | ok 2026-03-25 06:25:09.179266 | 2026-03-25 06:25:09.179391 | TASK [upload-logs : Create log directories] 2026-03-25 06:25:09.684153 | localhost | changed 2026-03-25 06:25:09.689196 | 2026-03-25 06:25:09.689358 | TASK [upload-logs : Ensure logs are readable before uploading] 2026-03-25 06:25:10.198068 | localhost -> localhost | ok: Runtime: 0:00:00.007181 2026-03-25 06:25:10.207451 | 2026-03-25 06:25:10.207664 | TASK [upload-logs : Upload logs to log server] 2026-03-25 06:25:10.805086 | localhost | Output suppressed because no_log was given 2026-03-25 06:25:10.808351 | 2026-03-25 06:25:10.808531 | LOOP [upload-logs : Compress console log and json output] 2026-03-25 06:25:10.869566 | localhost | skipping: Conditional result was False 2026-03-25 06:25:10.895575 | localhost | skipping: Conditional result was False 2026-03-25 06:25:10.905420 | 2026-03-25 06:25:10.905588 | LOOP [upload-logs : Upload compressed console log and json output] 2026-03-25 06:25:10.969590 | localhost | skipping: Conditional result was False 2026-03-25 06:25:10.969905 | 2026-03-25 06:25:10.974716 | localhost | skipping: Conditional result was False 2026-03-25 06:25:10.988797 | 2026-03-25 06:25:10.989048 | LOOP [upload-logs : Upload console log and json output]